2404.14719

Source Code Vulnerability Detection: Combining Code Language Models and Code Property Graphs

Ruitong Liu, Yanbin Wang, Haitao Xu, Bin Liu, Jianguo Sun, Zhenhao Guo, Wenrui Ma

Abstract—Currently, deep learning successfully applies to code vulnerability detection by learning from code sequences or property graphs. However, sequence-based methods often overlook essential code attributes such as syntax, control flow, and data dependencies, whereas graph-based approaches might underestimate the semantics of code and face challenges in capturing long-distance contextual information. To address this gap, we propose Vul-LMGNN, a unified model that combines pre-trained code language models with code property graphs for code vulnerability detection. Vul-LMGNN constructs a code property graph that integrates various code attributes (including syntax, flow control, and data dependencies) into a unified graph structure, thereafter leveraging a pre-trained code model to extract local semantic features as node embeddings in the code property graph. Furthermore, to effectively retain dependency information among the various attributes, we introduce a gated code Graph Neural Network (GNN). By jointly training the code language model and the gated code GNN modules in Vul-LMGNN, our proposed method efficiently leverages the strengths of both mechanisms. Finally, we utilize a pre-trained CodeBERT as an auxiliary classifier, with the final detection results derived by learning the linear interpolation of Vul-LMGNN and CodeBERT. The proposed method, evaluated across four real-world vulnerability datasets, demonstrated superior performance compared to six state-of-the-art approaches. Our source code can be accessed via the link: https://github.com/Vul-LMGNN/vul-LMGGNN.

I. INTRODUCTION

With the rapid expansion of the open-source community, software vulnerability detection technology has become a significant concern in the software industry and the cybersecurity domain. Vulnerabilities pose a threat to the integrity and availability of software and computer systems, potentially leading to privilege escalation, leakage of sensitive data, denial of service, and various other attacks, resulting in substantial economic and societal losses [1]. In practice, developers and security engineers primarily rely on code analysis or testing tools to detect and repair bugs, such as rule-based analysis and symbolic execution [2]. However, these methods require extensive manual verification due to their high false-positive rates.

To improve the efficiency of code vulnerability detection, extensive research has leveraged deep learning (DL) models for automated vulnerability detection. These methods extract features from the source code to generate initial embedding vectors, which are then fed into neural networks to learn vulnerability patterns and produce classification results, thereby achieving automatic detection capabilities [3].

Deep learning-based methods for code vulnerability detection are primarily divided into two types: sequence-based and graph-based approaches. Sequence-based approaches process the source code or its structures (e.g., Abstract Syntax Trees, AST) into serialized forms and interpret individual elements as tokens, which could be entire lines of code or segments divided by spaces [4], [5]. Neural networks like RNNs, LSTMs [6], [7], GRUs, and CNNs [8] are employed for detecting and classifying vulnerabilities by extracting sequence features from the code. Although sequence-based approaches exhibit strengths in learning the contextual information of code, they fall short in effectively capturing the program's hierarchical structures, execution flows, and data and control dependencies.

Graph-based methods transform source code into heterogeneous graph structures, such as the AST, Control Flow Graph (CFG), and Program Dependence Graph (PDG), to efficiently capture both local structures and dependencies within the code. These graphical representations enrich the analysis by providing intricate syntactic and semantic connections beyond mere code sequences. Leveraging code graphs, models based on GNNs have demonstrated their effectiveness in extracting structural insights for vulnerability detection, as evidenced by research conducted by Wang et al. [9] and Zhou et al. [6]. Although graph-based methods provide valuable insights, they often overlook subtle coding patterns and long-distance contexts, and their process of abstraction can lead to the loss of specific logic and behaviors in the code.

To address these challenges, we propose Vul-LMGNN, a novel vulnerability detection approach that combines the strengths of both pre-trained code language models (code-PLMs) and GNNs. Vul-LMGNN constructs a code property graph (CPG) that merges ASTs, CFGs, and Program Dependency Graphs, initializes node embeddings with a pre-trained CodeBERT, and utilizes a Gated Graph Neural Network (GGNN) for vulnerability detection. By jointly training CodeBERT with the GGNN, the proposed method implicitly fuses contextual information from code sequences with the diverse information within the code property graph. Our contributions in this paper are as follows:

• The proposed approach achieves state-of-the-art performance across four public datasets, outperforming previous methods. Notably, it achieves a ~10% higher F1 score on small-scale datasets.

• We introduce the Gated Code GNN, which leverages a gating mechanism to capture dependency information within the code property graph, thereby effectively aggregating syntax, control flow, and data flow information.

• We propose a joint training method that combines pre-trained code models with Gated GNNs, successfully capturing the benefits of both the code sequence and the property graph.
• We introduce an auxiliary classifier designed to enhance our proposed Vul-LMGNN model by integrating predictions using linear interpolation. This augmentation further improves Vul-LMGNN's performance through the explicit fusion of predictions from two classifiers.

The rest of this paper is structured as follows: Section II provides an overview of the background and related work. Section III outlines the composition of the datasets. Section IV delves into the design specifics of our model. Section V presents the experimental outcomes and evaluates our model's performance relative to the baseline methods across the datasets. Finally, Section VI summarizes the paper and outlines directions for future research.

TABLE I
SUMMARY OF DATASETS

Dataset    | #Vulnerable | #Non-Vul  | Source         | CWEs
DiverseVul | 18,945      | 330,492   | Snyk, Bugzilla | 150
Devign     | 11,888      | 14,149    | GitHub         | N/A
VDSIC      | 82,411      | 1,191,955 | GitHub, Debian | 4
ReVeal     | 1,664       | 16,505    | Chrome, Debian | N/A

II. RELATED WORK

In this section, we review the works most relevant to our study, focusing on those based on deep learning techniques. These can be broadly categorized into two groups: sequence-based approaches and graph-based approaches.

A. Sequence Based Models

Current studies based on deep sequence models generally follow the process of preprocessing, vectorization, and neural network modeling [4]. In data preprocessing, the raw source code is subjected to slicing and normalization techniques, after which it is parsed into a sequence of tokens. Subsequently, these tokens are transformed into vectors suitable for neural network processing. RNN- and transformer-based models are used to learn contextual information within token sequences and to make the final defect prediction. RNN-based works, such as VulDeePecker [7] and SySeVR [10], have introduced lexical analysis, which converts the source code into more fine-grained code snippets. A potential concern with code slicing is that the extracted code representations may not encompass all vulnerable code snippets. By contrast, transformer-based methods utilize token vectorization techniques that extract more vulnerability-aware features; they often omit code slicing and normalization strategies, opting instead to directly tokenize the source code. Guo et al. [11] introduced CodeBERT, a cross-lingual pre-trained programming language model that incorporates edge prediction and node alignment tasks during training. Additionally, GraphCodeBERT [12] utilizes data flow in the pre-training stage. Both can be applied to downstream detection tasks. Other methods have adopted different tokenization strategies from the NLP domain; for instance, CodeT5 [13] uses byte-level byte-pair encoding (BPE) [14] to segment the code into tokens, while CoTEXT [15] opts for the SentencePiece [16] model to extract tokens. These methods have been proven to be effective.

B. Graph Based Models

GNN-based methods also consist of three steps: preprocessing, vectorization, and neural network modeling [17], [18], [19]. During data preprocessing, the source code is transformed into various graph representations, such as the AST, CFG, PDG, and CPG [20]. Then, the nodes and edges are converted into vectors, enabling the graph to be fed into a GNN model, which can learn structural information and make the final prediction. The CPG is a comprehensive code representation that combines the abstract syntax tree, control flow graph, and program dependency graph, encapsulating both the syntactic and structural information of the source code [4]. Methods like AI4VA [21] and those proposed by Feng et al. [22] directly use the original versions of the four basic graphs as their code representations. Devign [23] was the first to employ a GNN for code vulnerability detection tasks, incorporating Natural Code Sequence (NCS) edges into the CPG. Chakraborty et al. [24] proposed the ReVeal algorithm, which combines gated GNNs with multilayer perceptrons; FUNDED [9] introduced an enhanced AST with eight additional edge types. Unlike the aforementioned strategies that add structural information, VulSPG [25] suggests eliminating code unrelated to vulnerabilities: it performs graph slicing on the CPG to generate the SPG.

GNNs struggle to capture the contextual relationships between distantly connected nodes, a limitation that models based on the Transformer architecture can effectively overcome. This insight led to our approach of integrating pre-trained code language models with code graph models. Our method utilizes a pre-trained code language model to initialize the embeddings of nodes in the code graph, jointly training the system to transfer knowledge from pre-trained code sequences to the code GNN, thus reaping the benefits of both worlds.

III. DATASET REVIEW

To evaluate our proposed code vulnerability detection method and other baseline methods, it is imperative to possess a substantial quantity of both vulnerable and non-vulnerable source code, spanning a diverse range of vulnerabilities. In this paper, we have selected four public code vulnerability datasets, which include three widely used popular datasets and one newly released comprehensive dataset. We have summarized the distribution of positive and negative samples, the sources of the datasets, and whether they distinguish specific types of vulnerability in Table I.

The DiverseVul [26] dataset is a newly released dataset of vulnerable source code. It has been curated by crawling the two security-issue websites that feature the most commits in git systems, extracting commits that fix vulnerabilities and the corresponding source code from the projects. The dataset also employs deduplication of functions based on their MD5 hashes.
This dataset comprises 18,945 vulnerable functions spanning over 150 CWEs, and 330,492 non-vulnerable functions extracted from 7,514 commits. The range of projects covered by this dataset exceeds the total of all previous datasets by 295. This dataset's substantial volume and diversity present a challenge for vulnerability detection methodologies.

The Devign dataset encompasses real-world function examples from GitHub, harvested from four renowned and diverse open-source libraries: Linux, FFmpeg, Qemu, and Wireshark. These examples are manually labeled based on commit messages and code differences. However, it does not provide information on the type of vulnerability or fine-grained labels. Additionally, this dataset is part of a programming language understanding evaluation benchmark known as CodeXGLUE [27], and it has been extensively used by various methods.

The Draper VDISC dataset [3] is an extensive collection of 1.27 million functions extracted from open-source software, annotated with insights from three distinct static analyzers to flag potential vulnerabilities. It encompasses the four most common CWEs: CWE-120, CWE-119, CWE-469, and CWE-476. Notably, the dataset exhibits a highly imbalanced distribution of positive and negative samples, with a ratio nearing 1:14.5. This imbalance could adversely affect the measured performance of the tested models; therefore, in this paper, we have utilized a pre-processed version of the dataset with a more balanced distribution.

REVEAL [24] is a comprehensive real-world dataset, amassed by monitoring historical vulnerabilities from two prominent open-source projects: the Linux Debian Kernel and Chromium. It involves the extraction of the respective vulnerable and fixed versions of C/C++ source and header files that have been modified in patches, serving as positive and negative samples for research.

Fig. 1. Overview of the Vul-LMGNN Vulnerability Detection Framework. (The diagram shows the pipeline phases: node feature extraction via tokenization and vectorization of the natural code sequence, node embedding, graph embedding through a GGNN with GRU units, aggregation, and convolution, joint training with a trade-off, and the detection phase.)

Fig. 2. A CPG for the example source code. Edge-type legend: Blue = AST, Red = CFG, Purple = PDG. The example source code is:

    int main(int argc, char **argv)
    {
        char *str;
        if (argc > 1) {
            str = argv[1];
            test(str);
        }
        return 0;
    }

IV. VUL-LMGNN

In this section, we provide a detailed exposition of how Vul-LMGNN integrates pre-trained code language models with GNNs to achieve both implicit and explicit fusion of information. For a clearer understanding, our explanation is divided into several sections: code representation, creation of the code property graph, node embedding initialization, the operation of the gated code GNN (including its joint training with code language models), and interpolating predictions.

A. Code Representation

The purpose of this phase is to transform the original function-level source code into fixed-length feature vectors that contain both semantic and syntactic structural information. Such conversion prepares a suitable data format for efficient processing by the GNN models and code language models that follow. To achieve this, we adopt two specialized code representation strategies for GNNs and code sequence language models.

For code graph representation: we employ the open-source code analysis tool Joern [28] to parse the source code and generate the CPG. This CPG provides a unified and concise representation that combines control and data flow with abstract syntax trees and dependency graphs. We rigorously exclude functions with errors in graph generation to ensure data quality.

For code sequence representation: we adhere to the approach presented in [23], converting function-level code into natural code sequences. This method serializes the code in alignment with the natural order of the source code, thereby preserving the logical sequence of the code.
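To make the graph side of this representation concrete, the following is a minimal sketch (not the authors' code) of how a CPG like the one in Fig. 2 could be held in memory as a typed multigraph; the use of the networkx library and the particular node labels are illustrative assumptions.

    import networkx as nx

    # Build a tiny CPG-like multigraph: nodes carry a type and a code fragment,
    # edges carry one of the three CPG relation kinds (AST / CFG / PDG).
    cpg = nx.MultiDiGraph()
    cpg.add_node(0, type="FUNC", code="int main(int argc, char **argv)")
    cpg.add_node(1, type="DECL", code="char *str")
    cpg.add_node(2, type="PRED", code="argc > 1")
    cpg.add_node(3, type="STMT", code="str = argv[1]")
    cpg.add_node(4, type="CALL", code="test(str)")

    cpg.add_edge(0, 1, kind="AST")  # syntactic containment
    cpg.add_edge(2, 3, kind="CFG")  # execution order (true branch)
    cpg.add_edge(3, 4, kind="PDG")  # data dependency on `str`

    # Group edges by kind, as a gated GNN with per-edge-type aggregation expects.
    edges_by_kind = {}
    for u, v, data in cpg.edges(data=True):
        edges_by_kind.setdefault(data["kind"], []).append((u, v))
    print(edges_by_kind)

In practice Joern emits a far richer node-type vocabulary; the point of the sketch is only that each node pairs a type with a code fragment, and each edge is tagged with the relation it encodes.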
We distribution of positive and negative samples, with a ratio rigorously exclude functions with errors in graph generation nearing 1:14.5. This imbalance could adversely affect the real to ensure data quality. performance of our testing models. Therefore, in this paper, For code sequence representation: we adhere to the we have utilized a pre-processed version of the dataset with a approach presented in [23], by converting function-level code4 Algorithm 1: Vul-LMGNN: Code Vulnerability De- tection Input: Train data - D GRU Target Node train 1 Contribution of triplet loss - α GRU GRU 2 Contribution of regularization loss - β Neighbor Neighbor 3 Separation boundary - γ GRU GRU 4 5 L Tre aa drn eoin ffg pr aa rte am- el tr er - λ C sno id pe pet <code> … ’=GRU ( , σ ) Output: Trained model graph node encoded feature gate recurrent unit node embedding 6 Function Vul-LMGNN(): (a) Node Feature Extraction (b) GGNN Node Embedding 7 features←∅ 8 labels←∅ Fig.3. Featureextractionandnodeembeddingphases. 9 ▷ Features extraction process 10 for (C,L)∈D train do 11 (V,E)←extract code property graph(C) The representation of the CPG is denoted as G = (V,E), 12 for v ∈V do where V represents the nodes within the graph and E means 13 T v ←onehot(v.type()) the edges. Each vertex V in the CPG encompasses the vertex 14 C v,S v ←CodeBERT(v.fragment(),C) type and a segment of the original code. As illustrated in the 15 x v =concat(T v,C v) Fig.2,thenodesandblueedgesrepresenttheASTstructureof 16 end this function segment, with the purple edges marked “D ” argv 17 X˜ =GGNN(x v,E) indicating the data dependency from the subtree defining 18 x g =Aggregate(X˜) variable argv to the subtree using the defined value. The red 19 features←features∪x g∪S v edges denote the execution order within the function. 20 labels←labels∪L For the node set V, every node v ∈V can contain various 21 end types of information depending on its source, such as AST, 22 M ←Combined-RepresentationModel() CFG, or PDG. This includes CPG node type identifiers such 23 ▷ Model training process as IdentifierDeclType or keywords such as int,char,for, |
24 for (f g,l g)∈D train do or operators such as +,−. 25 ▷ Define the loss function. 26 L all ←loss function(M,D train,f g,l g,α,β,γ,λ) C. Initializing Node Embeddings with CodeBERT 27 θ represents the model parameters of M . Previous methods for generating node embeddings often Combined 28 θ ←θ−∇ θ(L all) involvedtrainingstaticwordembeddingmodelslikeWord2vec 29 end [29] on a dataset of code snippets to produce vectors for 30 return M θ each code token. In contrast, our approach seeks to harness the power of large-scale pre-trained code language models, drawing on arge-scale pretraining to acquire prior knowledge for initializing code graph node embeddings. This is achieved into natural code sequences. This method serializes the code bysynergisticallytrainingthepre-trainedcodelanguagemodel inalignmentwiththenaturalorderofthesourcecode,thereby and GNNs on target datasets to jointly optimize node repre- preserving the logical sequence of the code. sentations. Specifically, we use the pre-trained programming language model CodeBERT [11] for initializing graph node B. Code Property Graphs embeddings, as depicted in Fig. 3. IntheprocessofgeneratingCPG,functionsaretransformed Specifically, we start by decomposing the function into a into comprehensive graphs that comprise various types of sequence of statement sets C =c ,c ,c ,...,c , where each 1 2 3 n nodes, such as variables and function calls, and edges, in- c is directly mapped to a node v within the CPG. This i i cluding control flow and data flow, which convey distinct mapping ensures that the complex structure of a function is types of information. At the core of the CPG, the AST representedasaninterconnectedgraphofsimpler,manageable captures the syntactic information, modeling the hierarchical elements. Each statement set is tokenized using CodeBERT’s structure of functions in a way that outlines the grammar and pretrained Byte Pair Encoding (BPE) tokenizer [30], convert- composition of the code. However, since the AST primarily ing the statement into a series of tokens. offers a static representation, it lacks the capacity to infer Following this, we initialize the self-embedding layer the program’s dynamic behavior. To address this, CPGs in- weightsusingCodeBERT’strainedwordembeddingsforeach corporate additional types of edges to represent data flow and token and employ label encoding for node type embeddings. controlflow,therebyenrichingthegraphwithinsightsintothe In parallel, efforts are made to fine-tune CodeBERT on our execution context and dependencies between code segments. target dataset,intending to tailor the model’sunderstanding to This integration frames a more holistic understanding of both our specific domain and thereby enhance the accuracy of the the static structure and dynamic behavior of the program. tokenvectorizationprocess.Inspiredby[21],weremovecode5 propertiesfromnon-leafnodesintheCPG,astheseproperties Subsequently,weadoptatrainingmechanismsimilartothat are often redundantly encoded in the leaf nodes. Finally, the of [23], [3], which deconstructs the task into ’learning code node content embeddings derived from CodeBERT and the representation’ and ’learning vulnerability’. This approach node type embeddings obtained through label encoding are introduced an output layer designed to highlight the nodes concatenated to form a comprehensive initial representation with the most significant information for the task of vulner- for each node. ability detection. 
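As an illustration of this initialization step, here is a minimal sketch assuming the public microsoft/codebert-base checkpoint from Hugging Face; the type vocabulary, the mean-pooling of hidden states, and the concatenation layout are illustrative assumptions rather than the paper's exact configuration.

    import torch
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
    codebert = AutoModel.from_pretrained("microsoft/codebert-base")

    NODE_TYPES = ["FUNC", "DECL", "PRED", "STMT", "CALL"]  # assumed type vocabulary

    def init_node_embedding(fragment: str, node_type: str) -> torch.Tensor:
        # Content embedding C_v: mean-pool CodeBERT's last hidden states over
        # the BPE tokens of the statement fragment.
        inputs = tokenizer(fragment, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = codebert(**inputs).last_hidden_state  # (1, seq_len, 768)
        content = hidden.mean(dim=1).squeeze(0)            # (768,)

        # Type embedding T_v: one-hot label encoding of the CPG node type.
        type_vec = torch.zeros(len(NODE_TYPES))
        type_vec[NODE_TYPES.index(node_type)] = 1.0

        # Concatenate type and content, as in x_v = concat(T_v, C_v).
        return torch.cat([type_vec, content])

    x_v = init_node_embedding("str = argv[1];", "STMT")
    print(x_v.shape)  # torch.Size([773])

In the joint-training setting these CodeBERT weights stay trainable, so the node vectors keep improving as the detector is optimized.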
D. Gated Code Graph Neural Network

In this section, we leverage GGNNs to explore CPGs, utilizing their advanced capabilities to discern patterns of information flow across nodes, thereby revealing structural insights pertinent to code properties.

GGNNs are fed with the feature vectors of all the nodes alongside the graph edges. For a specified embedded graph g_i(V, X, A), where V indicates the nodes, X their features, and A the adjacency relationships, the GGNN assigns a Gated Recurrent Unit (GRU) to each node v_j ∈ V. This GRU updates the current vertex embedding by integrating the embeddings of all its neighboring nodes. Specifically, the initial state vector for a node, h_j^{(1)} ∈ R^z with z ≥ d, is initialized by copying x_j into the first dimensions and padding with additional zeros. To update the node embeddings, we employ a neighborhood aggregation scheme: at each node, messages are aggregated and subsequently utilized to update the associated node representation at the subsequent embedding layer. Formally,

    a_{v,g}^t = A_{(v,g)}^T [h_1^{(t-1)T}, ..., h_m^{(t-1)T}] + b    (1)

To be specific, t represents a specific time step, b denotes the bias vector, and A is the adjacency matrix. The subsequent state a_{v,g}^t of node v_j is computed by aggregating the information from all neighboring nodes as defined in the adjacency matrix A_{(v,g)} for a particular edge type.

Subsequently, the GRU algorithm is used to aggregate and update the states for identical nodes across different graphs. The process is articulated as follows:

    z_{v,g}^t = σ(W^z · AGG(a_{v,g}^t) + U^z h_{v,g}^{(t-1)})    (2)

    r_{v,g}^t = σ(W^r · AGG(a_{v,g}^t) + U^r h_{v,g}^{(t-1)})    (3)

    h̃_{v,g}^t = tanh(W · AGG(a_{v,g}^t) + U(r_{v,g}^t ◦ h_{v,g}^{(t-1)}))    (4)

    h_{v,g}^t = (1 − z_{v,g}^t) ◦ h_{v,g}^{(t-1)} + z_{v,g}^t ◦ h̃_{v,g}^t    (5)

where h_{v,g}^{(t-1)} is the hidden state of node v in graph g, and z_{v,g}^t and r_{v,g}^t are the update gate and reset gate, respectively. h̃_{v,g}^t is the candidate hidden state, and h_{v,g}^t is the output hidden state. AGG denotes the aggregation function, which is utilized to compile information from the various edge types; in our application, we have employed the SUM [23] function. The final step involves aggregating all vertex embeddings into a single vector to represent the entire CPG. Specifically,

    H_{(v,g)}^{(T)} = Σ_{v∈V} h_{v,g}^t    (6)

Subsequently, we adopt a training mechanism similar to that of [23], [3], which deconstructs the task into "learning code representation" and "learning vulnerability". This approach introduces an output layer designed to highlight the nodes carrying the most significant information for the task of vulnerability detection. We utilize convolution and max-pooling operations, commonly employed in CNNs. α(·) is defined as a one-dimensional convolutional layer accompanied by max pooling, denoted as:

    α(·) = MAXPOOL(ReLU(CONV(·)))    (7)

Given the total number of time steps T of the GGNN and the number of applications l of α(·), the Conv module is represented as:

    Z_i^{(1)} = α([H_{(v,g)}^{(T)}, x_i]), ..., Z_i^{(l)} = α(Z_i^{(l−1)})    (8)

    Y_i^{(1)} = α(H_{(v,g)}^{(T)}), ..., Y_i^{(l)} = α(Y_i^{(l−1)})    (9)

where we apply 1-D convolutional and dense layers to [H_{(v,g)}^{(T)}, x_i] and H_{(v,g)}^{(T)}. Afterward, we perform a pairwise multiplication of the two outputs and make a prediction.
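The propagation step in Eqs. (1)-(5) can be sketched compactly in PyTorch. This is a minimal illustration rather than the authors' implementation: it assumes a single edge type, sum aggregation, and uses nn.GRUCell, which fuses the update/reset gates of Eqs. (2)-(5) into one module.

    import torch
    import torch.nn as nn

    class GatedPropagation(nn.Module):
        """One GGNN propagation step: message passing (Eq. 1) + GRU update."""
        def __init__(self, dim: int):
            super().__init__()
            self.msg = nn.Linear(dim, dim)   # edge-wise message transform
            self.gru = nn.GRUCell(dim, dim)  # gates of Eqs. (2)-(5), fused

        def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # h: (num_nodes, dim) node states; adj: (num_nodes, num_nodes).
            a = adj @ self.msg(h)            # aggregate neighbor messages (SUM)
            return self.gru(a, h)            # h^t from a^t and h^(t-1)

    h = torch.randn(5, 64)                   # 5 nodes from the example CPG
    adj = torch.eye(5)                       # placeholder adjacency matrix
    layer = GatedPropagation(64)
    for _ in range(4):                       # T = 4 time steps, shared weights
        h = layer(h, adj)
    readout = h.sum(dim=0)                   # Eq. (6): graph-level embedding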
E. Joint Training of CodeBERT and GGNN

In our joint training approach, we optimize the parameters of both CodeBERT and the GGNN, leveraging the complementary strengths of each model (CodeBERT's contextual understanding and the GGNN's relational insights) to improve the model's performance in detecting code vulnerabilities. This joint optimization strategy is implemented through the use of a cross-entropy loss across the code graph nodes, allowing for the simultaneous optimization of the parameters of CodeBERT and the GGNN. The formulated loss function can be depicted as:

    L = −Σ_{c=1}^{M} y_{ic} log(Softmax(MLP(Z_i^{(l)}) ⊙ MLP(Y_i^{(l)}))_c)    (10)

where M represents the number of classes, and y_{ic} is a binary indicator (0 or 1) of whether class label c is the correct classification for observation i. In this training process, CodeBERT updates the node embeddings with each iteration, thereby gradually improving the complementary advantages of both CodeBERT and the GGNN.

F. Interpolating Predictions

In the previous step, we implicitly combined CodeBERT and the GGNN by utilizing CodeBERT to generate the node embeddings for the GGNN. Here, we further explicitly combine the benefits of the pre-training and graph-based approaches by leveraging interpolated predictions. Specifically, we introduce an auxiliary classifier that operates directly on CodeBERT embeddings by feeding the code embeddings E into a dense layer with softmax activation. Ultimately, we perform a linear interpolation [31] of the predictions from Vul-LMGNN and CodeBERT, which is expressed as follows:

    Pred = λ · Pred_GGNN + (1 − λ) × Softmax(WE)    (11)

The parameter λ controls the trade-off between the two objectives. A value of λ = 1 signifies the exclusive use of the full Vul-LMGNN model, whereas λ = 0 indicates reliance solely on the CodeBERT module. When λ is within the range (0, 1), it allows for a balanced integration of the predictions from both models. The fine-tuned CodeBERT model regulates and optimizes the input graph for the GGNN.
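The interpolation in Eq. (11) reduces to a few lines of code. The following is a minimal sketch assuming a binary task and a 768-dimensional CodeBERT embedding; the dense-layer shape and dimensions are illustrative.

    import torch
    import torch.nn as nn

    num_classes, hidden = 2, 768
    aux_head = nn.Linear(hidden, num_classes)  # dense layer on CodeBERT embedding E

    def interpolate(pred_gnn: torch.Tensor, E: torch.Tensor, lam: float) -> torch.Tensor:
        # pred_gnn: (batch, num_classes) probabilities from the GGNN branch.
        # E: (batch, hidden) CodeBERT code embeddings.
        aux = torch.softmax(aux_head(E), dim=-1)   # Softmax(WE)
        return lam * pred_gnn + (1.0 - lam) * aux  # Eq. (11)

    pred = interpolate(torch.softmax(torch.randn(4, 2), -1), torch.randn(4, 768), lam=0.8)

Because λ only blends two fixed probability vectors, it can be tuned cheaply on a validation split after both branches are trained.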
Subsequently, the interpolated prediction facilitates an appropriate trade-off between the graph-model and sequence-model detection results, yielding strong detection outcomes.

V. EXPERIMENTS RESULTS

In this section, we present the experimental setup and the outcomes of our evaluations of the proposed model, alongside six state-of-the-art baselines across four datasets. We have formulated the following four Research Questions (RQs) and address them through our experimental investigations:

• RQ1: How does the performance of Vul-LMGNN compare with other learning-based methods for vulnerability identification?
• RQ2: As the trade-off parameter varies, what changes can be observed in the model's performance?
• RQ3: Is fine-tuning the pre-trained model a more effective way to vectorize tokens for node embedding than using the initial word embedding weights?
• RQ4: How do different GNN architectures and pre-trained models influence the overall performance of the model?

The experiments were executed on a single NVIDIA A100 80GB GPU. The system specifications comprised NVIDIA driver version 525.85.12 and CUDA version 11.8. The software environment was configured with Python 3.10.13 and torch 2.2.0.

In our comparative analysis, Vul-LMGNN is benchmarked against the latest state-of-the-art detection models. This includes Transformer-based models: Bert [32], CodeBert, and GraphCodeBert; GNN-based models: TextGCN [33] and Devign; as well as the CNN-based model TextCNN [34]. For computational efficiency, functions with a node size exceeding 500 in the CPG were excluded from our analysis. In terms of our model's configuration, the learning rate and batch size were set to 1e−4 and 64, respectively.
The training was conducted over 20 epochs, with an early-stopping criterion triggered if no further improvement in performance was observed. Specifically for the Devign model, the AST was employed for the code graph representation; given the absence of disclosed hyperparameters, we endeavored to replicate their methodology to the best of our ability. The following are the details of the baselines:

• Bert: A powerful pre-trained language model developed by Google, widely used for natural language understanding tasks.
• CodeBERT: A language model specifically fine-tuned for code-related tasks, including code summarization and code completion.
• GraphCodeBERT: A pre-trained programming language model, expanding upon CodeBERT to integrate code data flow information into the training objective.
• TextCNN: A CNN architecture from the field of natural language processing, widely used in code vulnerability detection [35], [36].
• TextGCN: An advanced method for learning graph representations from text, showcasing exceptional performance in code-related tasks.
• Devign: A gated GNN-based model, which takes a code property graph as input and employs 1-D convolutional pooling to make predictions.

Fig. 4. Detection accuracy for the top-30 high-frequency CWE vulnerability types in DiverseVul. (Bar chart of per-CWE scores for CodeBERT, GraphCodeBERT, and CodeBERT+GGNN; the individual bar values are not recoverable here.)

TABLE II
PERFORMANCE METRICS OF VARIOUS MODELS ON DATASETS WITH SPECIFIC CWES.

DiverseVul:
Model         | ACC(%) | P(%)  | R(%)  | F1(%)
Bert          | 91.99  | 27.95 | 13.09 | 17.83
CodeBert      | 92.40  | 28.26 | 20.02 | 23.44
GraphCodeBert | 92.96  | 31.14 | 16.30 | 21.40
TextCNN       | 92.16  | 10.25 |  9.82 | 10.03
TextGCN       | 91.50  | 15.66 | 11.50 | 13.27
Devign(AST)   | 70.21  |  9.35 |  9.22 |  9.28
Our           | 93.06  | 32.21 | 18.54 | 23.54

Draper VDSIC:
Model         | ACC(%) | P(%)  | R(%)  | F1(%)
Bert          | 79.41  | 81.86 | 75.97 | 78.80
CodeBert      | 83.13  | 86.13 | 78.97 | 82.39
GraphCodeBert | 83.98  | 84.74 | 83.17 | 83.95
TextCNN       | 66.54  | 65.36 | 70.55 | 67.86
TextGCN       | 67.55  | 67.66 | 67.63 | 67.64
Devign(AST)   | 59.30  | 58.84 | 68.93 | 63.49
Our           | 84.38  | 87.37 | 80.64 | 83.87

TABLE III
PERFORMANCE METRICS OF VARIOUS MODELS ON DATASETS WITH NO SPECIFIC CWES.

Devign:
Model         | ACC(%) | P(%)  | R(%)  | F1(%)
Bert          | 60.58  | 57.67 | 54.64 | 56.11
CodeBert      | 63.93  | 60.30 | 63.00 | 61.62
GraphCodeBert | 64.80  | 64.37 | 54.38 | 58.96
TextCNN       | 60.38  | 59.03 | 57.72 | 58.37
TextGCN       | 60.47  | 60.87 | 58.58 | 59.70
Devign(AST)   | 57.66  | 56.96 | 56.25 | 56.60
Our           | 65.70  | 64.53 | 56.34 | 60.16

ReVeal:
Model         | ACC(%) | P(%)  | R(%)  | F1(%)
Bert          | 86.88  | 32.70 | 40.13 | 36.04
CodeBert      | 88.64  | 38.26 | 38.13 | 38.19
GraphCodeBert | 89.25  | 41.67 | 41.81 | 41.74
TextCNN       | 85.43  | 26.32 | 20.33 | 22.94
TextGCN       | 87.25  | 24.61 | 17.85 | 20.69
Devign(AST)   | 65.47  | 17.38 | 18.09 | 17.72
Our           | 90.80  | 57.09 | 46.45 | 51.22

A. Comparison with Baselines (RQ1)

To evaluate the performance of Vul-LMGNN on code vulnerability detection, we executed an extensive comparative analysis against the six baseline models utilizing the four datasets delineated in Table I. The experimental results are systematically presented in Tables II and III.

We initially tested Vul-LMGNN on the datasets categorized by specific CWEs and analyzed its capability to recognize these CWEs within the test set. In terms of accuracy, precision, and F1 score, Vul-LMGNN outperformed all baseline models. Specifically, within the DiverseVul dataset, our model achieved an accuracy of 93.06% and an F1 score of 23.54%; on the balanced version of the VDSIC dataset, an accuracy of 84.38% was attained.

As shown in Fig. 4, among the top 30 most frequently occurring CWEs in the test set, our model achieved the highest accuracy rate on 50% of the CWEs. It can be observed that for some CWEs the recognition accuracy of the model is generally low, such as CWE-310 (Cryptographic Issues) and CWE-189 (Numeric Errors), while another subset of CWEs shows high recognition accuracy rates, such as CWE-134 (Controlled Format String) and CWE-770 (Allocation of Resources Without Limits or Throttling).

Among the baseline sequence-based detection models, CodeBert and GraphCodeBert showcased superior detection capabilities owing to their programming-language pre-training tasks, despite their pre-training datasets not containing C/C++ programs. As a component of our model, CodeBert attained accuracies of 92.40% and 83.13%, and precisions of 28.26% and 86.13%, respectively, values that are marginally lower than our model's by 0.66% and 1.25% in accuracy and by 3.95% and 1.67% in precision. GraphCodeBert, with its further integration of data flow information, outperformed CodeBert, reducing the precision gap with our model to 1.07%, although the recall gap widened to 2.24%.

In the realm of graph-based detection models, TextGCN, while performing well in text classification, showed mediocre results in the code vulnerability detection experiments, achieving only a 91.50% accuracy rate on DiverseVul, with precision and recall at 15.66% and 11.50%, respectively. This may be due to TextGCN's focus on word co-occurrence, which lacks structural code information. The AST version of Devign, which incorporates control flow and data flow information and uses Word2Vec along with the average of tokens for the node vector representation, performed poorly, with an accuracy of only 70.21% and a gap of 14.26% from our F1 score. This could be attributed to its neglect of the local semantic information of the code within each node. These disparities were more pronounced on the VDSIC dataset, with gaps in accuracy reaching 16.83% and 25.08% compared to our model, respectively.

The performance disparities on the other two datasets, which lack specific CWE labels, are similar to those observed in the previous CWE-specific evaluations, as illustrated in Table III.

Overall, the transformer-based models demonstrated better detection effectiveness, as the embedding layer of a transformer can implicitly capture vulnerability-related signals from the source code. In contrast, methods like Word2Vec, trained by predicting adjacent tokens, extract contextual information that may not be effective for vulnerability detection. Vul-LMGNN, utilizing a programming language (PL) model and GNNs, preserves the sequential information in the code and better incorporates its inherent information.

Fig. 5. Accuracy of Vul-LMGNN when varying the trade-off parameter on the partial DiverseVul dataset.

Fig. 6. Precision, recall, and F1 of Vul-LMGNN when varying the trade-off parameter on the partial DiverseVul dataset.

B. Impact of the Trade-off Parameter (RQ2)

The parameter λ controls the trade-off between the trained Vul-LMGNN and CodeBert. As λ approaches 1, the model's decisions rely more on the graph structure with the PL-model embedding layer; conversely, as λ approaches 0, the model leans toward sequence-based decisions. Our experiments across different datasets reveal that the optimal value of λ varies across tasks, likely due to variations in vulnerability types and data distributions. For instance, on the partial Draper VDSIC dataset, increasing λ does not significantly improve model performance. This phenomenon can be attributed to the strong performance of sequence-based methods on the VDSIC dataset.

Figs. 5 and 6 illustrate the evaluation metrics of Vul-LMGNN with varying λ on the partial DiverseVul dataset. Accuracy improves consistently as λ increases, reaching its peak at λ = 0.8, where it slightly outperforms the GGNN or CodeBert alone (λ = 1 or λ = 0); the achieved accuracy is 90.24%. During this process, precision exhibits fluctuations but overall shows an upward trend, reaching 46.19% at λ = 0.8, an improvement of 10.71% over using the sequence model alone. However, recall follows a declining trend, reaching 36.45% at λ = 0.8, indicating that transformer-based PL models exhibit higher recall in certain vulnerability detection scenarios.

C. Evaluation of the Fine-tuning Process (RQ3)

Pre-trained language models have demonstrated outstanding performance in various natural language processing tasks.
Currently, there is a growing focus among researchers on employing pre-trained language models for code-related tasks, including code search, code completion, and code summarization [37], [13], [38], [39], and this has led to promising results in applications. This prompted us to incorporate pre-trained models for programming languages in order to construct a novel vulnerability detection model.

We utilize the word embedding layer of pre-trained models as a tokenization tool to generate node embeddings for the graphs; these embedding weights are further fine-tuned during training. In our experiments, we explore three settings. First, we initialize our embedding-layer weights using a CodeBERT fine-tuned on the target dataset [40]. We compare this approach with two others: initializing the embedding-layer weights directly using pre-trained CodeBERT and GraphCodeBERT, respectively. The results are summarized in Table IV.

TABLE IV
PERFORMANCE ACROSS VARIOUS NODE EMBEDDING AND INITIALIZATION METHODS.

Base          | ACC(%) | P(%)  | R(%)  | F1(%)
CodeBERT      | 84.35  | 87.75 | 79.85 | 83.61
GraphCodeBert | 84.05  | 86.46 | 80.74 | 83.50
Fine-tuned    | 84.38  | 87.37 | 80.64 | 83.87

Compared to directly using CodeBERT for node embedding weight initialization, GraphCodeBERT improves accuracy and recall by 0.3% and 0.89%, respectively; however, CodeBERT outperforms it in precision and overall F1 score, achieving 87.75% and 83.61%. In contrast, using CodeBERT fine-tuned on the target dataset for embedding weight initialization yields the best overall performance, with the highest accuracy at 84.38% and an F1 score of 83.87%. The experimental results demonstrate that fine-tuning helps the pre-trained PL model learn code embedding features specific to the vulnerability distribution, which are further enhanced by the GNNs for better detection performance.

D. Different GNN Model Combinations (RQ4)

To investigate the impact of combining different GNNs with pre-trained language models on vulnerability detection tasks, we compared three distinct GNNs: the Gated Graph Neural Network (GGNN), the Graph Convolutional Network (GCN) [41], and the Graph Attention Network (GAT) [42], all integrated with CodeBERT. For a fair comparison, we followed the configuration from [37], maintaining a consistent two-layer GNN architecture and setting GAT's number of heads to 8. Additionally, we employed fine-tuned CodeBERT with consistent model parameters. The specific experimental results are shown in Table V.

TABLE V
PERFORMANCE ACROSS VARIOUS GNN ARCHITECTURES.

Combination   | ACC(%) | P(%)  | R(%)  | F1(%)
GGNN+CodeBERT | 84.38  | 87.37 | 80.64 | 83.87
GCN+CodeBERT  | 83.08  | 86.90 | 77.90 | 82.15
GAT+CodeBERT  | 79.29  | 81.92 | 75.15 | 78.39

As shown in Table V, these experiments were conducted on the partial Draper VDSIC dataset. The results indicate that the GGNN exhibited the best overall performance, with an accuracy of 84.38% and an F1 score of 83.87%. Compared to the GGNN, the GCN experienced decreases in accuracy and precision of 1.3% and 0.47%, respectively, with the most significant decrease observed in recall, at 2.74%. This may be attributed to the GCN treating all neighboring nodes equally during convolution, thus failing to assign different weights based on node importance and leading to inaccurate identification of nodes related to code vulnerabilities. Additionally, the GCN updates node features for the entire graph in a single computation, which poses challenges when dealing with complex code graph structures in inductive learning tasks related to code vulnerabilities.

The performance of the GAT model exhibited a considerable gap compared to the previous two, with an accuracy of only 79.29%. Although the GAT utilizes self-attention mechanisms to represent each node as a weighted sum of its neighbors, it does not fully leverage edge information, utilizing only connectivity, whereas the edge information encompasses the control and data flow information of the code. In contrast, the GGNN employs GRU units, allowing each node to receive messages from neighboring nodes at each iteration. This approach effectively captures both code data flow features and long-sequence dependencies, resulting in outstanding performance.
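For readers who want to reproduce this kind of architecture comparison, the following is a minimal sketch using PyTorch Geometric (an assumption; the paper does not name its GNN library), where only the graph layer changes between the three variants while the CodeBERT-initialized features and readout stay fixed.

    import torch
    import torch.nn as nn
    from torch_geometric.nn import GatedGraphConv, GCNConv, GATConv

    class CodeGNN(nn.Module):
        """Two-layer GNN over CodeBERT-initialized node features; the `kind`
        flag swaps the architecture, mirroring the RQ4 comparison."""
        def __init__(self, kind: str, dim: int = 768):
            super().__init__()
            if kind == "ggnn":
                self.layers = nn.ModuleList([GatedGraphConv(dim, num_layers=2)])
            elif kind == "gcn":
                self.layers = nn.ModuleList([GCNConv(dim, dim), GCNConv(dim, dim)])
            elif kind == "gat":  # 8 heads, concatenated back to `dim`
                self.layers = nn.ModuleList(
                    [GATConv(dim, dim // 8, heads=8), GATConv(dim, dim // 8, heads=8)]
                )
            else:
                raise ValueError(kind)

        def forward(self, x, edge_index):
            for layer in self.layers:
                x = torch.relu(layer(x, edge_index))
            return x.sum(dim=0)  # graph-level readout

    x = torch.randn(5, 768)                            # node features from CodeBERT
    edge_index = torch.tensor([[0, 2, 3], [1, 3, 4]])  # toy CPG edges
    graph_vec = CodeGNN("gat")(x, edge_index)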
VI. CONCLUSION

In this paper, we propose a novel model, Vul-LMGNN, which integrates sequence and graph embedding techniques to detect vulnerabilities in function-level source code. Our approach leverages the code property graph representation of the source code as the primary input. Specifically, we utilize a pre-trained Program Language (PL) model to extract local semantic features from the code, which are then embedded as nodes in the graph using sequence-based embeddings. Subsequently, we employ a GGNN equipped with convolutional layers to effectively fuse the heterogeneous information within the graph. Finally, our model jointly learns and predicts vulnerabilities by combining the PL model with the GGNN. To validate the effectiveness of Vul-LMGNN, we conducted extensive experiments on four real-world datasets, which demonstrated its superior performance. We systematically explored the trade-off parameter, the fine-tuning of the PL model, and variations of the GNN architecture; our findings further emphasize the positive contribution of each module to the overall model performance. As part of interesting future work, we intend to explore more effective fusion networks for learning code representations and to facilitate multiclass detection.
REFERENCES

[1] H. Plate, S. E. Ponta, and A. Sabetta, "Impact assessment for vulnerabilities in open-source software libraries," in 2015 IEEE International Conference on Software Maintenance and Evolution (ICSME). IEEE, 2015, pp. 411–420.
[2] S. Lipp, S. Banescu, and A. Pretschner, "An empirical study on the effectiveness of static C code analyzers for vulnerability detection," in Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis, 2022, pp. 544–555.
[3] R. Russell, L. Kim, L. Hamilton, T. Lazovich, J. Harer, O. Ozdemir, P. Ellingwood, and M. McConley, "Automated vulnerability detection in source code using deep representation learning," in 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2018, pp. 757–762.
[4] B. Wu, F. Zou et al., "Code vulnerability detection based on deep sequence and graph models: A survey," Security and Communication Networks, vol. 2022, 2022.
[5] X. Nie, N. Li, K. Wang, S. Wang, X. Luo, and H. Wang, "Understanding and tackling label errors in deep learning-based vulnerability detection (experience paper)," in Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, ser. ISSTA 2023. New York, NY, USA: Association for Computing Machinery, 2023, pp. 52–63. [Online]. Available: https://doi.org/10.1145/3597926.3598037
[6] G. Lin, J. Zhang, W. Luo, L. Pan, O. De Vel, P. Montague, and Y. Xiang, "Software vulnerability discovery via learning multi-domain knowledge bases," IEEE Transactions on Dependable and Secure Computing, vol. 18, no. 5, pp. 2469–2485, 2019.
[7] Z. Li, D. Zou, S. Xu, X. Ou, H. Jin, S. Wang, Z. Deng, and Y. Zhong, "VulDeePecker: A deep learning-based system for vulnerability detection," arXiv preprint arXiv:1801.01681, 2018.
[8] H. Liang, Y. Yang, L. Sun, and L. Jiang, "JSAC: A novel framework to detect malicious JavaScript via CNNs over AST and CFG," in 2019 International Joint Conference on Neural Networks (IJCNN). IEEE, 2019, pp. 1–8.
[9] H. Wang, G. Ye, Z. Tang, S. H. Tan, S. Huang, D. Fang, Y. Feng, L. Bian, and Z. Wang, "Combining graph-based learning with automated data collection for code vulnerability detection," IEEE Transactions on Information Forensics and Security, vol. 16, pp. 1943–1958, 2020.
[10] Z. Li, D. Zou, S. Xu, H. Jin, Y. Zhu, and Z. Chen, "SySeVR: A framework for using deep learning to detect software vulnerabilities," IEEE Transactions on Dependable and Secure Computing, vol. 19, no. 4, pp. 2244–2258, 2021.
[11] Z. Feng, D. Guo, D. Tang, N. Duan, X. Feng, M. Gong, L. Shou, B. Qin, T. Liu, D. Jiang et al., "CodeBERT: A pre-trained model for programming and natural languages," arXiv preprint arXiv:2002.08155, 2020.
[12] D. Guo, S. Ren, S. Lu, Z. Feng, D. Tang, S. Liu, L. Zhou, N. Duan, A. Svyatkovskiy, S. Fu et al., "GraphCodeBERT: Pre-training code representations with data flow," arXiv preprint arXiv:2009.08366, 2020.
[13] Y. Wang, W. Wang, S. Joty, and S. C. Hoi, "CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation," arXiv preprint arXiv:2109.00859, 2021.
[14] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al., "Language models are unsupervised multitask learners," OpenAI blog, vol. 1, no. 8, p. 9, 2019.
[15] L. Phan, H. Tran, D. Le, H. Nguyen, J. Anibal, A. Peltekian, and Y. Ye, "CoTexT: Multi-task learning with code-text transformer," arXiv preprint arXiv:2105.08645, 2021.
[16] T. Kudo and J. Richardson, "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing," arXiv preprint arXiv:1808.06226, 2018.
[17] V.-A. Nguyen, D. Q. Nguyen, V. Nguyen, T. Le, Q. H. Tran, and D. Phung, "ReGVD: Revisiting graph neural networks for vulnerability detection," in Proceedings of the ACM/IEEE 44th International Conference on Software Engineering: Companion Proceedings, 2022, pp. 178–182.
[18] X. Cheng, G. Zhang, H. Wang, and Y. Sui, "Path-sensitive code embedding via contrastive learning for software vulnerability detection," in Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis, 2022, pp. 519–531.
[19] Y. Hu, S. Wang, W. Li, J. Peng, Y. Wu, D. Zou, and H. Jin, "Interpreters for GNN-based vulnerability detection: Are we there yet?" in Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, ser. ISSTA 2023. New York, NY, USA: Association for Computing Machinery, 2023, pp. 1407–1419. [Online]. Available: https://doi.org/10.1145/3597926.3598145
[20] F. Yamaguchi, N. Golde, D. Arp, and K. Rieck, "Modeling and discovering vulnerabilities with code property graphs," in 2014 IEEE Symposium on Security and Privacy. IEEE, 2014, pp. 590–604.
[21] S. Suneja, Y. Zheng, Y. Zhuang, J. Laredo, and A. Morari, "Learning to map source code to software vulnerability using code-as-a-graph," arXiv preprint arXiv:2006.08614, 2020.
[22] Q. Feng, C. Feng, and W. Hong, "Graph neural network-based vulnerability predication," in 2020 IEEE International Conference on Software Maintenance and Evolution (ICSME). IEEE, 2020, pp. 800–801.
[23] Y. Zhou, S. Liu, J. Siow, X. Du, and Y. Liu, "Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks," Advances in Neural Information Processing Systems, vol. 32, 2019.
[24] S. Chakraborty, R. Krishna, Y. Ding, and B. Ray, "Deep learning based vulnerability detection: Are we there yet?" IEEE Transactions on Software Engineering, vol. 48, no. 9, pp. 3280–3296, 2021.
[25] W. Zheng, Y. Jiang, and X. Su, "Vu1SPG: Vulnerability detection based on slice property graph representation learning," in 2021 IEEE 32nd International Symposium on Software Reliability Engineering (ISSRE). IEEE, 2021, pp. 457–467.
[26] Y. Chen, Z. Ding, L. Alowain, X. Chen, and D. Wagner, "DiverseVul: A new vulnerable source code dataset for deep learning based vulnerability detection," in Proceedings of the 26th International Symposium on Research in Attacks, Intrusions and Defenses, 2023, pp. 654–668.
[27] S. Lu, D. Guo, S. Ren, J. Huang, A. Svyatkovskiy, A. Blanco, C. Clement, D. Drain, D. Jiang, D. Tang et al., "CodeXGLUE: A machine learning benchmark dataset for code understanding and generation," arXiv preprint arXiv:2102.04664, 2021.
[28] Anon. (2023) The bug hunter's workbench. [Online]. Available: https://joern.io/
[29] T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word representations in vector space," arXiv preprint arXiv:1301.3781, 2013.
[30] A. Araabi, C. Monz, and V. Niculae, "How effective is byte pair encoding for out-of-vocabulary words in neural machine translation?" arXiv preprint arXiv:2208.05225, 2022.
[31] Y. Lin, Y. Meng, X. Sun, Q. Han, K. Kuang, J. Li, and F. Wu, "BertGCN: Transductive text classification by combining GCN and BERT," arXiv preprint arXiv:2105.05727, 2021.
[32] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[33] L. Yao, C. Mao, and Y. Luo, "Graph convolutional networks for text classification," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, 2019, pp. 7370–7377.
[34] B. Guo, C. Zhang, J. Liu, and X. Ma, "Improving text classification with weighted word embeddings via a multi-channel TextCNN model," Neurocomputing, vol. 363, pp. 366–374, 2019.
[35] M. Pan, P. Wu, Y. Zou, C. Ruan, and T. Zhang, "An automatic vulnerability classification framework based on BiGRU-TextCNN," Procedia Computer Science, vol. 222, pp. 377–386, 2023.
[36] K. Napier, T. Bhowmik, and S. Wang, "An empirical study of text-based machine learning models for vulnerability detection," Empirical Software Engineering, vol. 28, no. 2, p. 38, 2023.
[37] W. Tang, M. Tang, M. Ban, Z. Zhao, and M. Feng, "CSGVD: A deep learning approach combining sequence and graph embedding for source code vulnerability detection," Journal of Systems and Software, vol. 199, p. 111623, 2023.
[38] S. Chakraborty, T. Ahmed, Y. Ding, P. T. Devanbu, and B. Ray, "NatGen: Generative pre-training by 'naturalizing' source code," in Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2022, pp. 18–30.
[39] F. F. Xu, U. Alon, G. Neubig, and V. J. Hellendoorn, "A systematic evaluation of large language models of code," in Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, 2022, pp. 1–10.
[40] E. Shi, Y. Wang, H. Zhang, L. Du, S. Han, D. Zhang, and H. Sun, "Towards efficient fine-tuning of pre-trained code models: An experimental study and beyond," in Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, ser. ISSTA 2023. New York, NY, USA: Association for Computing Machinery, 2023, pp. 39–51. [Online]. Available: https://doi.org/10.1145/3597926.3598036
[41] T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," arXiv preprint arXiv:1609.02907, 2016.
[42] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio, "Graph attention networks," arXiv preprint arXiv:1710.10903, 2017.
2404.15596

VulEval: Towards Repository-Level Evaluation of Software Vulnerability Detection

Xin-Cheng Wen (Harbin Institute of Technology, Shenzhen, China; xiamenwxc@foxmail.com), Xinchen Wang (Harbin Institute of Technology, Shenzhen, China; 200111115@stu.hit.edu.cn), Yujia Chen (Harbin Institute of Technology, Shenzhen, China; yujiachen@stu.hit.edu.cn), Ruida Hu (Harbin Institute of Technology, Shenzhen, China; 200111107@stu.hit.edu.cn), David Lo (Singapore Management University, Singapore; davidlo@smu.edu.sg), Cuiyun Gao (corresponding author; Harbin Institute of Technology, Shenzhen, China; gaocuiyun@hit.edu.cn)

ABSTRACT

Deep Learning (DL)-based methods have proven to be effective for software vulnerability detection, with a potential for substantial productivity enhancements in detecting vulnerabilities. Current methods mainly focus on detecting single functions (i.e., intra-procedural vulnerabilities), ignoring the more complex inter-procedural vulnerability detection scenarios encountered in practice. For example, developers routinely engage with program analysis to detect vulnerabilities that span multiple functions within repositories. In addition, the widely used benchmark datasets generally contain only intra-procedural vulnerabilities, leaving the assessment of inter-procedural vulnerability detection capabilities unexplored.

To mitigate these issues, we propose a repository-level evaluation system, named VulEval, aiming at evaluating the detection performance on inter- and intra-procedural vulnerabilities simultaneously. Specifically, VulEval consists of three interconnected evaluation tasks: (1) Function-Level Vulnerability Detection, aiming at detecting intra-procedural vulnerabilities given a code snippet; (2) Vulnerability-Related Dependency Prediction, aiming at retrieving the most relevant dependencies from call graphs to provide developers with explanations about the vulnerabilities; and (3) Repository-Level Vulnerability Detection, aiming at detecting inter-procedural vulnerabilities by combining them with the dependencies identified in the second task.
VulEval also includes a large-scale dataset, with a total of 4,196 CVE entries, 232,239 functions, and the corresponding 4,699 repository-level source code trees in the C/C++ programming languages. By evaluating 19 vulnerability detection methods on data split randomly and by time, respectively, we observe that the repository-level vulnerability detection framework outperforms the corresponding function-level methods, with an average increase of 1.51% in F1 score and 2.63% in MCC. This indicates that incorporating vulnerability-related dependencies facilitates vulnerability detection. Our experimental results also demonstrate that the performance of program-analysis- and prompt-based methods is not affected when splitting the data by time. In addition, for the seven dependency retrieval methods studied, we find that lexical-based methods yield superior results to semantic-based methods for identifying vulnerability-related dependencies. Our analysis highlights the current progress and future directions for software vulnerability detection.

1 INTRODUCTION

Software vulnerabilities, mostly caused by insecure code, can be exploited to attack software systems and further cause security issues such as system crashes, data leakage, and even critical infrastructure damage. In the past ten years, the number of software vulnerabilities has increased more than five times, rising from 5,697 in 2013 to 29,065 in 2023 [48]. This growth in both the quantity and the types of software vulnerabilities has led to increasing economic losses [50]. For example, the Clop ransomware has successfully extorted more than $500 million from various organizations [37]. Therefore, it is necessary to develop effective technologies for software vulnerability detection.

The existing vulnerability detection methods can be categorized into four types: program analysis-based, supervised learning-based, fine-tuning-based, and prompt-based methods. Traditional program analysis-based vulnerability detection techniques, such as INFER [15] and CheckMarx [25], rely on predefined rules to identify vulnerabilities. These approaches are labor-intensive and inefficient due to the diverse types of vulnerabilities and libraries. Deep learning (DL)-based approaches have emerged as effective solutions, exhibiting notable success by mitigating the reliance on domain expertise and enhancing the ability to detect a variety of software vulnerabilities [9]. Early DL-based approaches use supervised learning-based methods, which leverage Convolutional Neural Networks (CNNs) [60, 61], Recurrent Neural Networks (RNNs) [34, 47], and Graph Neural Networks (GNNs) [6, 65] for learning the vulnerability representation.

Nevertheless, the effectiveness of these supervised learning-based approaches is limited by the scarcity of vulnerability data [41]. The emergence of pre-trained models like CodeBERT [17] and UniXcoder [21], which are trained on large-scale open-source code repositories, has notably propelled this domain forward. These methods, equipped with extensive general programming knowledge, can be fine-tuned with vulnerability datasets to greatly enhance vulnerability detection performance; they are denoted as fine-tuning-based methods. Nowadays, prompt-based techniques utilize Large Language Models (LLMs), such as LLaMA [51] and CodeLlama [46], for vulnerability detection, marking a trend of inclination towards unsupervised methodologies in the domain.
dependencies.Ouranalysishighlightsthecurrentprogressand towardsunsupervisedmethodologiesinthedomain. futuredirectionsforsoftwarevulnerabilitydetection. Despite substantial advancements in vulnerability detection throughusingfine-tuningandprompttechniques,evaluatingthe ∗Correspondingauthor. efficacyofthesemethodsremainschallenging.Specifically,there 4202 rpA 42 ]ES.sc[ 1v69551.4042:viXraarxiv,April,2024 Xin-ChengWen,XinchenWang,YujiaChen,RuidaHu,DavidLo,andCuiyunGao 1 void dd_close(struct dump_dir *dd) 1 int dd_delete(struct dump_dir *dd) 1 static void dd_unlock(struct dump_dir *dd) . 2 { 2 { 2 { 3 if(!dd) 3 if(!dd->locked) 3 if(dd->locked) 4 return; 4 { 4 { 5 dd_unlock(dd); 5 error msg("...", 5 dd->locked = 0; 6 if(dd->next_dir) dd->dd_dirname); 6 unsigned dirname_len =strlen(dd->dd_irname); . 7 { 6 return -1; 7 char lock_buf[dirname_len + sizeof("/.lock")]; 8 closedir(dd->next_dir); 7 int r= delete_file_dir . . . . 8 strcpy(lock_buf, dd->dd_dirname); 9 } (dd->dd_dirname, true); . 9 strcpy(lock_buf + dirname_len, "/.lock"); 10 free(dd->dd_type); 8 dd->locked =0; . 1 0 x unlink(lock buf); 11 free(dd->dd_dirname); 9 dd_close(dd); . 11 log_info("Unlocked '%s'",lock_buf); 12 free(dd); 10 return r; 12 } 13} 11} 13} (a) Source Code (b) Callee Function (c) Caller Function Figure1:Aninter-proceduralvulnerabilityexampleoftheCWE-20.Lineshighlightedingreendenotethecallrelation(i.e., calleeandcaller),andreddenotesthevulnerablestatements. exists a gap between the current evaluation scenarios and real- Tomitigatetheissues,inthispaper,weproposeaholisticevalu- worldvulnerabilitydetectionscenarios,embodiedinthefollowing ationsystem,namedVulEval,designedforevaluatinginter-and twoaspects: intra-procedural vulnerabilities simultaneously. Specifically, we (1)Lackofmethodsfordetectinginter-proceduralvulner- performthreeinterconnectedtaskstoconstructtheevaluationsys- abilities.Despitethedemonstratedefficacyofvariousmethodsfor tem:(1)Function-levelVulnerabilityDetection,wherethetask vulnerabilitydetection,currentevaluationframeworksprimarily istopredictthegivencodesnippetwhetheritisvulnerableornot, focusonthegranularityofindividualfunctionorfile,failingto aimsatdetectingintra-proceduralvulnerability;(2)Vulnerability- fullyaccountforthecomplexitiesofvulnerabilitiesthatextend RelatedDependencyPrediction,wherethetaskisretrieving acrossmultiplefilesorentirerepositories.Thisnarrowfocusinade- thevulnerability-relateddependencyfromthecallgraph,thereby quatelymirrorsthecomplexityinherentinreal-worldvulnerability providingdeveloperswithexplanationsaboutthevulnerabilities; detectioncontexts,whereindevelopersroutinelycheckwithpro- and(3)Repository-levelVulnerabilityDetection,aimingat gramanalysistechniquestodetectvulnerabilitiesthatspanmultiple detectinginter-proceduralvulnerabilities.Toexplorethecurrent filesintherepositorylevel.Forexample,Figure1presentsaninter- vulnerabilitydetectionmethods’performanceinthethirdtask,we proceduralvulnerabilityofCWE-20(ImproperInputValidation)[2]. proposearepository-levelvulnerabilitydetectionframeworkby Figure1(a),(b),and(c)illustratethecodesnippetatthefunction combiningdependenciesidentifiedinthesecondtask. 
Figure 1 (a), (b), and (c) illustrate the code snippet at the function level and the associated callee and caller functions, respectively. Specifically, the function dd_close assumes that the dd pointer is non-null without verification, and proceeds to invoke dd_unlock in Line 5 of Figure 1 (a) and access member variables. This can cause the vulnerability (Lines 6-7 in Figure 1 (c)). Similarly, dd_delete performs an operation contingent upon the locked status without ensuring that the dd pointer is valid (Lines 7-8 in Figure 1 (b)). Such inter-procedural vulnerabilities across multiple functions are hard to identify with existing methods.

(2) Lack of a comprehensive evaluation system for vulnerability detection. The existing work generally conducts the evaluation on randomly split function-/file-level datasets, without considering different scenarios separately and the timeliness. The previous datasets [16, 35] only use the vulnerability patches to construct the dataset, which ignores the corresponding dependencies (e.g., callee and caller) in the repository. In addition, due to the large number of dependencies from the call graph, it is necessary to retrieve vulnerability-related dependencies for developers. Furthermore, given the substantial number of vulnerabilities identified every year, the utilization of historical vulnerability data for detecting future vulnerabilities emerges as a critical need. However, the existing random-split setting may lead to risks of data leakage and the potential for inflated performance, which ultimately compromises the reliability of vulnerability detection methods and reflects the challenges present in real-world software development environments.

To mitigate these issues, in this paper, we propose a holistic evaluation system, named VulEval, designed for evaluating inter- and intra-procedural vulnerabilities simultaneously. Specifically, we perform three interconnected tasks to construct the evaluation system: (1) Function-level Vulnerability Detection, where the task is to predict whether a given code snippet is vulnerable or not, aiming at detecting intra-procedural vulnerabilities; (2) Vulnerability-Related Dependency Prediction, where the task is retrieving the vulnerability-related dependencies from the call graph, thereby providing developers with explanations about the vulnerabilities; and (3) Repository-level Vulnerability Detection, aiming at detecting inter-procedural vulnerabilities. To explore the current vulnerability detection methods' performance on the third task, we propose a repository-level vulnerability detection framework that combines the dependencies identified in the second task.

We collect large-scale repository-level source code for each vulnerability patch to provide repository information. It consists of 4,196 CVE entries, 232,239 functions, and the corresponding 4,699 repository-level source code in the C/C++ programming languages. We also extract 347,533 function dependencies (i.e., Callee and Caller) and 9,538 vulnerability-related dependencies from the repositories to detect inter-procedural vulnerabilities.

Based on the proposed evaluation system, we empirically study the performance of the four types of vulnerability detection methods (i.e., 19 baselines) on VulEval for function- and repository-level vulnerability detection. We also evaluate the three types of retrieval methods (i.e., seven baselines) for vulnerability-related dependency prediction. During the evaluation, we analyze the effectiveness in two settings (i.e., random split and split by time). Furthermore, we highlight the current progress and shed light on future directions.

Key Findings. Based on the extensive experiments, our study reveals several key findings:

(1) Incorporating contexts related to vulnerabilities in repository-level vulnerability detection enhances the performance compared with function-level vulnerability detection.

(2) Supervised learning- and fine-tuning-based methods exhibit performance degradation within the time-split setting, while the performance of program-analysis- and prompt-based methods is not affected.

(3) Lexical-based methods yield superior results compared to semantic-based methods in identifying vulnerability-related dependencies. It is essential to develop more effective techniques for retrieving vulnerability-related dependencies.

Contributions. In summary, the major contributions of this paper are summarized as follows:

(1) To the best of our knowledge, we are the first to propose a holistic evaluation system for evaluating inter- and intra-procedural vulnerabilities simultaneously.

(2) We collect large-scale repository-level source code and extract the corresponding dependencies that provide repository-level information. We extract 347,533 dependencies and 9,538 vulnerability-related dependencies to detect inter-procedural vulnerabilities.
(3) We perform an extensive evaluation of 19 vulnerability detection methods and seven dependency retrieval methods in two settings. Our analysis highlights the current progress and future directions for software vulnerability detection.

2 BACKGROUND

In this section, we introduce the existing vulnerability detection methods, including program analysis-based, supervised learning-based, fine-tuning-based, and prompt-based methods, as illustrated in Figure 2.

[Figure 2: The four types of vulnerability detection methods — (a) program analysis-based methods, (b) supervised learning-based methods, (c) fine-tuning-based methods, and (d) prompt-based methods.]

2.1 Program Analysis-based Methods

Numerous program analysis-based methods have been proposed and widely used in industry, such as Checkmarx [25],
Flawfinder [58], PCA [29], and RATS [3]. These methods leverage pre-defined rules or patterns designed by experts to identify specific types of vulnerabilities, such as stack-based buffer overflow, heap-based buffer overflow, and so on.

Figure 2 (a) shows an example from Splint [14]. It represents a formal specification designed to express the expected behavior of the strcpy function and concurrently provides a rule for detecting potential buffer overflow vulnerabilities. Specifically, it checks whether a call complies with the condition maxSet(s1) >= maxRead(s2). If Splint identifies any invocation that contravenes this condition, it will alert the developer to a possible vulnerability. The advantage of these methods lies in their independence from extensive vulnerability datasets. Moreover, they explain the detected vulnerabilities by reporting the vulnerability-triggering path [9]. This path comprises a sequence of code snippets, thereby facilitating developers' verification processes. However, designing well-defined vulnerability rules or patterns is time-consuming and laborious [30, 31], making it challenging to cover all vulnerabilities.

2.2 Supervised Learning-based Methods

In recent years, many supervised learning-based methods have been proposed that utilize representation learning techniques to capture vulnerability patterns. They mainly include sequence-based [34, 35] and graph-based [6, 33, 65] approaches. Figure 2 (b) illustrates the process of these methods. The sequence-based methods typically use source code as input and learn the corresponding representations for determining whether the given code snippet is vulnerable or not. For instance, SySeVR [34] extracts code gadgets and then uses a bidirectional Long Short-Term Memory network for vulnerability detection. VulCNN [61] transforms source code into images and uses CNNs to detect vulnerabilities.

Recent studies have shown that graph-based methods are ascending in prominence due to their superior interpretability and effectiveness. Compared to sequence-based approaches, these methods extract structured representations from source code, including Abstract Syntax Trees (AST), Control Flow Graphs (CFG), Data Flow Graphs (DFG), and Code Property Graphs (CPG) [55]. Subsequently, GNNs are utilized to learn the graph representations for vulnerability detection. In contrast to program analysis-based methods, these methods can automatically capture vulnerability patterns, thereby reducing the expenditure of human resources and time. Nevertheless, the effectiveness of these approaches highly depends on the availability of large and high-quality datasets for training.

2.3 Fine-tuning-based Methods

Although supervised learning-based methods have demonstrated effectiveness for vulnerability detection, Croft et al. [12] have pinpointed that existing vulnerability datasets often lack quality and accuracy. It is challenging to apply them in real-world scenarios [57]. Figure 2 (c) shows the process of fine-tuning-based methods [18, 22, 64]. These methods commence with pre-training on a vast corpus of code and textual data, and then fine-tune the pre-trained model for a specific task. For instance, CodeBERT [17] employs the Transformer architecture, utilizing an encoder for its training process. Similarly, CodeT5 [56] and UniXcoder [21] are specifically designed to provide both an encoder and a decoder for code-related tasks. Through the exploitation of knowledge encapsulated within pre-trained models, these approaches have been shown to excel in vulnerability detection. EPVD [63] introduces an algorithm for the selection of execution paths and leverages a pre-trained model to learn path representations. PILOT [57] proposes a positive and unlabeled learning framework and uses the pre-trained model to construct the classifier. However, these approaches are limited by the length of the input code and exhibit a deficiency in interpretability.
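As a concrete illustration of this fine-tuning recipe, the sketch below attaches a binary classification head to CodeBERT and trains it on labeled functions. The toy data, batching, and hyperparameters are illustrative assumptions, not the settings of any cited baseline.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/codebert-base", num_labels=2)  # label 1 = vulnerable, 0 = benign

# Toy stand-in for a labeled function-level dataset.
train_batches = [(["void f(char *s) { char b[8]; strcpy(b, s); }"], [1]),
                 (["int add(int a, int b) { return a + b; }"], [0])]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for funcs, labels in train_batches:
    inputs = tokenizer(funcs, truncation=True, max_length=512,
                       padding=True, return_tensors="pt")
    inputs["labels"] = torch.tensor(labels)
    loss = model(**inputs).loss   # cross-entropy over the two classes
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()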
els,developedbyOpenAI,includingChatGPT[7]andGPT-4[40], We extract the contextual dependencies of a code snippet aswellastheLLaMAmodelsunveiledbyMeta,comprisingboth through program analysis of its belonging repository with two LLaMA[51]andLLaMA2[52].Figure2(d)presentstheprocessof steps.(1)Beforetheextractionprocess,wefirstconstructtherepos- prompt-basedmethods[7,19].Ittakessourcecodeasinput,sub- itory database for each vulnerability patch, which includes the sequentlyconstructsaprompttailoredforvulnerabilitydetection, correspondingrepositorysourcecodewithdifferentheaderfiles andfeedsthispromptintotheLLMs.Then,theLLMsgeneratea (i.e.,.ℎ)andsourcecodefiles(i.e.,.𝑐and.𝑐𝑝𝑝).(2)Then,weselect responsetodetectwhetherthesourcecodeisvulnerableornot. thecodechangedfileinthevulnerabilitypatchandemploystatic However,theseLLMsfacenotablechallengesinsoftwarevulnera- programanalysistool[42]toextractthedependencyelements.We bilitydetection[19],whichprimarilystemsfromtwoaspects.First, classifythemintothe“Callee”and“Caller”dependencies.Specifi- thecodesnippetsoftenlackenoughcontextualinformationfor cally,“Callee”representstheuser-definedfunctionbeinginvoked effectively detectingvulnerabilities. Second, LLMs lack the spe- orexecutedbythevulnerabilitypatch.The“Caller”denotesthe cificdomainknowledgerequiredforvulnerabilitydetection,which user-definedfunctionoftherepositorysourcecoderesponsiblefor |
3 VULEVAL SYSTEM

In this section, we introduce the evaluation system of VulEval. It mainly includes two parts: data collection and evaluation tasks.

3.1 Data Collection

3.1.1 Data Source. Following the previous work [54], the raw data used to build VulEval consists of a vast collection of CVE entries from Mend [59]. The dataset consists of a total of 4,196 CVE entries, 4,699 vulnerability patches, and 164 vulnerability types in the C/C++ programming languages.

3.1.2 Repository Code Collection. For evaluating inter-procedural vulnerabilities, we further collect the repository source code via three steps: (1) We select the repositories from which we can retrieve complete source code and commit logs via GitHub, Chrome, and Android. (2) For each vulnerability patch, we gather the repository-level source code corresponding to the commit time of the vulnerability patch. (3) For each file in the vulnerability patch, we use Tree-sitter [1] to slice it into function-level code snippets, where each function-level code snippet is paired with the corresponding repository-level source code separately.

Table 1: Statistics of the dataset.

Set    #Function         #Repository   #Dependency        #Vul-Dependency
Train  185,791/185,656   3,537/2,872   277,408/253,063    7,580/6,848
Valid  23,224/23,312     2,970/349     37,176/40,619      957/813
Test   23,224/23,271     2,984/331     32,949/53,851      1,001/1,877
All    232,239           4,699         347,533            9,538

As shown in Table 1, we collect 4,699 repository-level source code for vulnerability detection. In repository-level vulnerability detection, we also utilize the function-level label of the target function as the repository-level label (i.e., "1" for vulnerable and "0" for non-vulnerable). The target function and the corresponding dependencies are used as a whole sample to serve as the input of the repository-level sample.

3.1.3 Contextual Dependency Extraction. One of the major contributions of VulEval is that it considers the target code snippet's contextual dependencies, which refer to the external code functions that are essential for vulnerability detection.

We extract the contextual dependencies of a code snippet through program analysis of its belonging repository with two steps. (1) Before the extraction process, we first construct the repository database for each vulnerability patch, which includes the corresponding repository source code with different header files (i.e., .h) and source code files (i.e., .c and .cpp). (2) Then, we select the changed code files in the vulnerability patch and employ a static program analysis tool [42] to extract the dependency elements. We classify them into "Callee" and "Caller" dependencies. Specifically, "Callee" represents a user-defined function being invoked or executed by the vulnerability patch. "Caller" denotes a user-defined function of the repository source code responsible for invoking the function in the vulnerability patch.

As shown in Table 1, we extract 347,533 dependencies in the repository-level source code. We also label 9,538 vulnerability-related dependencies (denoted as "Vul-Dependency"), which are directly involved in the code changes of vulnerability patches. All the other dependencies are considered unrelated to the vulnerability.
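A minimal sketch of step (2) of this extraction, assuming the repository call graph has already been produced by a static analyzer (e.g., GNU cflow [42]) as a mapping from each function to the set of functions it invokes; extract_dependencies is a hypothetical helper, not part of the VulEval release.

from typing import Dict, Set, Tuple

def extract_dependencies(call_graph: Dict[str, Set[str]],
                         patched_func: str) -> Tuple[Set[str], Set[str]]:
    callees = call_graph.get(patched_func, set())      # functions it invokes
    callers = {f for f, outs in call_graph.items()
               if patched_func in outs}                # functions invoking it
    return callees, callers

# Toy repository mirroring Figure 1: dd_close calls dd_unlock and is
# itself called by dd_delete.
graph = {"dd_close": {"dd_unlock"}, "dd_delete": {"dd_close"}}
print(extract_dependencies(graph, "dd_close"))  # ({'dd_unlock'}, {'dd_delete'})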
3.2 Evaluation Task

VulEval involves three evaluation tasks: function-level vulnerability detection, vulnerability-related dependency prediction, and repository-level vulnerability detection, with details as below.

3.2.1 Function-level Vulnerability Detection (Detector). This task aims to predict whether a function contains a vulnerability or not. As shown in Figure 3, function-level vulnerability detection focuses solely on the source code of the target prediction function as input, abstaining from incorporating any inter-procedural information beyond the function itself. The goal of this task is to learn a detector f that can be illustrated as follows:

f : X → Y, Y = {0, 1}    (1)

where X denotes the input function-level code snippet and Y denotes the label, which is set to 1 for vulnerable code snippets and 0 otherwise.

[Figure 3: The overview of VulEval. Panels (a), (b), (c), and (d) denote the process of data collection, function-level vulnerability detection, vulnerability-related dependency prediction, and repository-level vulnerability detection, respectively. The panel notes contrast function-level detection, which focuses only on the information within the given code snippet and lacks inter-procedural vulnerabilities, with repository-level detection, which also uses the corresponding repository code (i.e., dependencies) and detects inter- and intra-procedural vulnerabilities simultaneously.]

3.2.2 Vulnerability-Related Dependency Prediction (Retriever). This task aims at providing developers with explanations about the vulnerabilities. Table 1 shows that the dataset has 347,533 dependencies, but only 9,538 dependencies are related to vulnerabilities. Thus, it is necessary to retrieve the vulnerability-related dependencies from the large number of dependencies in the repository source code. As shown in Figure 3 (c), the process of dependency prediction generally involves the "Callee" and "Caller" dependencies extracted from the input function X, followed by the calculation of the degree of vulnerability-relatedness between the input code snippet and each candidate dependency. The general retrieval function g for identifying vulnerability-related dependencies can be formulated as follows:

max^k_{i,j ∈ {1,2,...,m+n}} g(X, Callee_i, Caller_j)    (2)

where Callee_i and Caller_j represent the i-th and j-th candidate dependencies, respectively, and m and n are the numbers of "Callee" and "Caller" candidate dependencies, respectively. k denotes the top k relevant dependencies to be retrieved in this task.
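The following sketch instantiates the retrieval function g in Eq. (2) with token-level Jaccard similarity as a stand-in scorer; any of the lexical or semantic baselines evaluated later (Section 4.2.2) could be substituted for the jaccard function.

def jaccard(a: str, b: str) -> float:
    # Token-level Jaccard similarity between two code snippets.
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def retrieve_top_k(snippet: str, callees: list, callers: list, k: int = 3):
    candidates = callees + callers            # m + n candidates in total
    ranked = sorted(candidates,
                    key=lambda dep: jaccard(snippet, dep), reverse=True)
    return ranked[:k]                         # the top-k of Eq. (2)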
3.2.3 Repository-level Vulnerability Detection. Repository-level vulnerability detection is our proposed task, which integrates the dependencies identified in the second task for vulnerability detection, as shown in Figure 3 (d). It first uses the "Retriever" to retrieve the associated dependencies of the given code snippet. The identified dependencies (i.e., those retrieved by the "Retriever") are then concatenated with the target function as input, and the "Detector" determines whether the input is vulnerable or not. The definition of repository-level vulnerability detection h can be represented as follows:

h : (X, Callee_X, Caller_X) → Y, Y = {0, 1}    (3)

where Callee_X and Caller_X denote the retrieved "Callee" and "Caller" dependencies of code snippet X, respectively.
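A minimal sketch of Eq. (3): concatenate the retrieved dependencies with the target function and reuse a function-level detector on the combined input. The separator comments are an assumption about formatting, not the exact serialization used by each baseline.

def build_repo_level_input(target_func: str, callees: list, callers: list) -> str:
    parts = (["// Callee dependencies:"] + callees
             + ["// Caller dependencies:"] + callers
             + ["// Target function:", target_func])
    return "\n".join(parts)

def detect_repo_level(target_func, callees, callers, detector) -> int:
    # detector is any function-level model f: code -> {0, 1} from Eq. (1).
    return detector(build_repo_level_input(target_func, callees, callers))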
4 EXPERIMENTAL SETUP

4.1 Research Questions

Our experiment intends to answer the following research questions:

• RQ1: How do program analysis-, supervised learning-, fine-tuning- and prompt-based methods perform in function-level vulnerability detection?
• RQ2: How do the retrieval methods perform in identifying the vulnerability-relevant dependencies?
• RQ3: How do these methods perform in repository-level vulnerability detection?
• RQ4: How do these vulnerability detection methods perform for each CWE type?

4.2 Experimental Methodology

4.2.1 Comparison of Vulnerability Detection Approaches. To evaluate the efficacy of vulnerability detection across function-level and repository-level contexts, our benchmark compares four types of vulnerability detection approaches. (1) Program analysis-based methods: Following the previous works [5, 61], we select four popular program analysis-based vulnerability detectors, i.e., Cppcheck [11], Flawfinder [58], RATS [3], and Semgrep [44]. These methods leverage predefined rules and patterns to discern potentially improper operations within source code. (2) Supervised learning-based methods: We use Devign [65] and Reveal [6] as representative supervised baselines, which are widely adopted as baselines in recent works [4, 33, 57]. These methods construct graphs from source code and then perform vulnerability detection using features obtained through Gated Graph Neural Networks [32]. (3) Fine-tuning-based methods: The fine-tuning-based methods consist of three general pre-trained models and four state-of-the-art approaches specialized for vulnerability detection. We select three general pre-trained models, CodeBERT [17], CodeT5 [56], and UniXcoder [21], for their widespread adoption in code-related tasks and further fine-tune these models for vulnerability detection. In addition, we choose four state-of-the-art models designed specifically for vulnerability detection, including PILOT [57], EPVD [63], LineVul [18], and PDBERT [36]. (4) Prompt-based methods: We choose two open-source LLMs, LLaMA [51] and CodeLlama [46], for their proficiency in text and code generation, respectively. Additionally, we incorporate two closed-source LLMs, ChatGPT (i.e., GPT-3.5-turbo) and GPT-3.5-instruct, developed by OpenAI, which produce text with 175 billion parameters. For these LLMs, we utilize the process described in Figure 2 (d) to assess their effectiveness in vulnerability detection.

4.2.2 Comparison of Dependency Prediction Approaches. We first extract all functions from the call graph as dependency candidates. Then, three types of methods and seven baselines are employed for the vulnerability-related dependency prediction task. (1) Random method: This method retrieves code snippets randomly, serving as a foundational baseline for evaluating the other prediction methods. To mitigate sampling bias, we repeat this randomized process 100 times and report the average results. (2) Lexical-based methods: We evaluate the relevance of vulnerability dependencies using two primary metrics as baselines: Jaccard Similarity and Edit Similarity [49]. Additionally, we use BM25 [45] and BM25+ [53] as lexical-based weighting functions to rank dependencies by their relevance to specific code snippets. (3) Semantic-based methods: We leverage pre-trained models as backbones, specifically CodeBERT [17] and UniXcoder [21], to obtain feature embeddings and then employ Cosine Similarity [62] to measure the semantic relevance between code and dependency snippets.
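To illustrate the semantic-based baselines, the sketch below embeds the snippet and each candidate with CodeBERT and ranks candidates by cosine similarity; mean pooling over the last hidden states is an assumption here, as the cited baselines may pool differently.

import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
enc = AutoModel.from_pretrained("microsoft/codebert-base")

@torch.no_grad()
def embed(code: str) -> torch.Tensor:
    inputs = tok(code, truncation=True, max_length=512, return_tensors="pt")
    hidden = enc(**inputs).last_hidden_state   # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)       # mean-pooled snippet vector

def rank_semantic(snippet: str, candidates: list) -> list:
    q = embed(snippet)
    scores = [(torch.cosine_similarity(q, embed(c), dim=0).item(), c)
              for c in candidates]
    return [c for _, c in sorted(scores, reverse=True)]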
4.3 Evaluation Metrics

4.3.1 Metrics for Vulnerability Detection Task. We use the following four widely-used performance metrics for vulnerability detection:

Precision: It is calculated as the ratio of true positives (TP) to the sum of true positives and false positives (FP), expressed as Precision = TP / (TP + FP). It signifies the proportion of correctly identified vulnerabilities among all retrieved vulnerabilities.

Recall: Recall is computed as the ratio of TP to the sum of TP and false negatives (FN), given by Rec = TP / (TP + FN). It represents the proportion of vulnerabilities detected by the baselines out of all vulnerabilities.

F1 Score (F1): The F1 score is defined as the harmonic mean of precision and recall, calculated using the formula F1 = 2 × Pre × Rec / (Pre + Rec). It serves as a combined measure of precision and recall, providing insight into the balance between them.

Matthews Correlation Coefficient (MCC): MCC is a measure for binary classification, particularly useful on imbalanced datasets, computed as MCC = (TP × TN − FP × FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN)), where TN denotes the true negatives.

4.3.2 Metrics for Dependency Prediction Task. We propose the following two metrics for identifying dependencies:

Precision@K (Pre@K): It is the proportion of correctly predicted dependencies among the Top-K predicted dependencies, calculated as Pre@K = MATCH_K / K, where MATCH_K denotes the count of correctly predicted dependencies among the Top-K predicted dependencies.

Recall@K (Rec@K): It is the proportion of correctly predicted dependencies among the ground-truth dependencies, computed as Rec@K = MATCH_K / GT, where GT represents the total count of ground-truth vulnerability-related dependencies.
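Written out from the definitions above, these metrics are a few lines of code; the helpers below are a sketch assuming binary labels and a ranked list of predicted dependencies.

import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def precision_at_k(predicted: list, ground_truth: set, k: int) -> float:
    match_k = sum(1 for dep in predicted[:k] if dep in ground_truth)
    return match_k / k

def recall_at_k(predicted: list, ground_truth: set, k: int) -> float:
    match_k = sum(1 for dep in predicted[:k] if dep in ground_truth)
    return match_k / len(ground_truth) if ground_truth else 0.0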
4.4 Data Split

In this paper, we experiment under the following two settings. (1) Random Split: Following the previous work [6, 65], we randomly split the datasets into disjoint training, validation, and test sets in a ratio of 8:1:1. (2) Time Split: To mitigate the risk of data leakage and effectively evaluate the methods' ability to identify emerging vulnerabilities, we adopt a time-split setting based on the "commit date" of vulnerability patches. We divide the dataset into training, validation, and test sets in an 8:1:1 ratio. Specifically, patches before 2018-03-21 are designated for training, those before 2022-07-21 constitute the validation set, and patches after this date are used for the test set.

4.5 Implementation Details

For the program analysis-, supervised-, and fine-tuning-based methods, we directly use the replication packages and hyper-parameters that have been made publicly accessible. For the prompt-based methods, we downloaded LLaMA (i.e., 7B and 13B) and CodeLlama (i.e., 7B and 13B) from the Hugging Face Hub [24] and deploy them locally with the vLLM [26] framework. For ChatGPT ("gpt-3.5-turbo-0301") and GPT-3.5-instruct ("gpt-3.5-turbo-instruct"), we use the public APIs and the initial parameter settings provided by OpenAI. All evaluations are conducted on a server equipped with four NVIDIA A100-SXM4-40GB GPUs.

5 EXPERIMENTAL RESULTS

5.1 RQ1: Effectiveness in Function-level Vulnerability Detection

5.1.1 Effectiveness in Random Split Setting. To answer RQ1, we compare the four types of vulnerability detection methods, including program analysis-, supervised learning-, fine-tuning-, and prompt-based methods. The results are shown in the Random columns of Table 2.

Table 2: The experimental results of function-level vulnerability detection in random and time split settings. (In the original table, bold cells mark the best performance and grey cells mark the top-3 methods.)

                                       Random                          Time
Type                 Baseline          Precision Recall  F1     MCC    Precision Recall  F1     MCC
Program Analysis     Cppcheck          12.12     1.79    3.12   3.61   19.43     4.34    7.10   7.45
                     Flawfinder        6.54      24.78   10.35  7.65   8.55      32.52   13.54  9.73
                     RATS              7.06      12.54   9.04   5.80   11.18     20.13   14.38  10.14
                     Semgrep           10.36     7.76    8.87   6.64   8.37      6.67    7.42   7.45
Supervised Learning  Devign            38.36     24.26   29.72  28.88  9.41      5.26    6.75   3.95
                     Reveal            5.95      33.35   10.08  7.96   7.20      24.68   10.99  5.83
Fine-tuning          CodeBERT          51.45     31.79   39.30  39.09  13.85     2.86    4.74   4.56
                     CodeT5            51.83     35.97   42.47  41.80  17.23     5.40    8.23   7.58
                     UniXcoder         63.64     18.81   29.03  33.66  13.36     4.13    6.31   5.31
                     PILOT             49.01     33.28   39.64  38.96  4.26      91.63   8.15   2.74
                     EPVD              46.84     35.33   40.28  39.18  12.76     4.41    6.55   5.39
                     LineVul           47.95     34.93   40.41  39.44  12.79     2.97    4.82   4.31
                     PDBERT            51.89     34.78   41.64  41.11  34.97     5.30    9.20   12.32
Prompt               LLaMA-7B          2.73      68.66   5.25   -1.51  4.17      70.66   7.88   0.87
                     LLaMA-13B         2.87      57.46   5.47   0.00   3.91      57.84   7.33   -0.90
                     CodeLlama-7B      0.88      72.09   1.74   -2.94  2.48      75.65   4.81   -3.33
                     CodeLlama-13B     2.26      50.45   4.33   -4.98  3.28      53.60   6.18   -5.51
                     GPT-3.5-instruct  4.02      53.58   7.48   5.37   5.10      46.93   9.20   4.09
                     ChatGPT           7.38      32.55   12.03  10.44  9.69      26.69   14.22  10.13

Fine-tuning-based methods demonstrate superior performance compared to the other methods in the random split. Specifically, these methods yield average results of 51.80% in precision, 38.97% in F1 score, and 39.03% in MCC in the random setting. We also observe that fine-tuning-based methods fall short in terms of recall, with an average of 32.13%, compared to prompt-based methods, which achieve an average recall of 55.80%. In the broader evaluation of Top-3 performance across the four metrics (comprising 12 instances), these methods demonstrate superiority in 9 out of 12 cases, achieving the highest precision, F1, and MCC, with scores of 63.64%, 42.47%, and 41.80%, respectively.

The program analysis- and supervised learning-based methods consistently exhibit inferior performance across all metrics. Program analysis-based methods often target only specific vulnerability types and consequently yield poor results in general vulnerability detection. The pre-trained models utilized in fine-tuning and prompt-based methods learn more general knowledge during the pre-training phase, thereby endowing them with superior efficacy compared to supervised learning-based methods.
5.1.2 Effectiveness in Time Split Setting. We also evaluate all baseline methods in the time split setting to comprehensively verify their effectiveness in real-world scenarios without data leakage. The results are shown in the Time columns of Table 2.

Degradation in the performance of supervised learning- and fine-tuning-based methods within the time split. Analyzing the results in Table 2, we observe that fine-tuning-based methods exhibit a substantial decrease in all four metrics, showing an average decrement of 36.20% in precision, 15.46% in recall, 32.11% in F1 score, and 33.00% in MCC. Similarly, the supervised learning-based methods also demonstrate a decline in performance across the four metrics by 13.85%, 13.84%, 11.03%, and 13.53%, respectively. This can be attributed to their heavy reliance on extracting semantics from historical data, rather than directly capturing vulnerability patterns. However, most vulnerabilities are discovered much later than they are introduced. Consequently, we conclude that these methods struggle to identify newly emerging vulnerabilities in real-world scenarios.

The performance of program analysis- and prompt-based methods is not influenced under the time-split setting. Our empirical results indicate that program analysis achieves superior performance due to its predefined rules, with the average F1 score and MCC moving from 7.85% and 5.93% in the random setting to 10.61% and 8.69% in the time-split setting, respectively. Besides, ChatGPT demonstrates near-optimal performance on both the F1 score and MCC metrics, at 14.22% and 10.13%, respectively. Notably, ChatGPT is trained solely on data up to September 2021, which avoids the data leakage problem. ChatGPT's performance can be attributed to its vast training corpus containing general knowledge, which enables it to maintain consistent performance across diverse data distributions.

Summary for RQ1: Experiment results reveal that fine-tuning-based methods exhibit superior performance in the random split setting. We also observe a performance degradation in supervised learning- and fine-tuning-based baselines within the time-split setting. In addition, the program analysis- and prompt-based methods are not influenced by the time-split setting, thereby preserving efficacy in real-world scenarios.

5.2 RQ2: Effectiveness in Vulnerability-related Dependency Prediction

To answer RQ2, we evaluate the performance of the three types of retrieval methods under both the random-split and time-split settings. We present the top-1, 3, 5 Pre@k and Rec@k in Table 3.

Table 3: The experimental results of vulnerability-related dependency prediction in the random and time-split settings. (In the original table, shaded cells mark the best method per metric and bold cells the best performance.)

                                 Random                                      Time
Type      Baseline               Pre@1  Pre@3  Pre@5  Rec@1  Rec@3  Rec@5   Pre@1  Pre@3  Pre@5  Rec@1  Rec@3  Rec@5
Random    Random                 56.90  68.60  75.26  30.48  55.46  68.67   55.36  68.11  78.67  32.26  62.42  75.77
Lexical   Jaccard Similarity     69.70  70.68  77.94  37.34  57.14  71.10   68.04  72.33  81.33  39.65  66.29  78.33
          Edit Similarity        67.27  73.09  76.51  36.04  59.09  69.81   67.77  76.18  86.17  39.49  69.82  82.99
          BM25                   62.42  70.68  76.87  33.44  57.14  70.13   63.91  71.45  80.50  37.24  65.49  77.53
          BM25+                  67.27  70.28  77.22  36.04  56.82  70.45   67.22  71.45  80.33  39.17  65.49  77.37
Semantic  CodeBERT               68.48  72.69  75.44  36.69  58.77  68.83   64.19  72.33  78.00  37.40  66.29  75.12
          UniXcoder              61.82  69.48  76.87  33.12  56.17  70.13   68.04  73.03  81.00  39.65  66.93  78.01

Superior performance of lexical-based methods for identifying dependencies. The experimental results show that both lexical- and semantic-based techniques enhance performance, yielding average improvements of 10.21% in Pre@1 and 5.74% in Rec@1 for identifying dependencies. Notably, the lexical-based retrieval techniques contribute the largest improvements, with consistent improvements of 3.44%∼18.83% and 3.45%∼18.91% in Pre@k and Rec@k (k = 1, 3, 5), respectively. When k = 1, the performance benefits more from the knowledge associated with the retrieval techniques. For instance, all retrieval methods show an average increase of 16.27% in Pre@1, 3.72% in Pre@3, and 2.06% in Pre@5 compared to the random method. Moreover, the semantic-based retrieval methods show moderate performance. This may be attributed to the pre-trained models' focus on general semantics rather than domain-specific vulnerability knowledge, indicating that incorporating vulnerability-specific characteristics in retrieval methods is beneficial.

Jaccard Similarity and Edit Similarity perform best in the random and time split settings, respectively. As shown in Table 3, Jaccard Similarity is the most effective method under the random split setting, demonstrating superiority in 4 out of 6 cases. It achieves the best performance of 69.70% in Pre@1 and 37.34% in Rec@1, respectively. Edit Similarity performs best in the time split setting, where it outperforms Jaccard Similarity by 3.85% and 3.53% with respect to Pre@3 and Rec@3, respectively. This finding implies that retrieving common tokens between code snippets and vulnerability-related dependencies is effective for identifying dependencies.

Summary for RQ2: Our empirical analysis indicates that lexical-based methods yield superior results in identifying dependencies. Specifically, Jaccard Similarity and Edit Similarity achieve the best performance in the random and time-split settings, respectively.

5.3 RQ3: Effectiveness in Repository-level Vulnerability Detection

This research question aims to investigate whether integrating vulnerability-related dependencies can enhance the performance of existing vulnerability detection methods. We employ two strategies, "Upper" and "Prediction", to evaluate the baselines' performance. "Upper" refers to incorporating the labeled vulnerability-related dependencies as input for vulnerability detection. "Prediction" uses the most effective retrieval method identified in RQ2 (i.e., Jaccard Similarity in the random split setting and Edit Similarity in the time-split setting). For repository-level vulnerability detection, due to the limited input length, we only evaluate the fine-tuning- and prompt-based methods. The experimental results are presented in Table 4.

The incorporation of vulnerability-related dependency contexts improves vulnerability detection performance. We observe that the repository-level approaches that utilize the "Upper" strategy generally outperform the function-level methods mentioned previously. Specifically, when applying the "Upper" strategy to the fine-tuning-based methods, we observe performance enhancements in five out of six baselines in VulEval. Except for PILOT, these repository-level methods demonstrate an average improvement over the corresponding baselines of 7.43% in precision, 3.38% in recall, 4.91% in F1 score, and 5.24% in MCC. This suggests that incorporating vulnerability-related dependencies provides additional contextual information, which allows the model to leverage a more comprehensive understanding of the code repository. The observed performance decline in PILOT may be attributed to its weakly supervised learning, which can be a consequence of an excess of unlabeled samples in the dataset.

Larger models benefit more from the repository-level vulnerability-related knowledge. Our experimental findings indicate that the benefits derived from repository-level information are marginal for LLaMA and CodeLlama. This may be due to the limitations of these models' abilities to capture vulnerability patterns. In contrast, ChatGPT exhibits performance improvements across all four evaluation metrics when combining repository-level dependencies, with increases of 11.60%, 24.43%, 14.48%, and 26.35%, respectively. These results suggest that models with a larger foundational architecture possess superior comprehension abilities when dealing with extensive textual input.

It is imperative to explore more effective retrieval methods for identifying dependencies. Despite utilizing the most effective retrieval approach identified in RQ2 for identifying dependencies, combining it with existing repository-level vulnerability detection techniques does not greatly enhance the performance. For instance, under the random setting, ChatGPT employing Edit Similarity yields improvements of 4.07%, 10.63%, 5.24%, and 8.43%, respectively, across the four metrics. However, using Edit Similarity under the time-split setting is not effective. These observations underscore the need for advancements in retrieval strategies to better capture and leverage vulnerability-related dependencies.
Table 4: The experimental results of repository-level vulnerability detection in the two split settings. (In the original table, dark and light shaded cells mark the best performance using the vulnerability-related and predicted dependencies, respectively.)

                                                Random                         Time
Type         Baseline          Strategy    Precision Recall  F1     MCC    Precision Recall  F1     MCC
Fine-tuning  CodeBERT          Upper       56.75     33.88   42.43  42.60  26.63     5.61    9.27   10.63
                               Prediction  50.51     29.61   37.34  37.32  20.88     4.03    6.75   7.57
             CodeT5            Upper       52.82     37.76   44.04  43.29  23.59     9.75    13.79  12.93
                               Prediction  49.66     32.14   39.02  38.54  17.03     5.72    8.56   7.73
             UniXcoder         Upper       57.53     31.34   40.48  41.25  32.68     7.10    11.66  13.68
                               Prediction  54.39     28.57   37.46  38.17  25.22     6.14    9.88   10.72
             PILOT             Upper       69.78     14.48   23.98  31.01  41.09     11.23   17.64  21.56
                               Prediction  68.00     12.65   21.33  28.57  20.00     2.33    4.17   5.57
             LineVul           Upper       57.63     32.69   41.71  42.18  20.81     6.57    9.98   9.67
                               Prediction  55.71     29.02   38.16  38.98  18.68     5.40    8.38   8.08
             PDBERT            Upper       57.72     34.03   42.82  43.09  47.08     11.97   19.09  22.26
                               Prediction  54.57     28.42   37.38  38.14  26.43     3.92    6.83   8.82
Prompt       LLaMA-7B          Upper       1.71      5.37    2.59   -2.22  2.14      4.45    2.89   -2.94
                               Prediction  1.55      4.91    2.36   -2.54  1.70      3.60    2.31   -3.65
             LLaMA-13B         Upper       2.04      15.22   3.60   -2.65  2.32      11.12   3.84   -4.32
                               Prediction  2.06      15.33   3.63   -2.60  2.35      11.12   3.88   -4.21
             CodeLlama-7B      Upper       2.35      29.70   4.36   -2.41  2.67      23.83   4.80   -5.31
                               Prediction  2.11      26.79   3.91   -3.56  2.65      23.20   4.76   -5.29
             CodeLlama-13B     Upper       2.19      27.16   4.05   -3.09  2.83      24.05   5.06   -4.51
                               Prediction  2.05      24.85   3.79   -3.69  2.94      24.79   5.25   -4.10
             GPT-3.5-instruct  Upper       3.92      49.70   7.27   4.70   5.18      56.46   9.49   5.07
                               Prediction  3.78      48.07   7.00   4.02   5.03      42.27   8.98   3.53
             ChatGPT           Upper       8.61      41.19   14.25  13.69  10.44     32.52   15.80  12.30
                               Prediction  7.68      36.01   12.66  11.32  8.83      26.59   13.26  9.03
Summary for RQ3: The experiment results reveal that incorporating contexts related to vulnerabilities enhances the performance of vulnerability detection. It is noteworthy that larger models particularly gain improvements from the integration of vulnerability knowledge at the repository level. In addition, it becomes essential to develop more effective retrieval techniques for identifying vulnerability-related dependencies.

5.4 RQ4: Effectiveness in Each CWE Type Vulnerability Detection

To answer RQ4, we select four methods from the different types of methods (i.e., RATS, Devign, PDBERT, and ChatGPT), which achieve the best overall performance within their types. We then evaluate these methods on CWE-190, CWE-400, CWE-415, CWE-416, and CWE-787. These vulnerability types represent the most recurrent vulnerabilities, highlighting their elevated potential for software damage. For each type, we deliberately choose 200 representative samples within the time-split setting to avoid the problem of data leakage. Figure 4 shows the number of correctly predicted samples by the four baselines on each of the vulnerability types.

[Figure 4: The experimental results of several vulnerability types, including CWE-190, CWE-400, CWE-415, CWE-416, and CWE-787, shown as overlapping sets of correctly predicted samples. The green, blue, red, and yellow circles denote the results of Devign, RATS, PDBERT, and ChatGPT, respectively.]

The superior performance of ChatGPT for each CWE type of vulnerability detection. In the domain of singular vulnerability detection, our analysis reveals that ChatGPT demonstrates superior performance, correctly identifying 668 samples and achieving a 47.53% F1 score on average. For example, within CWE-416, ChatGPT can correctly detect 131 samples and achieves a 45.67% F1 score, demonstrating effectiveness superior to general vulnerability detection. Moreover, ChatGPT exclusively identified 20 samples, while only 2, 2, and 1 samples are exclusively detected by RATS, Devign, and PDBERT, respectively. Therefore, it is practical to leverage LLMs to design a detector for specific CWE type vulnerabilities in real-world scenarios.

It is worthwhile to explore how to combine the vulnerability detection capabilities of different baselines. Among the 200 samples in CWE-787, the four baselines correctly detect 122 samples on average. The complementary strengths of the four models are evidenced by their ability to collectively predict 181 samples correctly. Concurrently, there exists a subset of 75 samples that are detectable by all four baselines, illustrating the overlap in their detection capabilities. In the future, it is worthwhile to explore how to combine the vulnerability detection capabilities of different methods.

Summary for RQ4: The experimental results reveal the superior performance of ChatGPT for each CWE type of vulnerability detection. Besides, it is worthwhile to explore how to combine different methods' capabilities for software vulnerability detection in the future.

6 DISCUSSION

CodeGeeX [64], StarCoder [28], and GPT-4 [40]. Future research will conduct more comprehensive experiments across broader baselines.
6.1 ImplicationsofFindings GeneralizabilityonOtherProgrammingLanguages.Inthispaper, Inthissection,wediscusstheimplicationsofourworkforsoft- ourexperimentalanalysisfocusessolelyonC/C++programming warevulnerabilitydetection.Ourexperimentalresultsalsoshow languages,excludingotherpopularlanguagessuchasJavaand potentialresearchdirectionsintheeraofsoftwarevulnerability Python. However, the system of VulEval can be generalized to detection.Specifically: otherprogramminglanguagesbecausetheapproachdoesnotrely onlanguage-specificfeatures.Infutureresearch,weintendtoeval- (1) ForRQ1,fine-tuning-basedmethodsexhibitsuperiorperfor- uatetheefficacyofVulEvalinthecontextofabroaderrangeof manceintherandomsplitsetting.Theyneedtoconsider programminglanguages. timefactorstobemoreeffectiveinreal-worldscenarios.The Implementationofbaselines.Toreplicatethebaselines,wemetic- program-analysis-andprompt-basedmethodseffectively ulouslyusethemethodologiesdelineatedintheopen-sourcecodes arenotaffectedbytime-splitsetting.LeveragingLLMsand andtheoriginalpapers.However,owingtotheunavailabilityof prompttechniquescanbeasolutionforalleviatingperfor- theimplementationdetailsandhyper-parametersforDevign[65], mancedegradationbytime-splitsetting,therebyenhancing ourreproductionisguidedbyReveal’simplementation[6]. theapplicabilityinreal-worldscenarios. (2) For RQ2, using lexical-based methods for identifying vulnerability-relateddependencyleadstorelativelybetter 7 RELATEDWORK performancethanothersemantic-basedmethods.However, the number of dependencies is not consistent across the WehaveelaboratedonthevulnerabilitydetectionmethodsinSec- |
codesamples.Therefore,howtoautomaticallyidentifythe tion2,andfocusonillustratingthevulnerabilitydatasetsinthis dependenciesneedstobefurtherinvestigated. section.Thesedatasetscanbebroadlyclassifiedintothreegroups: (3) ForRQ3,incorporatingcontextsrelatedtovulnerabilitiesin function-,slice-andfile-level.Thefunction-leveldatasets[8,16,65] repository-levelvulnerabilitydetectionenhancestheperfor- utilize cases crafted artificially or extract the functional source mancecomparedwithfunction-levelvulnerabilitydetection. code from the real-world scenarios code snippets. For example, Moreover,thelargerLLMsbenefitmorefromtherepository- SARD[39]constructssamplesbymanualcheckingandindustrial level vulnerability-related knowledge. The retrieval tech- production.Reveal[6]collectspatchesfromopen-sourcereposi- niquesforpredictingvulnerability-relateddependencyare toriesandextractsfunction-leveldatafromcodechangesinthe onemajorbottleneckforimprovingtheperformanceofcur- patches.Theprimarylimitationofthesefunction-leveldatasetsis rentrepository-levelapproachesandremainunsolved. constrainedbythecontext-providedcodesegments.Theslice-level (4) ForRQ4,ChatGPTismoreeffectivethanothervulnerability datasets[10,43]typicallyemploypre-definedrulesorstatictoolsto detectionmethodsfordetectingthespecificCWEvulnera- extractDFGandCFGfromthesourcecode.Forexample,Vuldeep- bilitytypes.Inaddition,itisworthwhiletoexplorehowto ecker[35]constructsCodeGadgetDatabase(CGD)bygenerating combinedifferentmethods’capabilitiesforsoftwarevulner- codegadgets,consistingofCWE-119andCWE-399vulnerabilities. abilitydetectioninthefuture. 𝜇Vuldeepecker[66]expandsuponthisapproachbycollectingthe40 typesofvulnerabilitiesandtheircorrespondinglabels.Thefile-level datasets[13,20,27]generallyprovideentirefilesfromvulnerability 6.2 ThreatstoValidity patches.Suchdatasetspresentacomprehensivesnapshotofthe RepresentativenessofBaselinesSelection.Apotentialthreattothe sourcecode,whichisbeneficialindetectingthebroadercontext validity of our study arises from the representativeness of the withinwhichvulnerabilitiesmayexist.Forexample,CrossVul[38] baselines employed in our experiments for vulnerability detec- constructsthedatasetspanning40programminglanguagesand tion.Owingtotheconstraintsposedbycomputationalresources 1,675projects,whileitonlyprovidesfile-levelsourcecode. andtheexcessivelyexpensivecostsassociatedwiththeusageof However, the previous works more focus on collecting APIs,werefrainedfromconductingexperimentsinvolvinga34B function/file-leveldata,andignorerepository-leveldependencies modelsize,aswellasseveralothercontemporarymodelsincluding whichareimportantfordetectinginter-proceduralvulnerability.VulEval:TowardsRepository-LevelEvaluationofSoftwareVulnerabilityDetection arxiv,April,2024 8 CONCLUSIONANDFUTUREWORK We?CoRRabs/2310.09810(2023). https://doi.org/10.48550/ARXIV.2310.09810 arXiv:2310.09810 Inthispaper,weproposeaholisticmulti-levelevaluationsystem [20] SeyedMohammadGhaffarianandHamidRezaShahriari.2021.Neuralsoftware VulEval,aimingatevaluatingthesoftwarevulnerabilitydetection vulnerabilityanalysisusingrichintermediategraphrepresentationsofprograms. performanceofinter-andintra-proceduralvulnerabilitiessimul- Inf.Sci.553(2021),189–207. [21] DayaGuo,ShuaiLu,NanDuan,YanlinWang,MingZhou,andJianYin.2022. 
taneously.Specifically,VulEvalconsistsofthreeevaluationtasks: UniXcoder:UnifiedCross-ModalPre-trainingforCodeRepresentation.InPro- function-levelvulnerabilitydetection,vulnerability-relateddepen- ceedingsofthe60thAnnualMeetingoftheAssociationforComputationalLinguistics (Volume1:LongPapers),ACL2022,Dublin,Ireland,May22-27,2022,Smaranda dencypredictionandrepository-levelvulnerabilitydetection.VulE- Muresan,PreslavNakov,andAlineVillavicencio(Eds.).AssociationforCompu- valalsoconsistsofalarge-scalevulnerabilitydataset.Byevaluating tationalLinguistics,7212–7225. 19vulnerabilitydetectionmethodsonthedatasplitrandomlyand [22] DayaGuo,ShuoRen,ShuaiLu,ZhangyinFeng,DuyuTang,ShujieLiu,Long Zhou,NanDuan,AlexeySvyatkovskiy,ShengyuFu,MicheleTufano,ShaoKun bytimerespectively,weobservethatincorporatingvulnerability- Deng,ColinB.Clement,DawnDrain,NeelSundaresan,JianYin,DaxinJiang, relateddependenciesfacilitatesrepository-levelvulnerabilityde- andMingZhou.2021.GraphCodeBERT:Pre-trainingCodeRepresentationswith tectionperformancecomparedwithfunction-levelvulnerability DataFlow.In9thInternationalConferenceonLearningRepresentations,ICLR2021, VirtualEvent,Austria,May3-7,2021.OpenReview.net. detection.Ouranalysishighlightsthecurrentprogressandfuture [23] XinyiHou,YanjieZhao,YueLiu,ZhouYang,KailongWang,LiLi,XiapuLuo, directionsforsoftwarevulnerabilitydetection.Inthefuture,we DavidLo,JohnC.Grundy,andHaoyuWang.2023.LargeLanguageModelsfor SoftwareEngineering:ASystematicLiteratureReview. CoRRabs/2308.10620 willexploremoreaspectsofrepository-levelvulnerabilitydetection (2023). |
suchasdesigningretrievalmethodsforidentifyingvulnerability- [24] Huggingfacehub.2023.HuggingFace.https://huggingface.co/. relateddependenciesandintegratingdependencyinformationin [25] Israel.[n.d.].Checkmarx. https://www.checkmarx.com/. [26] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, prompts. CodyHaoYu,JosephGonzalez,HaoZhang,andIonStoica.2023. Efficient MemoryManagementforLargeLanguageModelServingwithPagedAttention. REFERENCES InProceedingsofthe29thSymposiumonOperatingSystemsPrinciples,SOSP2023, Koblenz,Germany,October23-26,2023,JasonFlinn,MargoI.Seltzer,PeterDr- [1] 2023.Tree-sitter. https://tree-sitter.github.io/tree-sitter/ uschel,AntoineKaufmann,andJonathanMace(Eds.).ACM,611–626. [2] [n.d.].CWE-20:ImproperInputValidation. https://cwe.mitre.org/data/definitions/ [27] JianLi,PinjiaHe,JiemingZhu,andMichaelR.Lyu.2017. SoftwareDefect 20.html PredictionviaConvolutionalNeuralNetwork.InQRS.IEEE,318–328. [3] [n.d.].RoughAuditToolforSecurity. https://code.google.com/archive/p/rough- [28] RaymondLi,LoubnaBenAllal,YangtianZi,NiklasMuennighoff,DenisKocetkov, auditing-tool-for-security. ChenghaoMou,MarcMarone,ChristopherAkiki,JiaLi,JennyChim,QianLiu, [4] SicongCao,XiaobingSun,LiliBo,YingWei,andBinLi.2021. BGNN4VD: EvgeniiZheltonozhskii,TerryYueZhuo,ThomasWang,OlivierDehaene,Mishig ConstructingBidirectionalGraphNeural-NetworkforVulnerabilityDetection. Davaadorj,JoelLamy-Poirier,JoãoMonteiro,OlehShliazhko,NicolasGontier, Inf.Softw.Technol.136(2021),106576. NicholasMeade,ArmelZebaze,Ming-HoYee,LogeshKumarUmapathi,Jian [5] SicongCao,XiaobingSun,LiliBo,RongxinWu,BinLi,andChuanqiTao.2022. Zhu,BenjaminLipkin,MuhtashamOblokulov,ZhiruoWang,RudraMurthyV, MVD:Memory-RelatedVulnerabilityDetectionBasedonFlow-SensitiveGraph JasonStillerman,SivaSankalpPatel,DmitryAbulkhanov,MarcoZocca,Manan NeuralNetworks.In44thIEEE/ACM44thInternationalConferenceonSoftware Dey,ZhihanZhang,NourMoustafa-Fahmy,UrvashiBhattacharyya,WenhaoYu, Engineering,ICSE2022,Pittsburgh,PA,USA,May25-27,2022.ACM,1456–1468. SwayamSingh,SashaLuccioni,PauloVillegas,MaximKunakov,FedorZhdanov, [6] SaikatChakraborty,RahulKrishna,YangruiboDing,andBaishakhiRay.2020. ManuelRomero,TonyLee,NadavTimor,JenniferDing,ClaireSchlesinger,Hailey Deep Learning based Vulnerability Detection: Are We There Yet? CoRR Schoelkopf,JanEbert,TriDao,MayankMishra,AlexGu,JenniferRobinson, abs/2009.07235(2020). CarolynJaneAnderson,BrendanDolan-Gavitt,DanishContractor,SivaReddy, [7] ChatGPT.2022.ChatGPT.https://chat.openai.com/. DanielFried,DzmitryBahdanau,YacineJernite,CarlosMuñozFerrandis,Sean [8] YizhengChen,ZhoujieDing,LamyaAlowain,XinyunChen,andDavidA.Wag- Hughes,ThomasWolf,ArjunGuha,LeandrovonWerra,andHarmdeVries.2023. ner.2023.DiverseVul:ANewVulnerableSourceCodeDatasetforDeepLearning StarCoder:maythesourcebewithyou!CoRRabs/2305.06161(2023). BasedVulnerabilityDetection.InRAID.ACM,654–668. [29] WenLi,HaipengCai,YuleiSui,andDavidManz.2020.PCA:memoryleakdetec- [9] XiaoCheng,XuNie,NingkeLi,HaoyuWangandZhengZheng,andYuleiSui. tionusingpartialcall-pathanalysis.InESEC/FSE’20:28thACMJointEuropean 2022. HowAboutBug-TriggeringPaths?-UnderstandingandCharacterizing SoftwareEngineeringConferenceandSymposiumontheFoundationsofSoftware Learning-BasedVulnerabilityDetectors.IEEETransactionsonDependableand Engineering,VirtualEvent,USA,November8-13,2020,PremDevanbu,MyraB. SecureComputing. Cohen,andThomasZimmermann(Eds.).ACM,1621–1625. 
[10] XiaoCheng,HaoyuWang,JiayiHua,GuoaiXu,andYuleiSui.2021.DeepWukong: [30] YueLi,TianTan,AndersMøller,andYannisSmaragdakis.2018.Precision-guided StaticallyDetectingSoftwareVulnerabilitiesUsingDeepGraphNeuralNetwork. contextsensitivityforpointeranalysis.Proc.ACMProgram.Lang.2,OOPSLA ACMTrans.Softw.Eng.Methodol.30,3(2021),38:1–38:33. (2018),141:1–141:29. [11] Cppcheck-team.[n.d.].“Cppcheck”. http://cppcheck.sourceforge.net/.. [31] YueLi,TianTan,AndersMoller,andYannisSmaragdakis.2020.APrincipled [12] RolandCroft,MuhammadAliBabar,andM.MehdiKholoosi.2023.DataQuality ApproachtoSelectiveContextSensitivityforPointerAnalysis. ACMTrans. forSoftwareVulnerabilityDatasets.CoRRabs/2301.05456(2023). Program.Lang.Syst.42,2(2020),10:1–10:40. [13] HoaKhanhDam,TrangPham,ShienWeeNg,TruyenTran,JohnC.Grundy, [32] YujiaLi,DanielTarlow,MarcBrockschmidt,andRichardS.Zemel.2016.Gated |
AdityaGhose,TaeksuKim,andChul-JooKim.2019.Lessonslearnedfromusing GraphSequenceNeuralNetworks.In4thInternationalConferenceonLearning adeeptree-basedmodelforsoftwaredefectpredictioninpractice.InMSR.IEEE Representations,ICLR2016. /ACM,46–57. [33] YiLi,ShaohuaWang,andTienN.Nguyen.2021.Vulnerabilitydetectionwith [14] DavidEvansandDavidLarochelle.2002.ImprovingSecurityUsingExtensible fine-grainedinterpretations.InESEC/SIGSOFTFSE.ACM,292–303. LightweightStaticAnalysis.IEEESoftw.19,1(2002),42–51. [34] ZhenLi,DeqingZou,ShouhuaiXu,HaiJin,YaweiZhu,andZhaoxuanChen.2022. [15] Facebook.[n.d.].Infer. https://fbinfer.com/. SySeVR:AFrameworkforUsingDeepLearningtoDetectSoftwareVulnerabilities. [16] JiahaoFan,YiLi,ShaohuaWang,andTienN.Nguyen.2020. AC/C++Code IEEETrans.DependableSecur.Comput.19,4(2022),2244–2258. VulnerabilityDatasetwithCodeChangesandCVESummaries.InMSR’20: [35] ZhenLi,DeqingZou,ShouhuaiXu,XinyuOu,HaiJin,SujuanWang,Zhijun 17thInternationalConferenceonMiningSoftwareRepositories,Seoul,Republicof Deng,andYuyiZhong.2018.VulDeePecker:ADeepLearning-BasedSystemfor Korea,29-30June,2020,SunghunKim,GeorgiosGousios,SarahNadi,andJoseph VulnerabilityDetection.In25thAnnualNetworkandDistributedSystemSecurity Hejderup(Eds.).ACM,508–512. Symposium,NDSS2018,SanDiego,California,USA,February18-21,2018.The [17] ZhangyinFeng,DayaGuo,DuyuTang,NanDuan,XiaochengFeng,MingGong, InternetSociety. LinjunShou,BingQin,TingLiu,DaxinJiang,andMingZhou.2020.CodeBERT: [36] ZhongxinLiu,ZhijieTang,JunweiZhang,XinXia,andXiaohuYang.2024.Pre- APre-TrainedModelforProgrammingandNaturalLanguages.InFindingsof trainingbyPredictingProgramDependenciesforVulnerabilityAnalysisTasks. theAssociationforComputationalLinguistics:EMNLP2020,OnlineEvent,16-20 CoRRabs/2402.00657(2024). November2020(FindingsofACL,Vol.EMNLP2020),TrevorCohn,YulanHe,and [37] mimecast.[n.d.]].ThehistoryofClopransomware. https://www.mimecast.com/ YangLiu(Eds.).AssociationforComputationalLinguistics,1536–1547. content/clop-ransomware/ [18] MichaelFuandChakkritTantithamthavorn.2022.LineVul:ATransformer-based [38] Georgios Nikitopoulos, Konstantina Dritsa, Panos Louridas, and Dimitris Line-LevelVulnerabilityPrediction.InMSR.ACM,608–620. Mitropoulos.2021.CrossVul:across-languagevulnerabilitydatasetwithcommit [19] MichaelFu,ChakkritTantithamthavorn,VanNguyen,andTrungLe.2023. ChatGPTforVulnerabilityDetection,Classification,andRepair:HowFarArearxiv,April,2024 Xin-ChengWen,XinchenWang,YujiaChen,RuidaHu,DavidLo,andCuiyunGao data.InESEC/SIGSOFTFSE.ACM,1565–1569. Taylor,AdinaWilliams,JianXiangKuan,PuxinXu,ZhengYan,IliyanZarov, [39] NIST.2022.SARD:Softwareassurancereferencedataset. https://samate.nist. YuchenZhang,AngelaFan,MelanieKambadur,SharanNarang,AurélienRo- gov/SRD/index.php. driguez,RobertStojnic,SergeyEdunov,andThomasScialom.2023. Llama2: [40] OpenAI.2023.GPT-4TechnicalReport.CoRRabs/2303.08774(2023). OpenFoundationandFine-TunedChatModels.CoRRabs/2307.09288(2023). [41] YunPeng,ChaozhengWang,WenxuanWang,CuiyunGao,andMichaelR.Lyu. [53] AndrewTrotman,AnttiPuurula,andBlakeBurgess.2014.ImprovementstoBM25 2023.GenerativeTypeInferenceforPython.CoRRabs/2307.09163(2023). andLanguageModelsExamined.InProceedingsofthe2014AustralasianDocument [42] SergeyPoznyakoff.2005."GNUcflow". https://www.gnu.org/software/cflow/. ComputingSymposium,ADCS2014,Melbourne,VIC,Australia,November27-28, [43] MichaelPradelandKoushikSen.2018.DeepBugs:alearningapproachtoname- 2014,J.ShaneCulpepper,LaurenceAnthonyF.Park,andGuidoZuccon(Eds.). basedbugdetection.Proc.ACMProgram.Lang.2,OOPSLA(2018),147:1–147:25. 
ACM,58. [44] r2c.2021.“Semgrep”. https://semgrep.dev. [54] ChaozhengWang,ZongjieLi,YunPeng,ShuzhengGao,SirongChen,Shuai [45] StephenE.RobertsonandHugoZaragoza.2009. TheProbabilisticRelevance Wang,CuiyunGao,andMichaelR.Lyu.2023.REEF:AFrameworkforCollecting Framework:BM25andBeyond.Found.TrendsInf.Retr.3,4(2009),333–389. Real-WorldVulnerabilitiesandFixes.InASE.IEEE,1952–1962. [46] BaptisteRozière,JonasGehring,FabianGloeckle,StenSootla,ItaiGat,Xi- [55] XiaomengWang,TaoZhang,RunpuWu,WeiXin,andChangyuHou.2018. aoqingEllenTan,YossiAdi,JingyuLiu,TalRemez,JérémyRapin,Artyom CPGVA:CodePropertyGraphbasedVulnerabilityAnalysisbyDeepLearning. Kozhevnikov,IvanEvtimov,JoannaBitton,ManishBhatt,CristianCanton-Ferrer, In10thInternationalConferenceonAdvancedInfocommTechnology,ICAIT2018, |
AaronGrattafiori,WenhanXiong,AlexandreDéfossez,JadeCopet,FaisalAzhar, Stockholm,Sweden,August12-15,2018.IEEE,184–188. HugoTouvron,LouisMartin,NicolasUsunier,ThomasScialom,andGabrielSyn- [56] YueWang,WeishiWang,ShafiqR.Joty,andStevenC.H.Hoi.2021. CodeT5: naeve.2023.CodeLlama:OpenFoundationModelsforCode.CoRRabs/2308.12950 Identifier-awareUnifiedPre-trainedEncoder-DecoderModelsforCodeUnder- (2023). standingandGeneration.InProceedingsofthe2021ConferenceonEmpirical [47] RebeccaL.Russell,LouisY.Kim,LeiH.Hamilton,TomoLazovich,JacobHarer, MethodsinNaturalLanguageProcessing,EMNLP2021,VirtualEvent/Punta OnurOzdemir,PaulM.Ellingwood,andMarcW.McConley.2018.Automated Cana,DominicanRepublic,7-11November,2021,Marie-FrancineMoens,Xuanjing VulnerabilityDetectioninSourceCodeUsingDeepRepresentationLearning.In Huang,LuciaSpecia,andScottWen-tauYih(Eds.).AssociationforComputational ICMLA.IEEE,757–762. Linguistics,8696–8708. [48] Statista.2024. NumberofcommonITsecurityvulnerabilitiesandexposures [57] Xin-ChengWen,XinchenWang,CuiyunGao,ShaohuaWang,YangLiu,and (CVEs)worldwidefrom2009to2024YTD. https://www.statista.com/statistics/ ZhaoquanGu.2023. WhenLessisEnough:PositiveandUnlabeledLearning 500755/worldwide-common-vulnerabilities-and-exposures/ ModelforVulnerabilityDetection.CoRRabs/2308.10523(2023). [49] AlexeySvyatkovskiy,ShaoKunDeng,ShengyuFu,andNeelSundaresan.2020. [58] DavidA.Wheeler.[n.d.].Flawfinder. https://dwheeler.com/flawfinder/ IntelliCodecompose:codegenerationusingtransformer.InESEC/FSE’20:28th [59] WhiteSource.2023.“Mendbolt”. https://www.mend.io/free-developer-tools/. ACMJointEuropeanSoftwareEngineeringConferenceandSymposiumonthe [60] FangWu,JigangWang,JiqiangLiu,andWeiWang.2017.Vulnerabilitydetection FoundationsofSoftwareEngineering,VirtualEvent,USA,November8-13,2020, withdeeplearning.In20173rdIEEEinternationalconferenceoncomputerand PremDevanbu,MyraB.Cohen,andThomasZimmermann(Eds.).ACM,1433– communications(ICCC).IEEE,1298–1302. 1443. [61] YuemingWu,DeqingZou,ShihanDou,WeiYang,DuoXu,andHaiJin.2022. [50] RahulTelangandSunilWattal.2007. AnEmpiricalAnalysisoftheImpactof VulCNN:AnImage-inspiredScalableVulnerabilityDetectionSystem.In44th SoftwareVulnerabilityAnnouncementsonFirmStockPrice.IEEETransactions IEEE/ACM44thInternationalConferenceonSoftwareEngineering,ICSE2022,Pitts- onSoftwareEngineering33,8(2007),544–557. https://doi.org/10.1109/TSE.2007. burgh,PA,USA,May25-27,2022.ACM,2365–2376. 70712 [62] PeipeiXia,LiZhang,andFanzhangLi.2015. Learningsimilaritywithcosine [51] HugoTouvron,ThibautLavril,GautierIzacard,XavierMartinet,Marie-Anne similarityensemble.Inf.Sci.307(2015),39–52. Lachaux,TimothéeLacroix,BaptisteRozière,NamanGoyal,EricHambro,Faisal [63] JunweiZhang,ZhongxinLiu,XingHu,XinXia,andShanpingLi.2023.Vulnera- Azhar,AurélienRodriguez,ArmandJoulin,EdouardGrave,andGuillaumeLam- bilityDetectionbyLearningFromSyntax-BasedExecutionPathsofCode.IEEE ple.2023. LLaMA:OpenandEfficientFoundationLanguageModels. CoRR Trans.SoftwareEng.49,8(2023),4196–4212. abs/2302.13971(2023). [64] QinkaiZheng,XiaoXia,XuZou,YuxiaoDong,ShanWang,YufeiXue,Zihan [52] HugoTouvron,LouisMartin,KevinStone,PeterAlbert,AmjadAlmahairi,Yas- Wang,LeiShen,AndiWang,YangLi,etal.2023. CodeGeeX:APre-Trained mineBabaei,NikolayBashlykov,SoumyaBatra,PrajjwalBhargava,ShrutiBhos- ModelforCodeGenerationwithMultilingualEvaluationsonHumanEval-X. ale,DanBikel,LukasBlecher,CristianCanton-Ferrer,MoyaChen,GuillemCucu- CoRRabs/2303.17568(2023). 
rull,DavidEsiobu,JudeFernandes,JeremyFu,WenyinFu,BrianFuller,Cynthia [65] YaqinZhou,ShangqingLiu,JingKaiSiow,XiaoningDu,andYangLiu.2019.De- Gao,VedanujGoswami,NamanGoyal,AnthonyHartshorn,SagharHosseini, vign:EffectiveVulnerabilityIdentificationbyLearningComprehensiveProgram RuiHou,HakanInan,MarcinKardas,ViktorKerkez,MadianKhabsa,Isabel SemanticsviaGraphNeuralNetworks.InAdvancesinNeuralInformationPro- Kloumann,ArtemKorenev,PunitSinghKoura,Marie-AnneLachaux,Thibaut cessingSystems32:AnnualConferenceonNeuralInformationProcessingSystems Lavril,JenyaLee,DianaLiskovich,YinghaiLu,YuningMao,XavierMartinet, 2019,NeurIPS2019.10197–10207. TodorMihaylov,PushkarMishra,IgorMolybog,YixinNie,AndrewPoulton, [66] Deqing Zou, Sujuan Wang, Shouhuai Xu, Zhen Li, and Hai Jin. 2021. JeremyReizenstein,RashiRungta,KalyanSaladi,AlanSchelten,RuanSilva, 𝜇VulDeePecker:ADeepLearning-BasedSystemforMulticlassVulnerability |
2404.15687
Graph Neural Networks for Vulnerability Detection: A Counterfactual Explanation

Zhaoyang Chu*, Yao Wan*† (School of Computer Science and Technology, Huazhong University of Science and Technology, China; chuzhaoyang@hust.edu.cn, wanyao@hust.edu.cn); Qian Li (School of Electrical Engineering, Computing and Mathematical Sciences, Curtin University, Australia; qli@curtin.edu.au); Yang Wu* (Huazhong University of Science and Technology, China; wuyang_emily@hust.edu.cn); Hongyu Zhang (School of Big Data and Software Engineering, Chongqing University, China; hyzhang@cqu.edu.cn); Yulei Sui (School of Computer Science and Engineering, University of New South Wales, Australia; y.sui@unsw.edu.au); Guandong Xu (School of Computer Science, University of Technology Sydney, Australia; guandong.xu@uts.edu.au); Hai Jin* (Huazhong University of Science and Technology, China; hjin@hust.edu.cn)

* Also with National Engineering Research Center for Big Data Technology and System, Services Computing Technology and System Lab, Cluster and Grid Computing Lab, Huazhong University of Science and Technology, Wuhan, 430074, China.
† Yao Wan is the corresponding author.

Abstract
Vulnerability detection is crucial for ensuring the security and reliability of software systems. Recently, Graph Neural Networks (GNNs) have emerged as a prominent code embedding approach for vulnerability detection, owing to their ability to capture the underlying semantic structure of source code. However, GNNs face significant challenges in explainability due to their inherently black-box nature. To this end, several factual reasoning-based explainers have been proposed. These explainers provide explanations for the predictions made by GNNs by analyzing the key features that contribute to the outcomes. We argue that these factual reasoning-based explanations cannot answer critical what-if questions: "What would happen to the GNN's decision if we were to alter the code graph into alternative structures?" Inspired by advancements of counterfactual reasoning in artificial intelligence, we propose CFExplainer, a novel counterfactual explainer for GNN-based vulnerability detection. Unlike factual reasoning-based explainers, CFExplainer seeks the minimal perturbation to the input code graph that leads to a change in the prediction, thereby addressing the what-if questions for vulnerability detection. We term this perturbation a counterfactual explanation, which can pinpoint the root causes of the detected vulnerability and furnish valuable insights for developers to undertake appropriate actions for fixing the vulnerability. Extensive experiments on four GNN-based vulnerability detection models demonstrate the effectiveness of CFExplainer over existing state-of-the-art factual reasoning-based explainers.

CCS Concepts: • Software and its engineering → Software reliability.

Keywords: Vulnerability detection, graph neural networks, model explainability, counterfactual reasoning, what-if analysis.

ACM Reference Format: Zhaoyang Chu, Yao Wan, Qian Li, Yang Wu, Hongyu Zhang, Yulei Sui, Guandong Xu, and Hai Jin. 2024. Graph Neural Networks for Vulnerability Detection: A Counterfactual Explanation. In Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA '24), September 16-20, 2024, Vienna, Austria. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3650212.3652136

1 Introduction
Software vulnerabilities, which expose weaknesses in a program, present a significant risk to data integrity, user privacy, and overall cybersecurity [29, 31, 66]. As of now, the Common Vulnerabilities and Exposures (CVE) [18] has reported tens of thousands of software vulnerabilities annually. Thus, vulnerability detection, which aims to automatically identify potentially vulnerable code, plays a pivotal role in ensuring the security and reliability of software.

Existing efforts on vulnerability detection primarily fall within two main categories: static analysis-based approaches [16, 45, 49] and deep learning-based approaches [20, 30, 31, 66]. Traditional static analysis-based approaches (e.g., SVF [45] and Infer [1]) rely on human experts to manually define specific rules for detecting vulnerabilities. Recently, deep learning-based approaches, exemplified by pioneering works such as VulDeePecker [31] and Devign [66], have made remarkable strides, largely attributed to their capacity to learn comprehensive code representations, thereby enhancing the detection capabilities across diverse vulnerabilities. Among these approaches, Graph Neural Networks (GNNs) [6, 7, 20, 29, 66] have recently attracted substantial attention, owing to their capacity to capture intricate structural information of code, e.g., syntax trees, control flows, and data flows.

Figure 1: Illustration of factual reasoning-based explanation (right middle) and what-if analysis (right bottom), based on the CVE-2016-10190 vulnerability in the http_read_stream function.

Despite the significant progress made by GNNs in vulnerability detection, existing detection systems suffer from explainability issues due to the black-box and complicated nature of deep neural networks. Given a predicted result, developers are often confused by the following question: "Why is my code detected as vulnerable?" From our investigation, existing studies [15, 21, 29] on explainable vulnerability detection are typically factual reasoning-based explainers. The core idea of these explainers is to identify key features in the input data (e.g., sub-graphs in the code graph) that contribute to the final predictions. The selected features are commonly regarded as factual explanations, as they derive from empirical input data and serve as factual evidence for particular outcomes.

Here, we contend that those factual reasoning-based explanations, which merely delineate the features or sub-graphs contributing to the identified vulnerability, are not convincing enough. One reason is that developers remain uncertain about the actual influence of the code segments that constitute the explanation sub-graph on the detection result. In other words, the factual reasoning-based explanations cannot answer "What would happen to the detection system's decision if we were to alter these code segments into alternative structures?" This perspective of what-if is often associated with a human cognitive activity that imagines other possible scenarios for events that have already happened [39]. This motivates us to develop a novel paradigm for analyzing detected vulnerabilities in source code: what-if analysis. In our case, what-if analysis explores hypothetical code instances with alternative structures. This approach aims to identify potential changes that would fix the vulnerability, thereby providing a better explanation of the root causes and factors contributing to its existence.

Why What-If Analysis? A Motivating Example. We use Figure 1 as an example to illustrate the advantage of analyzing what-if in explaining vulnerability detection compared to factual reasoning-based explanations. This example involves a heap-based buffer overflow vulnerability in the FFmpeg project (https://github.com/FFmpeg/FFmpeg), reported by CVE-2016-10190 (https://www.cvedetails.com/cve/CVE-2016-10190), which allows remote Web servers to execute arbitrary code via a negative chunksize in an HTTP response. Specifically, this vulnerability arises from misuse of the strtoll function for parsing chunksize from HTTP responses into int64_t format, without properly validating for negative values (④). Then, a negative chunksize can result in an erroneous calculation in the FFMIN function, producing a negative size for buffer operations (⑤). This negative size potentially triggers out-of-bounds write operations, ultimately leading to a heap buffer overflow (⑦ and ⑧). In this example, the vulnerability detection system parses the code snippet into a semantic code graph (e.g., Abstract Syntax Tree (AST), Control Flow Graph (CFG), Data Flow Graph (DFG), or Program Dependency Graph (PDG)). Here, without loss of generality, we consider the parsed code graph as a DFG for better illustration.
The vulnerability detection systems employ GNNs to model the DFG and yield a prediction outcome that classifies the input code snippet as vulnerable. To explain the prediction of "vulnerable", the factual reasoning-based explanation identifies a compact sub-graph in the code graph (①, ⑤, ⑦, and ⑧) as the key feature that contributes to the detected vulnerability. This allows developers to recognize segments ⑦ and ⑧, which involve buffer write operations (① and ⑤ are not involved), as potentially vulnerable blocks. However, the explanation provided is inadequate for guiding code rectification to alter the detection system's decision, leaving developers to manually check variables such as len, size, and chunksize to identify the actual cause of the vulnerability.

In contrast, to investigate the context of vulnerability occurrences, what-if analysis proactively and iteratively explores diverse hypothetical code structures (e.g., (a), (b), (c), and (d)), by inputting each into the detection system to observe varied prediction outcomes. Taking structure (a) as an example, it is evident that removing the data-flow dependencies ⑤→⑦ and ⑤→⑧, while retaining ⑥→⑦, leads to a prediction of "non-vulnerable". It suggests that calculating size at ⑤ may be a vulnerability source, while the computation of len at ⑥ does not contribute to the vulnerability. Through iterative exploration of the subsequent structures (b), (c), and (d), what-if analysis functions as an "optimization" process, eventually "converging" to a minimal change that alters the detection system's decision, i.e., only removing ④→⑤ in structure (d). The minimal change highlights the data flow ④→⑤ as the root cause, which passes a potentially incorrect chunksize, resulting in a miscalculation of size at ⑤ and in turn triggering the buffer overflow at ⑦ and ⑧. Consequently, developers receive an actionable insight, i.e., directly inspecting the value of chunksize at ④ for potential errors. Overall, we can conclude that the what-if analysis essentially simulates the interactions between developers and the vulnerability detection system during debugging, methodically identifying the root causes of the detected vulnerabilities and guiding developers to effective solutions.

Our Solution and Contributions. Recent advances of counterfactual reasoning in artificial intelligence [3, 26, 27, 33, 47, 48, 54, 60] shed light on the possibility of applying what-if analysis to GNN-based vulnerability detection. A counterfactual instance represents an instance that, while closely similar to the original instance, is classified by the black-box model in a different class. Thus, counterfactual reasoning aims to identify minimal changes in input features that can alter outcomes, thereby addressing the what-if questions. Building upon this motivation, we propose CFExplainer, the first explainer to introduce counterfactual reasoning for enhancing the explainability of GNNs in vulnerability detection. Given a code instance, CFExplainer aims to identify a minimal perturbation to the code graph input that can flip the detection system's prediction from "vulnerable" to "non-vulnerable". CFExplainer formulates the search problem for counterfactual perturbations as an edge mask learning task, which learns a differentiable edge mask to represent the perturbation. Based on the differentiable edge mask, CFExplainer builds a counterfactual reasoning framework to generate insightful counterfactual explanations for the detection results. Extensive experiments on four representative GNNs for vulnerability detection (i.e., GCN, GGNN, GIN, and GraphConv) validate the effectiveness of our proposed CFExplainer, both in terms of vulnerability-oriented and model-oriented metrics.

The key contributions of this paper are as follows.
• To the best of our knowledge, we are the first to discuss the what-if question and introduce the perspective of counterfactual reasoning for GNN-based vulnerability detection.
• We propose a counterfactual reasoning-based explainer, named CFExplainer, to generate explanations for the decisions made by GNN-based vulnerability detection systems, which can help developers discover the vulnerability causes.
• We conduct extensive experiments on four GNN-based vulnerability detection systems to validate the effectiveness of CFExplainer. Our results demonstrate that CFExplainer outperforms the state-of-the-art factual reasoning-based explainers.

2 Background
In this section, we begin by introducing essential preliminary knowledge necessary for a better understanding of our model. Subsequently, we present a mathematical formulation of the problem under study in this paper.

2.1 GNN-Based Vulnerability Detection Model
Suppose that we have a set of $N$ code snippets $\mathcal{D} = \{C_1, C_2, \ldots, C_N\}$, and each code snippet $C_k$ is associated with a ground-truth label $Y_k \in \{0, 1\}$, which categorizes the code snippet as either non-vulnerable (0) or vulnerable (1). The goal of vulnerability detection is to learn a mapping function $f(\cdot)$ that assigns a code snippet to either a non-vulnerable or vulnerable label.

Current deep learning-based approaches follow a fundamental pipeline wherein the semantics of the source code are embedded into a hidden vector, which is then fed into a classifier. Recently, GNNs have been designed to capture the semantic structures of source code, e.g., ASTs, CFGs, DFGs, and PDGs. Given a code graph $G_k$ of $C_k$, a GNN typically follows a two-step message-passing scheme (i.e., aggregate and update) at each layer $l$ to learn node representations for $G_k$.
Firstly, the GNN generates an intermediate representation $m_i^l$ for each node $i$ in $G_k$ by aggregating information from its neighbor nodes, denoted by $\mathcal{N}(i)$, using an aggregation function:

    $m_i^l = \mathrm{Aggregation}(\{ h_j^{l-1} \mid j \in \mathcal{N}(i) \})$,    (1)

where $h_j^{l-1}$ denotes the representation of node $j$ in the previous layer. Subsequently, the GNN updates the intermediate representation $m_i^l$ for each node $i$ via an update function:

    $h_i^l = \mathrm{Update}(m_i^l, h_i^{l-1})$.    (2)

For an $L$-layer GNN, the final representation of the node $i$ is $h_i^L$. To obtain a graph representation $h_k$ for the code graph $G_k$, a readout function (e.g., graph mean pooling) is applied to integrate all the node representations of $G_k$:

    $h_k = \mathrm{Readout}(\{ h_i^L \})$.    (3)

Finally, the graph representation $h_k$ is fed into a classifier (e.g., a Multi-Layer Perceptron (MLP)) followed by a Softmax function to calculate the probability distribution of the non-vulnerable and vulnerable classes, as follows:

    $P(c \mid G_k) = \mathrm{Softmax}(\mathrm{MLP}(h_k))$,    (4)

where $P(c \mid G_k)$ is the predicted probability of the code snippet $C_k$ belonging to each class in $\{0, 1\}$, i.e., whether $C_k$ is vulnerable or not. The GNN model can be optimized by minimizing the binary cross-entropy loss between the predicted probabilities and the ground-truth labels, allowing it to learn from both non-vulnerable and vulnerable code instances in the training set.

Once the model is trained, in the testing phase, when presented with a code snippet $C_k$ accompanied by its code graph $G_k$, the trained GNN model $f(\cdot)$ is employed to compute the predicted probability $P(c \mid G_k)$ for each class. The resulting estimated label $\hat{Y}_k$ for $C_k$ is determined by selecting the class with the highest probability:

    $\hat{Y}_k = \operatorname{argmax}_{c \in \{0,1\}} P(c \mid G_k)$.    (5)
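To make the pipeline of Eqs. (1)-(5) concrete, the following is a minimal sketch of a two-layer message-passing GNN with mean-pooling readout and an MLP classifier in plain PyTorch; the sum aggregation, the layer sizes, and the class name VulnDetectionGNN are our own illustrative assumptions, not the paper's released implementation.

    import torch
    import torch.nn as nn

    class VulnDetectionGNN(nn.Module):
        """A minimal sketch of Eqs. (1)-(5): message passing, readout, classification."""
        def __init__(self, in_dim: int, hid_dim: int = 256, num_layers: int = 2):
            super().__init__()
            # One Update(.) transformation per layer; Aggregation(.) is a neighbor sum here.
            self.updates = nn.ModuleList(
                nn.Linear(in_dim if l == 0 else hid_dim, hid_dim) for l in range(num_layers)
            )
            self.classifier = nn.Linear(hid_dim, 2)  # head over {non-vulnerable, vulnerable}

        def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
            # adj: (n, n) adjacency matrix A_k; x: (n, d) node features X_k.
            h = x
            for update in self.updates:
                m = adj @ h                    # Eq. (1): aggregate neighbor representations
                h = torch.relu(update(m + h))  # Eq. (2): update with the previous representation
            h_k = h.mean(dim=0)                # Eq. (3): graph mean-pooling readout
            return torch.softmax(self.classifier(h_k), dim=-1)  # Eq. (4)

    # Usage: the predicted label of Eq. (5) is the argmax over the class probabilities.
    n = 4
    model = VulnDetectionGNN(in_dim=768)
    A_k = (torch.rand(n, n) < 0.5).float()  # toy directed code graph
    x = torch.randn(n, 768)                 # toy node embeddings
    y_hat = model(A_k, x).argmax().item()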
Investigated GNNs for Vulnerability Detection. In this study, we investigate four widely used GNNs for vulnerability detection. These GNNs employ various implementations of the $\mathrm{Aggregation}(\cdot)$ and $\mathrm{Update}(\cdot)$ functions to capture structural code information for vulnerability detection.
⊲ Graph Convolutional Network (GCN) [25] generalizes the idea of convolutional neural networks to graphs. It aggregates neighbor node representations by summing them and utilizes an MLP to update the aggregated node representations.
⊲ Gated Graph Neural Network (GGNN) [28] utilizes a Gated Recurrent Unit [8] to control information flow through edges when updating the aggregated node representations.
⊲ Graph Isomorphism Network (GIN) [56] introduces the concept of graph isomorphism to ensure permutation invariance. It employs a graph isomorphism operator to update the aggregated node representations.
⊲ GraphConv [36] incorporates higher-order graph structures at multiple scales to enhance the GNN's expressive power.
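For reference, off-the-shelf implementations of all four cores are available, for example in PyTorch Geometric; the sketch below shows how they might be instantiated. The channel sizes are illustrative assumptions, and each PyG layer consumes a node-feature matrix together with an edge-index tensor rather than a dense adjacency matrix.

    import torch.nn as nn
    from torch_geometric.nn import GCNConv, GatedGraphConv, GINConv, GraphConv

    in_dim, hid_dim = 768, 256  # illustrative sizes, matching Section 4.3.1's hidden dim

    # Four interchangeable Aggregation/Update implementations (Section 2.1):
    gcn = GCNConv(in_dim, hid_dim)                            # summed neighbors + linear update
    ggnn = GatedGraphConv(out_channels=in_dim, num_layers=2)  # GRU-gated update
    gin = GINConv(nn.Sequential(nn.Linear(in_dim, hid_dim),   # graph isomorphism operator
                                nn.ReLU(), nn.Linear(hid_dim, hid_dim)))
    graphconv = GraphConv(in_dim, hid_dim)                    # higher-order neighborhoods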
2.2 Model Explainability: The Problem
Suppose that we have a trained GNN model $f(\cdot)$ and its prediction $\hat{Y}_k$ on a target code snippet $C_k$ represented by a code graph $G_k$. In this paper, we explore the explainability of GNNs within a black-box setting, recognized as model-agnostic, where access to model parameters, training data, and gradients of each layer is unavailable. In the black-box setting, we constrain the explainer to derive the prediction probability $P(c \mid G_k)$ exclusively by querying the model $f(\cdot)$ with the code graph $G_k$ as the input.

Under the aforementioned scenario, the factual reasoning-based explainers provide explainability by identifying key features that contribute to the model's prediction. For example, Li et al. [29] propose to seek a compact sub-graph $G_k^S$ that maintains the same prediction result as using the whole code graph $G_k$. They optimize the explainer by maximizing the probability of predicting the original estimated label $\hat{Y}_k$ when the input graph is limited to the sub-graph $G_k^S$, defined as:

    $\max_{G_k^S} P(\hat{Y}_k \mid G_k^S)$.    (6)

On the contrary, counterfactual reasoning provides explainability by generating counterfactual instances to address what-if questions. For the given code graph $G_k$, we generate its counterfactual instance by introducing a subtle perturbation to it, resulting in a new graph $\tilde{G}_k$. The perturbed graph $\tilde{G}_k$ differs minimally from the original $G_k$ but is classified in a different class, i.e., $f(\tilde{G}_k) \neq f(G_k)$. As a result, counterfactual reasoning aims to identify a minimal perturbation to $G_k$ that alters the decision of the detection system. We mathematically formulate the counterfactual reasoning problem as follows:

    $\min_{\tilde{G}_k} d(\tilde{G}_k, G_k)$, s.t. $\operatorname{argmax}_{c \in \{0,1\}} P(c \mid \tilde{G}_k) \neq \hat{Y}_k$,    (7)

where $d(\cdot, \cdot)$ represents a distance metric that quantifies the differences between $\tilde{G}_k$ and $G_k$, e.g., the number of edges removed by the perturbation.

3 Proposed CFExplainer
In this section, we propose a counterfactual reasoning-based explainer, named CFExplainer, for GNN-based vulnerability detection. CFExplainer comprises several key components: (1) Code Graph Perturbation. CFExplainer employs a differentiable edge mask to represent the perturbation to the code graph, which transforms the discrete search task for counterfactual perturbations into a continuous learning task for edge masks. (2) Counterfactual Reasoning Framework. Based on the differentiable edge mask, CFExplainer constructs a counterfactual reasoning framework and designs a differentiable loss function to make this framework optimizable, as illustrated in Figure 2. (3) Counterfactual Explanation Generation. After optimization of the counterfactual reasoning framework, CFExplainer generates counterfactual explanations for the detection system's predictions. We will elaborate on each component of CFExplainer in the following.

Figure 2: An overview of our proposed counterfactual reasoning framework.

3.1 Code Graph Perturbation
In our scenario, vulnerabilities often arise from incorrect control flows and data flows. Thus, for the given code graph $G_k$, we focus on perturbing its graph structures (i.e., edges), represented by the adjacency matrix $A_k \in \{0,1\}^{n \times n}$, rather than perturbing the node features $X_k \in \mathbb{R}^{n \times d}$, where $n$ is the number of nodes in $G_k$ and $d$ represents the feature dimension. Note that the code graph $G_k$ is a directed graph, hence $A_k$ is unsymmetrical.

One straightforward approach for generating counterfactual perturbations is through greedy search, which iteratively edits the code graph by removing or re-adding edges. However, its practicality is limited by the vast size of the search space, leading to inefficiency [3]. Although heuristic strategies can potentially explore the search space more efficiently, identifying the optimal counterfactual instance with precision is challenging. Specifically, there is no guarantee that the counterfactual perturbation identified is the minimal one necessary.

Edge Mask-Based Perturbation. To overcome these limitations, inspired by prior work [29, 33, 58], we adopt the edge masking technique. This technique treats the search for counterfactual perturbations as an edge mask learning task. The idea is that a perturbed graph $\tilde{G}_k$ can be derived by masking out edges from the original code graph $G_k$, as follows:

    $\tilde{A}_k = A_k \odot M_k$,    (8)

where $\tilde{A}_k$ is the perturbed version of $A_k$, $M_k \in \{0,1\}^{n \times n}$ is a binary edge mask matrix, and $\odot$ denotes element-wise multiplication. If an element $M_{k,ij} = 0$, it indicates the edge $(i,j)$ is masked out in $A_k$. As directly learning the binary edge mask matrix $M_k$ is not differentiable, we relax $M_k$ to continuous real values, i.e., $\hat{M}_k \in \mathbb{R}^{n \times n}$. Then, as illustrated in Figure 2(b), the perturbed adjacency matrix is generated by:

    $\tilde{A}_k = A_k \odot \sigma(\hat{M}_k)$,    (9)

where $\sigma(\cdot)$ represents the sigmoid function that maps the edge mask into the range $[0,1]$, allowing a smooth transition between the presence and absence of edges. As a result, starting from a randomly initialized edge mask matrix, $\hat{M}_k$ can be optimized via gradient descent. This approach enables a quicker and more precise determination of the minimal counterfactual perturbation compared to search-based strategies.
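A minimal PyTorch sketch of the relaxation in Eqs. (8)-(9) follows; the tensor names mirror the paper's notation, while the toy graph size and the random initialization are our own assumptions.

    import torch

    n = 6                                     # number of nodes in the code graph G_k
    A_k = (torch.rand(n, n) < 0.5).float()    # toy directed adjacency matrix A_k

    # Relaxed edge mask M_hat, randomly initialized and learned by gradient descent.
    M_hat = torch.randn(n, n, requires_grad=True)

    # Eq. (9): the element-wise product with the sigmoid-squashed mask keeps the
    # perturbed adjacency A_tilde differentiable with respect to M_hat.
    A_tilde = A_k * torch.sigmoid(M_hat)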
3.2 Counterfactual Reasoning Framework
We build a counterfactual reasoning framework to generate explanations for the predictions made by the GNN-based vulnerability detection system. The core idea of our proposed framework is to identify a minimal perturbation to the code graph that flips the detection system's prediction. This is achieved by addressing a counterfactual optimization problem, which is formulated in the following.

Suppose that we have a trained GNN model (whose weight parameters $W$ are fixed and inaccessible) and the code graph $G_k$ for the target code snippet $C_k$. We first apply the edge mask $\hat{M}_k$ on the code graph $G_k$ to generate a perturbed graph, i.e., $\tilde{G}_k$. Subsequently, as shown in Figure 2, we feed the original and perturbed code graphs into the GNN model to produce respective estimated labels:

    $\hat{Y}_k = \mathrm{GNN}(A_k, X_k \mid W)$,  $\tilde{Y}_k = \mathrm{GNN}(\tilde{A}_k, X_k \mid W)$,    (10)

where $X_k$ denotes the features of the nodes in $G_k$. To identify a minimal counterfactual perturbation, we learn the edge mask $\hat{M}_k$ based on the optimization objective of the counterfactual reasoning problem. Specifically, we reformulate Eq. (7) as follows:

    $\min_{\hat{M}_k} d(\tilde{A}_k, A_k)$, s.t. $\tilde{Y}_k \neq \hat{Y}_k$.    (11)

Here, the constraint part aims to ensure that the new prediction $\tilde{Y}_k$ is different from the original prediction $\hat{Y}_k$, while the objective part aims to encourage that the perturbed adjacency matrix $\tilde{A}_k$ is as close as possible to the original adjacency matrix $A_k$.

Direct optimization of Eq. (11) is challenging since both its objective and constraint parts are non-differentiable. To address this, we design two differentiable loss function items to make the two parts optimizable, respectively.

Prediction Loss Item. To satisfy the constraint condition in Eq. (11), we design a prediction loss item $\mathcal{L}_{pred}$ to encourage the detection system towards producing a different prediction when the original code graph $G_k$ is perturbed into $\tilde{G}_k$, as follows:

    $\mathcal{L}_{pred} = P(\hat{Y}_k \mid \tilde{A}_k, X_k)$.    (12)

This loss item aims to minimize the likelihood that the perturbed graph $\tilde{G}_k$ will maintain the original prediction $\hat{Y}_k$, thereby maximizing the chances of achieving an altered prediction outcome.

Distance Loss Item. To address the objective part in Eq. (11), we utilize binary cross entropy as a differentiable distance function to quantify the divergence between the original and perturbed adjacency matrices, which is formulated as follows:

    $\mathcal{L}_{dist} = \mathrm{BinaryCrossEntropy}(\tilde{A}_k, A_k)$.    (13)

This distance function is chosen for its efficacy in measuring the difference between two probability distributions. In our case, we consider the presence and absence of edges in the graph as binary classes, thus conceptualizing $\tilde{A}_k$ as the estimated distribution of edges and $A_k$ as the actual distribution. During optimization, $\mathcal{L}_{dist}$ ensures that $\tilde{A}_k$ remains as close as possible to $A_k$, thus determining a minimal counterfactual perturbation to the code graph $G_k$.

Overall Loss Function. We integrate the above two loss items into an overall loss function to optimize them collaboratively:

    $\mathcal{L} = \alpha \cdot \mathcal{L}_{pred} + (1 - \alpha) \cdot \mathcal{L}_{dist}$,    (14)

where $\alpha$ is a hyper-parameter that regulates the trade-off between the prediction loss item and the distance loss item. A higher $\alpha$ prioritizes changing the prediction outcome, potentially at the expense of a larger perturbation, whereas a lower $\alpha$ focuses more on minimizing the perturbation. Based on the overall loss function, we optimize the counterfactual reasoning framework using the gradient descent algorithm and the Adam optimizer [24]. Note that our framework operates in the black-box setting, indicating that the process of counterfactual reasoning focuses solely on updating the edge mask $\hat{M}_k$ to find the optimal perturbation while holding the underlying GNN model's parameters fixed.
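The optimization of Eqs. (10)-(14) can be sketched as a short training loop. The sketch below continues the toy model, A_k, x, and n from the earlier sketches, treats the detector as frozen, and borrows the epoch count and learning rate reported later in Section 4.3.2; the choice alpha = 0.8 is purely illustrative.

    import torch
    import torch.nn.functional as F

    for p in model.parameters():           # the detector's weights W stay fixed
        p.requires_grad_(False)

    alpha = 0.8                            # trade-off hyper-parameter of Eq. (14)
    M_hat = torch.randn(n, n, requires_grad=True)
    optimizer = torch.optim.Adam([M_hat], lr=0.05)
    y_hat = model(A_k, x).argmax().item()  # original prediction, Eq. (10)

    for epoch in range(800):
        optimizer.zero_grad()
        A_tilde = A_k * torch.sigmoid(M_hat)                 # Eq. (9)
        probs = model(A_tilde, x)                            # perturbed prediction, Eq. (10)
        loss_pred = probs[y_hat]                             # Eq. (12)
        loss_dist = F.binary_cross_entropy(A_tilde, A_k)     # Eq. (13)
        loss = alpha * loss_pred + (1 - alpha) * loss_dist   # Eq. (14)
        loss.backward()
        optimizer.step()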
3.3 Counterfactual Explanation Generation
Utilizing the optimized counterfactual reasoning framework, we generate counterfactual explanations to explain the predictions made by the vulnerability detection systems.

Generating the Optimal Counterfactual Explanation. After optimization, we obtain the optimal edge mask $\hat{M}_k^*$. In this mask matrix, higher values indicate their corresponding edges should be preserved, while lower values indicate their corresponding edges should be removed to reverse the detection system's decision. To form the final explanation, we employ a hyper-parameter $K_M$ to control the number of edges to be perturbed, i.e., taking the $K_M$ edges with the lowest mask values. Then, we obtain the optimal counterfactual perturbed graph $\tilde{G}_k^*$ by removing the $K_M$ selected edges and derive a sub-graph:

    $G_k^{S*} = G_k - \tilde{G}_k^*$.    (15)

As a result, the optimal counterfactual explanation takes the following form: the derived sub-graph $G_k^{S*}$ is the most critical factor in the detection result, such that if it were removed, the code would not be predicted as vulnerable.

Deriving Diverse Counterfactual Explanations. In real-world scenarios, developers may need diverse counterfactual explanations to explore and understand the context of the detected vulnerability. To achieve this, we build a narrowed search space based on the sub-graph $G_k^{S*}$. Within this space, we employ exhaustive search to methodically explore and filter various edge combinations in $G_k^{S*}$ whose removal would alter the detection system's prediction. This process generates a set of diverse counterfactual explanations, each offering insights into the detected vulnerability from different perspectives. Moreover, such diversity provides developers with multiple actionable options to address the detected vulnerability.
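Extracting the explanation sub-graph from the learned mask reduces to selecting the $K_M$ existing edges with the lowest mask values. A minimal sketch, continuing the toy tensors from the previous sketches, follows.

    import torch

    K_M = 8  # number of edges to perturb (the default used in Section 4.3.2)

    # Consider only edges that actually exist in A_k.
    edge_idx = A_k.nonzero(as_tuple=False)                  # (num_edges, 2) index pairs
    mask_vals = torch.sigmoid(M_hat.detach())[edge_idx[:, 0], edge_idx[:, 1]]

    # Eq. (15): the K_M lowest-valued edges form the counterfactual sub-graph G_k^{S*},
    # i.e., the edges whose removal is expected to flip the detector's prediction.
    lowest = mask_vals.argsort()[:K_M]
    explanation_edges = edge_idx[lowest].tolist()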
4 Experimental Setup
In this section, we begin by presenting the dataset, the baseline explainers for comparison, and the implementation details. Subsequently, we introduce two types of evaluation metrics to quantitatively evaluate the effectiveness of our proposed CFExplainer.

4.1 Dataset
Aligning with previous studies [14, 20, 21, 29], we conduct our experiments on the widely used vulnerability dataset Big-Vul [13]. Linked to the public CVE database [18], Big-Vul comprises extensive source code vulnerabilities extracted from 348 open-source C/C++ GitHub projects, spanning from 2002 to 2019. It encompasses a total of 188,636 C/C++ functions, including 10,900 vulnerable ones, covering 91 various vulnerability types. Unlike other existing vulnerability datasets (i.e., Devign [66] and Reveal [6]), which only provide vulnerability labels at the function level, Big-Vul offers more detailed, statement-level code changes derived from original git commits. These code changes for fixing vulnerabilities are crucial in our study. They enable us to build ground-truth labels for quantitatively evaluating the quality of the generated explanations (see Section 4.4).

To enhance the dataset's quality, we follow the cleaning procedure proposed by Hin et al. [20]. Specifically, we remove comment lines from the code and ignore purely cosmetic code changes (e.g., changes to whitespace). We also exclude improperly truncated or unparsable code snippets. Additionally, following the practices of previous research [14, 20], we perform random undersampling of non-vulnerable code snippets to obtain a balanced dataset. In this work, we employ an open-source code analysis tool, Joern [2, 57], to parse each code snippet into a PDG, which serves as the input for the GNN-based detection model. The PDG is a commonly used graph representation of code in vulnerability detection research [14, 20, 21, 29], which takes code statements as nodes and control-flow or data-flow dependencies as edges. Finally, the dataset is randomly divided into training, validation, and testing sets with a ratio of 8:1:1. Note that the explainers only generate explanations for the detection model's predictions on the test set.

4.2 Baselines
To provide a comparative analysis, we investigate six prominent factual reasoning-based GNN explainers as our baselines:
• GNNExplainer [33] seeks a crucial sub-graph by maximizing the mutual information between the original GNN's prediction and the sub-graph distribution.
• PGExplainer [34] learns an edge mask predictor based on the mutual information loss used in [33]. It accesses the training set to train the edge mask predictor.
• SubgraphX [63] employs the Monte Carlo tree search algorithm [44] to efficiently identify important sub-graphs with a node pruning strategy.
• GNN-LRP [40] decomposes the GNN's prediction scores into the importance of various graph walks using a higher-order Taylor decomposition and returns a set of the most important graph walks as an explanation.
• DeepLIFT [43] is another decomposition-based explainer, originally designed for image classification. A previous work [62] extends it to explain GNN models, denoted as DeepLIFT-Graph.
• GradCam [41] is a popular gradient-based explainer for image classification. It backpropagates the prediction scores to compute the gradients, which are then used to approximate the input importance. The previous work [62] adapts it for explaining GNN models, denoted as GradCam-Graph.

For the hyper-parameters of these baseline explainers, we adopt the implementation provided by previous research [21, 62]. Note that PGExplainer, GNN-LRP, DeepLIFT-Graph, and GradCam-Graph do not operate in the black-box setting, as they require access to model parameters, training data, and gradient information of GNNs.
4.3 Implementation Details
Our implementation comprises two main components: training GNN-based vulnerability detection models and generating explanations for the detection models' predictions.

4.3.1 GNN-Based Vulnerability Detection. In our experiments, we reimplement four vulnerability detection models employing different GNN cores (i.e., GCN, GGNN, GIN, and GraphConv). Each detection model adopts a two-layer GNN architecture with a hidden dimension of 256, followed by graph mean pooling to derive graph-level representations. The graph-level representations are then input to a two-layer MLP classifier for vulnerability detection. In the model, we utilize GraphCodeBERT's token embedding layer [19] to initialize node features for the input code graph. ReLU activation functions are used after each layer, except for the final one, to introduce non-linearity. Based on the binary cross-entropy loss, we train each detection model using the Adam optimizer [24] for 50 epochs, with a learning rate of 0.005 and a batch size of 64.

As shown in Table 1, following prior research [7, 29, 31], we evaluate the performance of the reimplemented detection models using Accuracy, Precision, Recall, and F1 score. The results show that all four detection models achieve an Accuracy over 70%, a Precision over 55%, a Recall over 40%, and an F1 score over 50%. Among them, GCN excels in Precision, while GIN leads in Accuracy, Recall, and F1 score. Overall, these models exhibit similar performance, with high Precision and relatively low Recall.

Table 1: The performance of the reimplemented GNN-based vulnerability detection models.

    GNN Core    Acc (%)   Pr (%)   Re (%)   F1 (%)
    GCN         72.05     60.39    44.81    51.44
    GGNN        71.89     59.43    47.08    52.54
    GIN         72.16     58.71    53.08    55.75
    GraphConv   70.98     56.61    52.11    54.27

4.3.2 Explanation Generation. For the implementation of our proposed CFExplainer, we train it using Adam to minimize the loss function described in Section 3.2 for 800 epochs at a learning rate of 0.05. Note that, for each code snippet sample, CFExplainer is trained individually to explain the detection model's prediction. We set the hyper-parameter $K_M$ to 8 by default and use the same $K_M$ value to control the size of the explanation sub-graphs generated by the factual reasoning-based explainers for a fair comparison. In addition, in Section 5.3, we conduct a parameter analysis on the hyper-parameter $\alpha$, exploring values from 0.1 to 0.9 to understand its influence on CFExplainer's performance. It should be noted that the explainers aim to provide explanations by identifying the critical factors that contribute to the detected vulnerability. Thus, it is meaningless to explain the non-vulnerable code snippets and unfair to explain the code snippets that are incorrectly detected as vulnerable. As a result, we only consider explaining vulnerable code snippets that are correctly detected.

4.4 Evaluating the Explainability
In this section, we introduce two types of metrics to quantitatively evaluate the quality of the generated explanations.

4.4.1 Vulnerability-Oriented Evaluation Metric. Evaluating counterfactual explanations in code is challenging due to the difficulty in obtaining standardized ground truth. Previous research [10] has relied on manual labeling for evaluation, which is costly, not easily scalable, and lacks standardization. Fortunately, the Big-Vul dataset mitigates this issue by providing detailed statement-level fixes within git commits, which accurately reflect the changes addressing vulnerabilities. We utilize these commits to construct standardized ground-truth labels for our generated counterfactual explanations.

In the vulnerability-oriented evaluation, following methodologies established in vulnerability detection research [13, 20, 21, 29], we adopt the statements that are deleted or modified in the commit (marked with "-" signs) as ground-truth labels. Specifically, we extract all the statements from the vulnerable version of the code to build a binary ground-truth vector, denoted as $S = [s_1, s_2, \ldots, s_r]$, where $s_i = 1$ indicates the $i$-th statement is deleted or modified in the fixed version, and $s_i = 0$ otherwise. Correspondingly, we construct a binary explanation vector $\Delta = [\delta_1, \delta_2, \ldots, \delta_r]$, where non-zero values in $\Delta$ represent the corresponding statements included in the generated explanation sub-graph. The comparison of $\Delta$ with the ground-truth vector $S$ allows for a quantitative evaluation of how accurately the generated explanations identify critical statements associated with the vulnerability.

Consider a given set of $M$ vulnerable code snippets denoted as $\{C_1, C_2, \ldots, C_M\}$ for evaluation. For each code snippet, an explanation is deemed correct if it encompasses the deleted or altered statements. Consequently, we compute the Accuracy score by determining the percentage of accurate explanations among all generated explanations. Moreover, we calculate the Precision and Recall scores for each code snippet by comparing the explanation vector $\Delta$ and the ground-truth vector $S$:

    $\mathrm{Precision} = \frac{\sum_{i=1}^{r} s_i \cdot \delta_i}{\sum_{i=1}^{r} \delta_i}$,  $\mathrm{Recall} = \frac{\sum_{i=1}^{r} s_i \cdot \delta_i}{\sum_{i=1}^{r} s_i}$.    (16)

In our scenario, Precision measures the proportion of statements in the explanation that are relevant and accurately pertain to the vulnerability. On the other hand, Recall measures the proportion of ground-truth statements that are accurately included in the explanation. Additionally, we compute $F_1$ as the harmonic mean of the two scores to evaluate the overall performance. The formula for $F_1$ is given as follows:

    $F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$.    (17)

Finally, we calculate the average scores of Precision, Recall, and $F_1$ across all code snippets.
4.4.2 Model-Oriented Evaluation Metric. The vulnerability-oriented evaluation metrics primarily focus on assessing the consistency between the generated explanations and the root causes of the detected vulnerabilities. However, these metrics cannot quantify to what extent the generated explanations really influence the detection system's decisions. Thus, inspired by previous research [47, 48], our model-oriented evaluation borrows insights from causal inference theory and introduces the Probability of Necessity (PN) [17] to fill this gap. Intuitively, for an explanation $E$ that is generated to explain a prediction $P$, if $P$ would not happen when $E$ does not happen, we say $E$ is a necessary explanation for supporting the prediction $P$. The core idea of PN is the following: if we imagine a counterfactual world where the explanation sub-graph $G_k^S$ did not exist in the original code graph $G_k$, would the corresponding code snippet $C_k$ still be detected as vulnerable? This is critical for understanding the causal impact of the explanations on the prediction outcomes. Following this idea, we define PN as the proportion of the generated sub-graph explanations that are necessary to influence the detection system's predictions, as follows:

    $\mathrm{PN} = \frac{1}{M} \sum_{k=1}^{M} pn_k$, where $pn_k = 1$ if $\hat{Y}_k' \neq \hat{Y}_k$, and $pn_k = 0$ otherwise,    (18)

where $\hat{Y}_k' = \operatorname{argmax}_{c \in \{0,1\}} P(c \mid G_k - G_k^S)$ represents the prediction result for the code snippet $C_k$ when the explanation sub-graph $G_k^S$ is removed from the original code graph $G_k$. If removing $G_k^S$ changes the prediction $\hat{Y}_k$, the explanation is considered necessary.
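A sketch of the PN computation in Eq. (18) follows; the function name, the tuple layout of samples, and the predict callback are our own illustrative assumptions about how the pieces fit together.

    import torch

    def probability_of_necessity(samples, predict):
        """Eq. (18): the fraction of explanations whose removal flips the prediction.

        samples: list of (A_k, x_k, expl_edges, y_hat) tuples, where expl_edges is
                 the list of (i, j) edges in the explanation sub-graph G_k^S and
                 y_hat is the original predicted label.
        predict: black-box detector query, e.g. lambda a, x: model(a, x).argmax().item()
        """
        flips = 0
        for A_k, x_k, expl_edges, y_hat in samples:
            A_removed = A_k.clone()
            for i, j in expl_edges:           # remove G_k^S from G_k
                A_removed[i, j] = 0.0
            flips += int(predict(A_removed, x_k) != y_hat)
        return flips / len(samples)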
5 Experimental Results
To evaluate the performance of our counterfactual reasoning approach, we address the following Research Questions (RQs):
• RQ1: Vulnerability-Oriented Evaluation. How well does CFExplainer perform in comparison with state-of-the-art factual reasoning-based explainers in identifying the root causes of the detected vulnerabilities?
• RQ2: Model-Oriented Evaluation. How well does CFExplainer perform in comparison with state-of-the-art factual reasoning-based explainers in generating explanations that really influence the detection model's decision?
• RQ3: Influence of Hyper-parameter α. How do different settings of the trade-off hyper-parameter α impact the performance of CFExplainer?

5.1 RQ1: Vulnerability-Oriented Evaluation
One of the key objectives of explainers in our context is to accurately identify the root causes of detected vulnerabilities. The effectiveness of our proposed CFExplainer, in comparison to factual reasoning-based explainers, is quantitatively showcased in Table 2, which reports the vulnerability-oriented evaluation results on four GNN-based detection models: GCN, GGNN, GIN, and GraphConv. These results reveal that CFExplainer outperforms the baseline explainers in most scenarios, demonstrating the effectiveness of our counterfactual reasoning approach. Across the four GNN-based detection models, CFExplainer achieves average improvements of 24.32%, 12.03%, 28.22%, and 14.29% in Accuracy; 7.93%, 4.43%, 14.36%, and 2.72% in Precision; 32.28%, 12.47%, 33.51%, and 18.19% in Recall; and 17.10%, 7.18%, 19.71%, and 8.22% in F1 score over the factual reasoning-based explainers.

Table 2: Comparison of the vulnerability-oriented evaluation results of explainers (Acc/Pr/Re/F1, in %).

    Explainer        GCN                      GGNN                     GIN                      GraphConv
    GNNExplainer     59.06/13.68/41.26/17.29  61.25/13.94/45.54/18.76  53.37/12.14/34.42/15.09  53.12/12.81/37.54/16.31
    PGExplainer      42.39/11.70/26.41/13.71  53.98/13.78/38.12/17.31  44.79/11.20/30.08/13.93  46.25/12.42/31.98/15.17
    SubGraphX        43.12/12.44/27.29/13.77  41.52/12.53/27.60/14.48  36.81/11.29/23.14/12.59  42.50/12.64/26.60/14.09
    GNN-LRP          56.00/13.31/38.52/16.49  59.86/13.32/44.19/17.83  54.94/14.20/39.54/17.54  48.74/12.51/34.85/15.52
    DeepLIFT-Graph   50.00/12.88/33.14/15.61  55.36/14.39/39.83/17.84  47.24/12.89/32.84/15.58  49.69/12.48/34.85/15.43
    GradCam-Graph    44.93/12.93/27.69/14.54  56.06/13.22/41.04/17.23  44.17/13.62/30.03/15.64  41.88/11.73/28.91/13.96
    CFExplainer      61.23/13.84/42.84/17.84  61.25/14.13/44.30/18.48  60.12/14.36/42.29/18.03  53.75/12.77/38.36/16.32

Among all baseline explainers, the perturbation-based GNNExplainer exhibits relatively good performance by directly searching for a crucial sub-graph that contributes significantly to the detection model's predictions. Besides, the two decomposition-based methods (i.e., GNN-LRP and DeepLIFT-Graph) directly decompose the detection model's predictions into the importance of edges in the code graph and select the most important edges as an explanation, resulting in slightly inferior performance compared to GNNExplainer. However, the other two perturbation-based methods (i.e., PGExplainer and SubGraphX) and the gradient-based GradCam-Graph method perform relatively poorly. This is because PGExplainer's mask predictor may suffer from the distribution shift between the training and test sets, while SubGraphX's node pruning strategy may not be compatible with our scenario of perturbing edges in the code graph. GradCam-Graph utilizes gradient values to measure the edge importance, leading to an explanation sub-graph that correlates with the detection model's hidden information rather than the actual vulnerabilities. In contrast to these factual reasoning-based explainers, CFExplainer aims to address what-if questions by seeking a minimal perturbation to the code graph that alters the detection model's prediction from "vulnerable" to "non-vulnerable". Through this exploration, CFExplainer delves deeply into the context where the vulnerability occurs, revealing causal relationships between code structures and detection outcomes, thereby discovering the root causes of the detected vulnerabilities.

Answer to RQ1: CFExplainer exhibits superior effectiveness in vulnerability-oriented evaluation, outperforming state-of-the-art factual reasoning-based explainers.

5.2 RQ2: Model-Oriented Evaluation
Compared to vulnerability-oriented evaluation, model-oriented evaluation focuses on assessing the necessity of the generated explanations for supporting the detection model's predictions. As illustrated in Figure 3, CFExplainer demonstrates superior performance over state-of-the-art factual reasoning-based explainers across the four GNN-based detection models. Notably, the PN curve for CFExplainer consistently encompasses those of the baseline explainers under various $K_M$ settings, visually indicating its effectiveness. Unlike factual reasoning-based explainers that identify crucial sub-graphs but fall short in determining their actual influence on detection outcomes, CFExplainer targets a minimal change to the code graph that alters the prediction. This approach ensures the identification of edges that are truly necessary for the prediction outcome. This distinction is crucial in understanding CFExplainer's ability to provide more accurate and essential explanations. Moreover, we can observe that as the value of $K_M$ increases, the PN scores of all explainers generally show improvement. This improvement can be attributed to the fact that with a higher $K_M$ value, more crucial edges necessary for supporting the detection result are identified and included in the generated explanation sub-graph.

Figure 3: Comparison of the model-oriented evaluation results of explainers (PN (%) versus $K_M$; one panel each for GCN, GGNN, GIN, and GraphConv).

Answer to RQ2: CFExplainer demonstrates superior performance in model-oriented evaluation, outperforming state-of-the-art factual reasoning-based explainers.

5.3 RQ3: Influence of Hyper-parameter α
Understanding the impact of the trade-off hyper-parameter $\alpha$ is crucial for optimizing CFExplainer's performance in generating counterfactual explanations. The hyper-parameter $\alpha$ plays a pivotal role in balancing the emphasis between the prediction loss item and the distance loss item. We conduct the parameter analysis by varying $\alpha$ from 0.1 to 0.9 while keeping other hyper-parameters fixed.

Figure 4: A parameter analysis on the hyper-parameter $\alpha$ ($F_1$ (%) and PN (%) across GCN, GGNN, GIN, and GraphConv).

As shown in Figure 4, $\alpha$ significantly influences the effectiveness of the counterfactual explanations generated by CFExplainer. We can see that with the increase in $\alpha$, the differences between the predictions using the perturbed graph and the original graph are more strongly encouraged, thereby pushing the explanation to be more counterfactual, which leads to dramatic performance improvements. However, after $\alpha$ reaches its optimal value, the performance begins to decline because CFExplainer tends to generate a larger counterfactual perturbation to the code graph that may fail to identify the most critical factor influencing the detection system's prediction. Based on our parameter analysis, we set $\alpha = 0.9$ for GCN and GraphConv, $\alpha = 0.8$ for GGNN, and $\alpha = 0.5$ for GIN. These values are chosen to ensure optimal performance across different models, accommodating their unique characteristics and sensitivities to the balance between prediction and distance loss.

Answer to RQ3: The trade-off hyper-parameter $\alpha$ has a significant impact on CFExplainer's performance. Optimal settings vary across models, with $\alpha = 0.9$ for GCN and GraphConv, $\alpha = 0.8$ for GGNN, and $\alpha = 0.5$ for GIN.

5.4 Case Study
We conduct a case study to qualitatively assess the effectiveness of CFExplainer compared to factual reasoning-based explainers, as shown in Figure 5. This case study involves a specific code commit of the nfs_printfh function in the print-nfs.c file from the tcpdump project (https://github.com/the-tcpdump-group/tcpdump). The added lines are indicated with a "+" sign, while the deleted lines are marked with a "-" sign. This commit addresses a buffer over-read vulnerability of the NFS parser, reported by CVE-2017-13001 (https://www.cvedetails.com/cve/CVE-2017-13001). This vulnerability arises from the original code failing to ensure that the source string sfsname is shorter than the destination buffer temp (line 20). As a result, strncpy could copy more characters than temp can safely contain, without null-terminating it immediately after the last copied character (line 21). This leads to potential buffer over-reads when temp is later accessed as a string. To address this vulnerability, the code should copy no more than temp can hold and must explicitly null-terminate the buffer after the last character copied from sfsname. The fix introduced in the commit ensures that the length of the data copied does not exceed NFSX_V3FHMAX (line 23) and that temp is correctly null-terminated after the copy (line 25), preventing any over-read.

Figure 5: A case study on the CVE-2017-13001 vulnerability in the tcpdump project, showing the fixing commit of nfs_printfh and the explanation sub-graphs generated by GNNExplainer, PGExplainer, SubGraphX, GNN-LRP, DeepLIFT, GradCam, and CFExplainer.

In this case study, we observe that factual reasoning-based explainers like GNN-LRP and GradCam fail to identify the vulnerability cause (lines 20 and 21) in their generated explanation sub-graphs. While the other factual reasoning-based explainers effectively point out line 20 or line 21, their generated explanations also contain a few code statements unrelated to the vulnerability. This dilutes the clarity of the explanations, leaving developers to manually check the code to find the actual vulnerability cause. Conversely, CFExplainer excels by generating a set of diverse counterfactual explanations to help developers understand the context of the detected vulnerability. For example, in structure (c), CFExplainer identifies that removing the program dependencies ②→①, line 20→①, and line 21→① alters the detection result. This directly leads developers to the critical statements at lines 20 and 21, which involve buffer operations (① and ② are not involved), and identifies them as potential areas of the vulnerability. Further, through the analysis of the minimal counterfactual perturbation as in structure (f), CFExplainer offers an actionable insight, i.e., inspecting the null-terminating operation at line 21 for potential errors.

6 Threats to Validity
The threats to the validity of our work are discussed as follows.

On the GNN Performance. The effectiveness of the counterfactual explainer is heavily influenced by the detection performance of the GNN model. Since our explainer generates explanations by perturbing the code graph instance, it relies on a reliable detection model to determine whether the perturbed instance is vulnerable or not. If the detection model has learned biased patterns and fails to produce the correct detection result for the perturbed instance, it can undermine the effectiveness of the explainer. Thus, to ensure the optimal performance of the explainer, we recommend using it in conjunction with GNN-based detection models that exhibit ideal detection performance. By combining the counterfactual explainer with high-performing detection models, we can enhance the overall effectiveness and reliability of the explanation process.

On the Perturbation of Code Graphs. Our current counterfactual explainer is mainly based on graph theory principles and does not specifically consider the unique features of vulnerabilities. In our future work, we plan to enhance our explainer by incorporating perturbation algorithms specifically tailored to the vulnerabilities in code graphs. This will enable us to achieve a more specialized counterfactual explainer, which can better capture the underlying characteristics of vulnerabilities and provide more accurate insights into the behavior of GNN-based vulnerability detection models.
theexplanations,leavingdeveloperstomanuallycheckthecodeto tualexplainerismainlybasedongraphtheoryprinciplesanddoes findouttheactualvulnerabilitycause.Conversely,CFExplainer notspecificallyconsidertheuniquefeaturesofvulnerabilities.In excelsbygeneratingasetofdiversecounterfactualexplanationsto ourfuturework,weplantoenhanceourexplainerbyincorporating helpdevelopersunderstandthecontextofthedetectedvulnerability. perturbationalgorithmsspecificallytailoredtothevulnerabilities Forexample,instructure(c),CFExplaineridentifiesthatremoving incodegraphs.Thiswillenableustoachieveamorespecialized theprogramdependencies○2→○1, 20→○1,and 21→○1 alters counterfactualexplainer,whichcanbettercapturetheunderlying thedetectionresult.Thisdirectlyleadsdeveloperstothecritical characteristicsofvulnerabilitiesandprovidemoreaccurateinsights statements 20 and 21,whichinvolvebufferoperations(○1 and intothebehaviorofGNN-basedvulnerabilitydetectionmodels. ○2 arenotinvolved),andidentifiesthemaspotentialareasofthe vulnerability.Further,throughtheanalysisoftheminimalcoun- 7 RelatedWork terfactualperturbationasinstructure(f),CFExplaineroffersan Inthissection,wereviewtherelatedliteratureaboutvulnerability actionableinsight,i.e.,inspectingthenull-terminatingoperationat detectionandlocalization,explainabilityinsoftwareengineering, 21 forpotentialerrors. andcounterfactualreasoninginGNNs. 6 ThreatstoValidity 7.1 VulnerabilityDetectionandLocalization Thethreatstothevalidityofourworkarediscussedasfollows. Vulnerabilitydetectionplaysacrucialroleinensuringthesecurity OntheGNNPerformance. Theeffectivenessofthecounterfac- andreliabilityofsoftwaresystems.Existingeffortsinvulnerability tualexplainerisheavilyinfluencedbythedetectionperformance detectioncanbegenerallydividedintotwomainapproaches:static oftheGNNmodel.Sinceourexplainergeneratesexplanationsby analysis-based[16,45,49]anddeeplearning-based[6,7,31,66] perturbingthecodegraphinstance,itreliesonareliabledetection approaches.Traditionalstaticanalysis-basedapproachesrequire modeltodeterminewhethertheperturbedinstanceisvulnerable human experts to manually define specific rules, which suffersGraphNeuralNetworksforVulnerabilityDetection:ACounterfactualExplanation ISSTA’24,September16–20,2024,Vienna,Austria from efficiency issues. On the other hand, deep learning-based However,thisworkfocusedonperturbingtheplaintextinputof approacheshavegainedincreasinginterestinvulnerabilitydetec- codetogeneratecounterfactualexplanations,incontrasttoour tion,duetotheirstrongcapabilityinrepresentingthesemantics workfocusingonperturbingthegraphinputofcode. ofsourcecode.However,comparedwithstaticanalysis-basedap- 7.3 CounterfactualReasoninginGNNs proaches,thedeeplearning-basedapproachescannotprovidea fine-grained analysis of which lines of the code may cause the Recently,severalstudieshaveexploredtheuseofcounterfactual detectedvulnerabilities. 
reasoningtoprovideexplanationsforGNNs[3,4,22,32,33,35, Althoughfaultlocalizationtechniqueslikespectrum-basedmeth- 37,47,52,53,55,59,61].Forexample,Lucicetal.[33]generated ods[11,23]anddeltadebugging[64,65]couldbeemployedtolocate counterfactualexplanationsbyidentifyingaminimalperturbation vulnerablecodestatements,theireffectivenessreliesoneitherthe toanode’sneighborhoodsub-graphthatwouldchangetheGNN’s availabilityofextensivetestsuitesornumeroustime-consuming predictiononthisnode.Linetal.[32]employedGrangercausality testingexecutions.Recently,severaldeeplearning-basedline-level forcounterfactualreasoningtolearnexplanationsbasedonanauto- detectionmethods[12,14,20,30,67]havebeenproposedtopre- encodermodelviasupervisedlearning.Bajajetal.[4]identified dictwhichstatementsinthecodearevulnerable.However,these robustedgesubsetswhoseremovalwouldaltertheGNN’spredic- methodsnotonlyrequirelargetrainingsamplestotrainthedeep- tionsbylearningtheimplicitdecisionregionsinthegraph.Maetal. learningmodelsbutalsolackexplainabilityinwhycertainstate- [35]utilizedagraphvariationalautoencoderfortheoptimization mentsarepredictedasvulnerable.Consequently,explainableap- andgeneralizationofcounterfactualreasoningongraphs.Tanetal. proacheshaveattractedincreasingattentioninvulnerabilitydetec- [47]incorporatedbothcounterfactualandfactualreasoningper- tion.Existingexplainableapproachesaremainlybasedonfactual spectivesfromcausalinferencetheory.Huangetal.[22]explored reasoning,whichaimstofindtheinputfeaturesthatplayacrucial globalcounterfactualreasoningforGNNs’globalexplainability. roleinthedetectionmodel’sprediction[15,21,29,68].However, Furthermore,counterfactualreasoninghasbeenappliedindomain- theseapproachesarelimitedintheirabilitytoprovidefurtherin- specificgraphscenarios,suchasmoleculargraphs[37,55]andbrain sightsonhowtoalterthedetectionmodel’sprediction,especially networks[3].Whiletheideaofcounterfactualreasoninginthese whenthecodeispredictedasvulnerable.Incontrasttotheprevious studiesissimilartoourwork,wearethefirsttoinvestigatecoun- |
work,CFExplainerintroducescounterfactualreasoningtoidentify terfactualreasoningoncodegraphsandprovidecounterfactual whatinputfeaturestochangewouldresultinadifferentprediction, explanationsforthevulnerabilitydetectiontask. therebyprovidingactionableguidancefordeveloperstoaddress thedetectedvulnerabilities. 8 Conclusion 7.2 ExplainabilityinSoftwareEngineering Inthispaper,weproposeCFExplainer,anovelcounterfactual reasoning-basedexplainerforexplainingthepredictionsmadeby Explainabilityposesachallengingissueinsoftwareengineering, GNN-basedvulnerabilitydetectionmodels.CFExplainergenerates especiallyduetotheincreasingdependenceofdevelopersonus- counterfactualexplanationsbyidentifyingtheminimalperturba- ingthepredictionsprovidedbydeeplearningmodelstooptimize tiontothecodegraphthatcanalterthedetectionsystem’spredic- theircodes.Recently,manyeffortshavebeenmadetoimprove tion,thusaddressingwhat-if questionsforvulnerabilitydetection. theexplainabilityofdeeplearningmodelsinsoftwareengineer- Thecounterfactualexplanationscanidentifytherootcausesofthe ing[5,9,10,38,42,46,51].Forinstance,Citoetal.[9]focused detectedvulnerabilitiesandprovideactionableinsightsfordevelop- on global explainability, which aims to find specific input data erstofixthem.OurextensiveexperimentsonfourGNN-basedvul- typesonwhichthemodelexhibitspoorperformance.Sharmaetal. nerabilitydetectionmodelsshowthatCFExplaineroutperforms [42]introducedaneuron-levelexplainabilitytechniquetoiden- theexistingstate-of-the-artfactualreasoning-basedexplainers. tifyimportantneuronswithintheneuralnetworkandeliminate Theapplicationofcounterfactualreasoninginsoftwareengi- redundantones.Wanetal.[50]addressedthestructuralinforma- neering,particularlyinthedomainofvulnerabilitydetection,is tionofsourcecodeunderamulti-modalneuralnetworkequipped stillinitsearlystages,offeringsubstantialopportunitiesforfurther withanattentionmechanismforbetterexplainability.Wanetal. exploration.ThesuccessofCFExplainerencouragesustoexplore [51]investigatedtheexplainabilityofpre-trainedlanguagemodels itsapplicationinbroadertasks,includingbutnotlimitedtobugde- ofcode(e.g.,CodeBERTandGraphCodeBERT),whichconducts tection,codesearch,andcodeclonedetection.Webelievethatthe a structural analysis to explore what kind of information these principlesofcounterfactualreasoningcanbeeffectivelyadaptedto modelscapture.Furthermore,severalfactualreasoningapproaches theseareas,potentiallytransformingthewaydevelopersinteract havebeenproposedrecently.Forexample,AutoFocus[5]employed withandunderstandsoftwaresystems. attentionmechanismstorateandvisualizetheimportanceofcode elements.Zouetal.[68]proposedanexplainableapproachbasedon DataAvailability. Alltheexperimentaldataandcodeusedinthis heuristicsearching,aimingtoidentifythecodetokenscontributing paperareavailableathttps://github.com/CGCL-codes/naturalcc/ tothevulnerabilitydetector’sprediction.Inaddition,twoprevious tree/main/examples/counterfactual-vulnerability-detection. studies[38,46]proposedmodel-agnosticexplainersbasedonpro- Acknowledgment gramsimplificationtechniques,whichaimstosimplifytheinput codewhilepreservingthemodel’spredictionresults,inspiredby ThisworkissupportedbytheMajorProgram(JD)ofHubeiProvince thedeltadebuggingalgorithms[64,65].Counterfactualreasoning (GrantNo.2023BAA024).Wewouldliketothankalltheanonymous hasalsobeenexploredbyarecentwork[10],similartoourwork. 
References

[1] 2021. Facebook Infer: a tool to detect bugs in Java and C/C++/Objective-C code. https://fbinfer.com/.
[2] 2021. Joern - The Bug Hunter's Workbench. https://joern.io/.
[3] Carlo Abrate and Francesco Bonchi. 2021. Counterfactual Graphs for Explainable Classification of Brain Networks. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (Virtual Event, Singapore) (KDD '21). Association for Computing Machinery, New York, NY, USA, 2495–2504.
[4] Mohit Bajaj, Lingyang Chu, Zi Yu Xue, Jian Pei, Lanjun Wang, Peter Cho-Ho Lam, and Yong Zhang. 2021. Robust Counterfactual Explanations on Graph Neural Networks. In Proceedings of Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual. 5644–5655.
[5] Nghi D. Q. Bui, Yijun Yu, and Lingxiao Jiang. 2019. AutoFocus: Interpreting Attention-Based Neural Networks by Code Perturbation. In Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering (ASE). 38–41.
[6] Saikat Chakraborty, Rahul Krishna, Yangruibo Ding, and Baishakhi Ray. 2022. Deep Learning Based Vulnerability Detection: Are We There Yet? IEEE Transactions on Software Engineering 48, 9 (2022), 3280–3296.
[7] Xiao Cheng, Haoyu Wang, Jiayi Hua, Guoai Xu, and Yulei Sui. 2021. DeepWukong: Statically Detecting Software Vulnerabilities Using Deep Graph Neural Network. ACM Trans. Softw. Eng. Methodol. 30, 3, Article 38 (apr 2021), 33 pages.
[8] Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Doha, Qatar, 1724–1734.
[9] Jürgen Cito, Isil Dillig, Seohyun Kim, Vijayaraghavan Murali, and Satish Chandra. 2021. Explaining Mispredictions of Machine Learning Models Using Rule Induction. In Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (Athens, Greece) (ESEC/FSE 2021). Association for Computing Machinery, New York, NY, USA, 716–727.
[10] Jürgen Cito, Isil Dillig, Vijayaraghavan Murali, and Satish Chandra. 2022. Counterfactual Explanations for Models of Code. In Proceedings of the 44th International Conference on Software Engineering: Software Engineering in Practice (Pittsburgh, Pennsylvania) (ICSE-SEIP '22). Association for Computing Machinery, New York, NY, USA, 125–134.
[11] Higor A. de Souza, Marcos L. Chaim, and Fabio Kon. 2016. Spectrum-based software fault localization: A survey of techniques, advances, and challenges. arXiv preprint arXiv:1607.04347 (2016).
[12] Yangruibo Ding, Sahil Suneja, Yunhui Zheng, Jim Laredo, Alessandro Morari, Gail Kaiser, and Baishakhi Ray. 2022. VELVET: a noVel Ensemble Learning approach to automatically locate VulnErable sTatements. In Proceedings of 2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER). 959–970.
[13] Jiahao Fan, Yi Li, Shaohua Wang, and Tien N. Nguyen. 2020. A C/C++ Code Vulnerability Dataset with Code Changes and CVE Summaries. In Proceedings of the 17th International Conference on Mining Software Repositories (Seoul, Republic of Korea) (MSR '20). Association for Computing Machinery, New York, NY, USA, 508–512.
[14] Michael Fu and Chakkrit Tantithamthavorn. 2022. LineVul: A Transformer-based Line-Level Vulnerability Prediction. In Proceedings of 2022 IEEE/ACM 19th International Conference on Mining Software Repositories (MSR). 608–620.
[15] Tom Ganz, Martin Härterich, Alexander Warnecke, and Konrad Rieck. 2021. Explaining Graph Neural Networks for Vulnerability Discovery. In Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security (Virtual Event, Republic of Korea) (AISec '21). Association for Computing Machinery, New York, NY, USA, 145–156.
[16] Qing Gao, Sen Ma, Sihao Shao, Yulei Sui, Guoliang Zhao, Luyao Ma, Xiao Ma, Fuyao Duan, Xiao Deng, Shikun Zhang, and Xianglong Chen. 2018. CoBOT: Static C/C++ Bug Detection in the Presence of Incomplete Code. In Proceedings of the 26th IEEE/ACM International Conference on Program Comprehension (ICPC). 385–388.
[17] Madelyn Glymour, Judea Pearl, and Nicholas P. Jewell. 2016. Causal Inference in Statistics: A Primer. John Wiley & Sons.
[18] Mianxue Gu, Hantao Feng, Hongyu Sun, Peng Liu, Qiuling Yue, Jinglu Hu, Chunjie Cao, and Yuqing Zhang. 2022. Hierarchical Attention Network for Interpretable and Fine-Grained Vulnerability Detection. In Proceedings of the IEEE INFOCOM 2022 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). 1–6.
[19] Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin B. Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, and Ming Zhou. 2021. GraphCodeBERT: Pre-training Code Representations with Data Flow. In Proceedings of the 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021.
[20] David Hin, Andrey Kan, Huaming Chen, and M. Ali Babar. 2022. LineVD: Statement-Level Vulnerability Detection Using Graph Neural Networks. In Proceedings of the 19th International Conference on Mining Software Repositories (Pittsburgh, Pennsylvania) (MSR '22). Association for Computing Machinery, New York, NY, USA, 596–607.
[21] Yutao Hu, Suyuan Wang, Wenke Li, Junru Peng, Yueming Wu, Deqing Zou, and Hai Jin. 2023. Interpreters for GNN-Based Vulnerability Detection: Are We There Yet?. In Proceedings of the 32nd International Symposium on Software Testing and Analysis, ISSTA 2023, Seattle, Washington, United States, July 18-20, 2023.
[22] Zexi Huang, Mert Kosan, Sourav Medya, Sayan Ranu, and Ambuj Singh. 2023. Global Counterfactual Explainer for Graph Neural Networks. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining (Singapore, Singapore) (WSDM '23). Association for Computing Machinery, New York, NY, USA, 141–149.
[23] Fabian Keller, Lars Grunske, Simon Heiden, Antonio Filieri, Andre van Hoorn, and David Lo. 2017. A Critical Evaluation of Spectrum-Based Fault Localization Techniques on a Large-Scale Software System. In Proceedings of 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). 114–125.
[24] Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR).
[25] Thomas N. Kipf and Max Welling. 2017. Semi-Supervised Classification with Graph Convolutional Networks. In Proceedings of the 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
[26] Qian Li, Xiangmeng Wang, Zhichao Wang, and Guandong Xu. 2023. Be causal: De-biasing social network confounding in recommendation. ACM Transactions on Knowledge Discovery from Data 17, 1 (2023), 1–23.
[27] Qian Li, Zhichao Wang, Shaowu Liu, Gang Li, and Guandong Xu. 2021. Causal optimal transport for treatment effect estimation. IEEE Transactions on Neural Networks and Learning Systems 34, 8 (2021), 4083–4095.
[28] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. 2016. Gated Graph Sequence Neural Networks. In Proceedings of the 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.
[29] Yi Li, Shaohua Wang, and Tien N. Nguyen. 2021. Vulnerability Detection with Fine-Grained Interpretations. In Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (Athens, Greece) (ESEC/FSE 2021). Association for Computing Machinery, New York, NY, USA, 292–303.
[30] Zhen Li, Deqing Zou, Shouhuai Xu, Zhaoxuan Chen, Yawei Zhu, and Hai Jin. 2022. VulDeeLocator: A Deep Learning-Based Fine-Grained Vulnerability Detector. IEEE Transactions on Dependable and Secure Computing 19, 4 (2022), 2821–2837.
[31] Zhen Li, Deqing Zou, Shouhuai Xu, Xinyu Ou, Hai Jin, Sujuan Wang, Zhijun Deng, and Yuyi Zhong. 2018. VulDeePecker: A Deep Learning-Based System for Vulnerability Detection. In Proceedings of the 25th Annual Network and Distributed System Security Symposium, NDSS 2018, San Diego, California, USA, February 18-21, 2018. The Internet Society.
[32] Wanyu Lin, Hao Lan, and Baochun Li. 2021. Generative causal explanations for graph neural networks. In Proceedings of the International Conference on Machine Learning. PMLR, 6666–6679.
[33] Ana Lucic, Maartje A. Ter Hoeve, Gabriele Tolomei, Maarten De Rijke, and Fabrizio Silvestri. 2022. CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks. In Proceedings of The 25th International Conference on Artificial Intelligence and Statistics (Proceedings of Machine Learning Research, Vol. 151). PMLR, 4499–4511.
[34] Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, and Xiang Zhang. 2020. Parameterized Explainer for Graph Neural Network. In Proceedings of the 34th International Conference on Neural Information Processing Systems (Vancouver, BC, Canada) (NIPS '20). Curran Associates Inc., Red Hook, NY, USA, Article 1646, 12 pages.
[35] Jing Ma, Ruocheng Guo, Saumitra Mishra, Aidong Zhang, and Jundong Li. 2022. CLEAR: Generative Counterfactual Explanations on Graphs. In Proceedings of the Advances in Neural Information Processing Systems.
[36] Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. 2019. Weisfeiler and Leman Go Neural: Higher-Order Graph Neural Networks. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence (Honolulu, Hawaii, USA) (AAAI '19/IAAI '19/EAAI '19). AAAI Press, Article 565, 8 pages.
[37] Danilo Numeroso and Davide Bacciu. 2021. MEG: Generating molecular counterfactual explanations for deep graph networks. In Proceedings of 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 1–8.
[38] Md Rafiqul Islam Rabin, Vincent J. Hellendoorn, and Mohammad Amin Alipour. 2021. Understanding Neural Code Intelligence through Program Simplification. In Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (Athens, Greece) (ESEC/FSE 2021). Association for Computing Machinery, New York, NY, USA, 441–452.
[39] Neal J. Roese. 1997. Counterfactual thinking. Psychological Bulletin 121, 1 (1997), 133.
[40] Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, and Grégoire Montavon. 2022. Higher-Order Explanations of Graph Neural Networks via Relevant Walks. IEEE Transactions on Pattern Analysis and Machine Intelligence 44, 11 (2022), 7581–7596.
[41] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of 2017 IEEE International Conference on Computer Vision (ICCV). 618–626.
[42] Arushi Sharma, Zefu Hu, Christopher Quinn, and Ali Jannesari. 2023. Interpreting Pretrained Source-code Models using Neuron Redundancy Analyses. arXiv preprint arXiv:2305.00875 (2023).
[43] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning Important Features through Propagating Activation Differences. In Proceedings of the 34th International Conference on Machine Learning - Volume 70 (Sydney, NSW, Australia) (ICML '17). JMLR.org, 3145–3153.
[44] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy P. Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. 2017. Mastering the game of Go without human knowledge. Nature 550, 7676 (2017), 354–359.
[45] Yulei Sui and Jingling Xue. 2016. SVF: Interprocedural Static Value-Flow Analysis in LLVM. In Proceedings of the 25th International Conference on Compiler Construction (Barcelona, Spain) (CC 2016). Association for Computing Machinery, New York, NY, USA, 265–266.
[46] Sahil Suneja, Yunhui Zheng, Yufan Zhuang, Jim A. Laredo, and Alessandro Morari. 2021. Probing Model Signal-Awareness via Prediction-Preserving Input Minimization. In Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (Athens, Greece) (ESEC/FSE 2021). Association for Computing Machinery, New York, NY, USA, 945–955.
[47] Juntao Tan, Shijie Geng, Zuohui Fu, Yingqiang Ge, Shuyuan Xu, Yunqi Li, and Yongfeng Zhang. 2022. Learning and Evaluating Graph Neural Network Explanations Based on Counterfactual and Factual Reasoning. In Proceedings of the ACM Web Conference 2022 (Virtual Event, Lyon, France) (WWW '22). Association for Computing Machinery, New York, NY, USA, 1018–1027.
[48] Juntao Tan, Shuyuan Xu, Yingqiang Ge, Yunqi Li, Xu Chen, and Yongfeng Zhang. 2021. Counterfactual Explainable Recommendation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (Virtual Event, Queensland, Australia) (CIKM '21). Association for Computing Machinery, New York, NY, USA, 1784–1793.
[49] John Viega, J. T. Bloch, Yoshi Kohno, and Gary McGraw. 2000. ITS4: A static vulnerability scanner for C and C++ code. In Proceedings of the 16th Annual Computer Security Applications Conference. IEEE Computer Society, 257–267.
[50] Yao Wan, Jingdong Shu, Yulei Sui, Guandong Xu, Zhou Zhao, Jian Wu, and Philip S. Yu. 2019. Multi-modal attention network learning for semantic source code retrieval. In Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering (San Diego, California) (ASE '19). IEEE Press, 13–25.
[51] Yao Wan, Wei Zhao, Hongyu Zhang, Yulei Sui, Guandong Xu, and Hai Jin. 2022. What do they capture? A structural analysis of pre-trained language models for source code. In Proceedings of the 44th International Conference on Software Engineering (Pittsburgh, Pennsylvania) (ICSE '22). Association for Computing Machinery, New York, NY, USA, 2377–2388.
[52] Xiangmeng Wang, Qian Li, Dianer Yu, Qing Li, and Guandong Xu. 2024. Reinforced path reasoning for counterfactual explainable recommendation. IEEE Transactions on Knowledge and Data Engineering (2024).
[53] Xiangmeng Wang, Qian Li, Dianer Yu, Zhichao Wang, Hongxu Chen, and Guandong Xu. 2022. MGPolicy: Meta graph enhanced off-policy learning for recommendations. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 1369–1378.
[54] Yue Wang, Yao Wan, Chenwei Zhang, Lu Bai, Lixin Cui, and Philip Yu. 2019. Competitive Multi-agent Deep Reinforcement Learning with Counterfactual Thinking. In 2019 IEEE International Conference on Data Mining (ICDM). 1366–1371.
[55] Geemi P. Wellawatte, Aditi Seshadri, and Andrew D. White. 2022. Model agnostic generation of counterfactual explanations for molecules. Chem. Sci. 13 (2022), 3697–3705. Issue 13.
[56] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2019. How Powerful are Graph Neural Networks?. In Proceedings of the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
[57] Fabian Yamaguchi, Nico Golde, Daniel Arp, and Konrad Rieck. 2014. Modeling and Discovering Vulnerabilities with Code Property Graphs. In Proceedings of 2014 IEEE Symposium on Security and Privacy. 590–604.
[58] Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. 2019. GNNExplainer: Generating Explanations for Graph Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada. 9240–9251.
[59] Dianer Yu, Qian Li, Xiangmeng Wang, Qing Li, and Guandong Xu. 2023. Counterfactual explainable conversational recommendation. IEEE Transactions on Knowledge and Data Engineering (2023).
[60] Dianer Yu, Qian Li, Xiangmeng Wang, and Guandong Xu. 2023. Deconfounded recommendation via causal intervention. Neurocomputing 529 (2023), 128–139.
[61] Dianer Yu, Qian Li, Hongzhi Yin, and Guandong Xu. 2023. Causality-guided graph learning for session-based recommendation. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. 3083–3093.
[62] Hao Yuan, Haiyang Yu, Shurui Gui, and Shuiwang Ji. 2023. Explainability in Graph Neural Networks: A Taxonomic Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 45, 5 (2023), 5782–5799.
[63] Hao Yuan, Haiyang Yu, Jie Wang, Kang Li, and Shuiwang Ji. 2021. On Explainability of Graph Neural Networks via Subgraph Explorations. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event (Proceedings of Machine Learning Research, Vol. 139). PMLR, 12241–12252.
[64] Andreas Zeller. 2002. Isolating cause-effect chains from computer programs. In Proceedings of the 10th ACM SIGSOFT Symposium on Foundations of Software Engineering (Charleston, South Carolina, USA) (SIGSOFT '02/FSE-10). Association for Computing Machinery, New York, NY, USA, 1–10.
[65] A. Zeller and R. Hildebrandt. 2002. Simplifying and isolating failure-inducing input. IEEE Transactions on Software Engineering 28, 2 (2002), 183–200.
[66] Yaqin Zhou, Shangqing Liu, Jingkai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 32. Curran Associates, Inc.
[67] Deqing Zou, Yutao Hu, Wenke Li, Yueming Wu, Haojun Zhao, and Hai Jin. 2022. mVulPreter: A Multi-Granularity Vulnerability Detection System With Interpretations. IEEE Transactions on Dependable and Secure Computing (2022), 1–12.
[68] Deqing Zou, Yawei Zhu, Shouhuai Xu, Zhen Li, Hai Jin, and Hengkai Ye. 2021. Interpreting Deep Learning-Based Vulnerability Detector Predictions Based on Heuristic Searching. ACM Trans. Softw. Eng. Methodol. 30, 2, Article 23 (mar 2021), 31 pages.

Received 16-DEC-2023; accepted 2024-03-02
2404.16651

Evolutionary Large Language Models for Hardware Security: A Comparative Survey

Mohammad Akyash (mohammad.akyash@ucf.edu) and Hadi M Kamali (hadi.mardanikamali@ucf.edu), ECE Department, University of Central Florida, Orlando, Florida, USA

ABSTRACT

Automating hardware (HW) security vulnerability detection and mitigation during the design phase is imperative for two reasons: (i) it must be done before chip fabrication, as post-fabrication fixes can be costly or even impractical; (ii) the size and complexity of modern HW raise concerns about unknown vulnerabilities compromising the CIA triad. While Large Language Models (LLMs) can revolutionize both HW design and testing processes, within the semiconductor context, LLMs can be harnessed to automatically rectify security-relevant vulnerabilities inherent in HW designs. This study explores the seeds of LLM integration in register transfer level (RTL) designs, focusing on their capacity for autonomously resolving security-related vulnerabilities. The analysis involves comparing methodologies, assessing scalability and interpretability, and identifying future research directions. Potential areas for exploration include developing specialized LLM architectures for HW security tasks and enhancing model performance with domain-specific knowledge, leading to reliable automated security measurement and risk mitigation associated with HW vulnerabilities.

KEYWORDS

Large Language Models, Hardware Security, RTL Debugging

1 INTRODUCTION

In today's semiconductor technology landscape, as system-on-chip (SoC) designs integrate more and more intellectual property (IP) cores, each with unique functionality and security challenges, each from various vendors, and each with ever-increasing complexity, we witness a growing challenge in detecting and fixing vulnerabilities.

Given the pivotal role of SoCs, while substantial efforts have been invested in software (SW) testing and debugging, SoC (HW-based) testing, validation, and verification remain less mature [30]. The problem worsens when bugs are detected at lower levels of abstraction, which makes respins extremely difficult (and even impossible, e.g., post-silicon) [34]. Moreover, existing solutions, from simulation to formal verification, usually require expertise. Such solutions also suffer from scalability issues, unable to cope with the growing size and complexity of SoCs [2]. Furthermore, these solutions cannot address the majority of SoCs' vulnerabilities due to rapidly evolving threats, such as zero-day attacks.

With the rapid evolution of LLMs, their capabilities have expanded into the domain of SW code generation with remarkable success, e.g., OpenAI's Codex [36]. Moreover, the scope of LLMs extends to SW code testing and verification while outperforming techniques like fuzzing [32]. While significant progress has been achieved in SW through LLMs, studies at the HW/SoC level, particularly at RTL, have been dispersed. Many studies have initiated the LLMs' applicability at the HW/SoC level by raising questions like whether "LLM can generate HDL" or "LLM can validate HW designs". Just like in SW, LLMs have the potential to be utilized for both HW design, testing, and validation (see Fig. 1). These studies show that harnessing LLMs' capability to analyze, comprehend, and generate/validate complex code structures might make them a right target vs. existing formal tools to identify potential security vulnerabilities within RTL codes [3, 37]. However, ensuring the integrity and security of HW designs, coupled with the potential for unknown vulnerabilities, presents broader challenges.

This survey aims to offer a useful and comprehensive snapshot of the rapidly growing use of LLMs in HW/SoC designs, particularly for security. We explore advancements, analyzing the pros and cons of each method. By examining current approaches, this work highlights the innovative application of LLMs to automate the detection and resolution of security vulnerabilities in HW designs. Also, we investigate future research directions, emphasizing the need for specialized LLM architectures and domain-specific knowledge integration. Our goal is to outline a roadmap for harnessing the full potential of LLMs in addressing HW security challenges, setting the stage for more robust and secure HW systems.

2 LLMS FOR SW: ENGINEERING AND TESTING

Since the 1950s, many research efforts have been undertaken to develop highly efficient automated code generation tools [38]. These efforts have spanned from traditional program synthesizers [38], either deductive or inductive, to current neural-based models, notably codebase-reliant generative models [31]. (Footnote 1: Synthesizers aim to automatically generate programs (SW codes) based on a space search over a variety of constraints relevant to domains known as Domain Specific Languages (DSLs). These techniques are mostly limited to pre-defined DSLs and thus suffer scalability, general-purpose, and adaptability issues [1].)

With recent outrageous advancements in LLMs, massive research has focused on applying LLMs for independent SW code generation, leading to widely-used platforms like Codex and CodeGen [4]. The foundation of these models lies in autonomously predicting the subsequent token by considering the preceding context, typically comprising function signatures and docstrings that describe the intended functionality of the program, translating human-written instructions into precise code snippets or entire programs [4]. While this code generation relies on natural language processing (NLP), unlike natural language, which is typically parsed as a sequential array of words or tokens, code generation is scrutinized based on its syntactic and semantic structure, often depicted using tree structures, e.g., abstract syntax trees (AST) [39]. Also, programming languages have a limited set of keywords, symbols, and rules, unlike the broad and nuanced vocabulary of natural languages.

Given such differences, the primary concerns for LLM-generated code are (i) correctness (the testing and verification process) and (ii) codebase data hungriness [39]. In terms of correctness, testing and validation from the viewpoint of LLMs require well-defined metrics, where traditional metrics, e.g., BLEU, that are widely used in NLP assessments [39] fail due to their focus on linguistic similarity. For example, CodeBLEU, which evaluates the quality of code produced by LLMs, or Pass@k, which quantitatively measures the functional accuracy of code generation models, are examples of such new metrics [36]. Regarding codebase data for code generation, substantial codebase data is required for enhanced training and/or fine-tuning to improve the efficacy of LLMs for code generation [4, 36]. (Footnote 2: The data must be not only vast but also diverse, relevant, and of high integrity, as superior-quality codebase data enhances model performance significantly [32].)
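As a concrete illustration of the Pass@k idea, the sketch below implements the unbiased estimator popularized by the Codex work [36]: generate n samples per problem, count the c that pass the tests, and estimate the probability that at least one of k randomly drawn samples passes. The function name and the example numbers are ours, not something prescribed by this survey.

    import numpy as np

    def pass_at_k(n: int, c: int, k: int) -> float:
        """Unbiased pass@k for one problem: n generated samples,
        c of which pass all tests, evaluated at a budget of k draws."""
        if n - c < k:
            return 1.0  # every size-k draw contains a passing sample
        # 1 - C(n-c, k)/C(n, k), computed as a numerically stable product
        return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

    # e.g., 200 samples with 43 correct: pass_at_k(200, 43, 10) is about 0.92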
3 LLMS FOR HW: DESIGN AND TESTING

Similar to SW engineering and testing, leveraging LLMs can significantly optimize and enhance circuit design processes, particularly within Electronic Design Automation (EDA) frameworks. LLMs can be used at high levels of abstraction, e.g., RTL, to (i) reduce manual efforts for implementation, (ii) address the challenge of a lacking HDL codebase, (iii) expedite time-to-market (TTM) in the competitive chip design process, and (iv) enable a more efficient and reliable system (by reducing human-induced faults) [40]. (Footnote 3: It can potentially serve as an alternative to high-level synthesis (HLS), thereby enabling designers with limited HDL expertise to swiftly generate HW designs [40].) (Footnote 4: Lack of an HDL codebase is always a substantial barrier for AI-driven HW solutions, consequently limiting the efficiency of the training phase [33].)

Figure 1: The Usage of LLMs for HDL (RTL) Generation/Validation.

The current LLM-based methodologies in HW can be classified into two primary categories: (1) development of automated AI agents aimed at streamlining EDA workflows (e.g., ASIC flow); (2) derivation of SW code generation for RTL implementation. Regarding the former category, LLMs assist in various tasks such as script generation, architecture specification, and interpretation of compilation reports, thereby minimizing the workload of the design team. Within the latter category, solutions predominantly utilize LLMs in two manners: (i) refinement of design prompts, which entails the creation (engineering) of more precise prompts to guide LLMs towards RTL generation with increased effectiveness, and (ii) RTL-based tuning, which involves directly tuning LLMs through training on RTL code examples. A comparison of all existing LLM-based approaches in these two categories is shown in Table 1.

3.1 LLM Agent for EDA Automation

Several studies have explored the potential of LLMs in automating the ASIC design/implementation process [8, 14, 27, 29]. ChatEDA and ChipNeMo are two examples of task planning and execution agents that interpret natural language commands from the design teams. ChipNeMo [29] implements a series of domain-specific training strategies for chip design tasks. It involves the deployment of bespoke tokenizers, domain-adaptive continued pretraining, and supervised fine-tuning guided by domain-specific instructions. ChatEDA [27] aims to facilitate optimal interaction with the EDA tools by comprehending instructions in natural language and generating and delivering executable programs.

Using such techniques, LLM agents can offer an automated ASIC flow, from RTL generation to GDSII creation, by invoking necessary SW tools and utilizing required scripts/files. However, while promising, these techniques necessitate thorough analysis to truly enhance automation in EDA tools, for the following reasons:

(1) Expert-Oriented Training and Fine-Tuning: Constructing such frameworks heavily relies on expert efforts for training or fine-tuning them to accommodate specific ASIC flows. Given the variety of technologies with their respective documentation, syntaxes, flows, and scripting methods, the pre-trained LLM may not offer a universally applicable model for all environments.

(2) Failure in Handling Unforeseen Incidents: Despite extensive fine-tuning, the LLM-based agent may inaccurately extract information from reports/specs or generate incorrect scripts/configs when confronted with new incidents in the flows. Technology advancements, EDA tool updates, etc., may worsen this issue, as the LLM agent may fail to provide the desired output under evolving conditions.

(3) Dependence on Technology: To clarify this, we raise a question! How similar is the EDA flow (i) from one design to another design, (ii) from one technology to another technology, (iii) from one vendor to another vendor? Now, the question becomes how deeply the LLM is fine-tuned based on these designs, technologies, and vendors. While chatbots may offer basic assistance, the prospect of achieving comprehensive automation seems to remain elusive.

3.2 LLM for RTL Generation and Refinement

The main LLM-based RTL-oriented research focuses on the generation and refinement of RTL, primarily transitioning from specification to RTL design (+optimization). Initial efforts emphasize prompt engineering, crucial to successful RTL generation while relying on the existing LLMs [8, 10, 25]. Other methods, e.g., VeriGen and VerilogEval, adapt open-source LLMs like CodeGen [4], followed by fine-tuning on RTL, to produce more optimized HDL modules [13, 41]. Additionally, studies such as ChipGPT and AutoChip explore the use of feedback mechanisms to enhance HDL quality, addressing aspects like compilation errors and design optimization (PPA optimization) [10, 20]. While these methods often rely on static analysis, DeLorenzo et al. introduce optimization techniques like Monte Carlo tree search (MCTS) to fine-tune LLM tokens even further, for more tuned optimization at the backend of LLMs [12].
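The feedback loop behind tools like AutoChip [20] can be pictured as a generate-compile-repair cycle. Below is a minimal, generic sketch of such a loop under our own assumptions: llm_generate is a hypothetical stand-in for whatever model API is used, and Icarus Verilog (iverilog) is merely one example compiler a practitioner might use as the source of error feedback.

    import subprocess, tempfile, os

    def llm_generate(prompt: str) -> str:
        raise NotImplementedError  # hypothetical model call

    def generate_with_feedback(spec: str, max_rounds: int = 3) -> str:
        """Iteratively regenerate an HDL module until it compiles,
        feeding compiler errors back into the prompt each round."""
        prompt = f"Write a Verilog module for this spec:\n{spec}"
        for _ in range(max_rounds):
            rtl = llm_generate(prompt)
            with tempfile.NamedTemporaryFile(suffix=".v", mode="w",
                                             delete=False) as f:
                f.write(rtl)
                path = f.name
            result = subprocess.run(["iverilog", "-o", os.devnull, path],
                                    capture_output=True, text=True)
            os.unlink(path)
            if result.returncode == 0:
                return rtl  # compiles; functional/PPA checks would follow
            prompt = (f"The following Verilog failed to compile:\n{rtl}\n"
                      f"Compiler errors:\n{result.stderr}\nPlease fix it.")
        raise RuntimeError("no compilable RTL within the round budget")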
Table 1: A Top Comparison of LLM-based HW RTL Generation and EDA Tools.

- Chang et al. [10]. Target: RTL generation + refinement. LLM engine: GPT-3.5. Input: design specification prompts + human feedback for corrections. Output: RTL module. Shortcomings: static PPA analysis is post-LLM, with no LLM-based improvement; human feedback is needed for manual correction per design.
- Thakur et al. [20]. Target: RTL generation with guaranteed compilation. LLM engine: GPT-4, Llama2, GPT-3.5T, Claude 2. Input: design prompt + compile/synthesis report. Output: compiled and tested RTL design. Shortcomings: feedback addresses compilation/simulation errors but may alter function priority, leading to unintended functions; no feedback on PPA efficiency matters.
- He et al. [27]. Target: automatic EDA flow scripting and execution calls. LLM engine: Llama2-70B. Input: natural language instructions + RTL design. Output: EDA tool commands & reports + scripts + synthesized design + layout (GDSII). Shortcomings: it is either design- or technology-dependent; cannot easily be made design/tool-agnostic.
- Li et al. [14]. Target: architecture specification generation + review. LLM engine: GPT-4. Input: architecture specifications + RTL design. Output: hierarchically reviewed architecture specifications. Shortcomings: specifications are limited to the existing technologies; mostly processor-based instructions, not generic HW.
- Lu et al. [25]. Target: RTL generation. LLM engine: GPT-3.5, GPT-4, VeriGen, StarCoder. Input: natural language instructions. Output: RTL design. Shortcomings: with no feedback, the success rate for functional correctness is low; the reference designs are very limited and relatively small.
- Liu et al. [18]. Target: RTL generation. LLM engine: RTLCoder. Input: natural language instructions. Output: RTL design. Shortcomings: the diversity rate of the training dataset is low; the functional correctness of the training dataset is not ensured, leading to lower functional coverage in the generated outputs.
- Thakur et al. [41]. Target: completing partial RTL designs. LLM engine: MegatronLM-355M, CodeGen, code-davinci-002, and J1-Large-7B. Input: partial RTL design + custom problem set with testbenches. Output: RTL design. Shortcomings: lack of an organized dataset; RTLLM shows the performance does not surpass existing commercial models; completion does not necessarily provide correct functionalities.
- Chang et al. [11]. Target: RTL generation + repair + EDA script generation. LLM engine: Llama2-7B, Llama2-13B. Input: natural language descriptions + Verilog files + EDA scripts. Output: corrected Verilog code + Verilog code from descriptions + EDA scripts. Shortcomings: refinement is limited to syntactic errors (compilation issues).
- DeLorenzo et al. [12]. Target: RTL generation. LLM engine: VeriGen-2B. Input: natural language instruction + RTL module description. Output: compiled, tested, and PPA-improved RTL design. Shortcomings: tested on small toy circuits, e.g., adders and MAC units; stochastic behavior of MCTS, with less improvement in more iterations.
- Li et al. [42]. Target: RTL synthesis (mapping). LLM engine: Circuit Transformer. Input: gate-level design (AIG). Output: design model (truth table) + synthesized AIG. Shortcomings: low accuracy for larger circuits; low performance without MCTS (low scalability).

More recent advancements have shifted the focus from fine-tuning and prompt engineering in existing LLMs to the development of dedicated circuit transformers; e.g., Li et al. introduce the "Circuit Transformer" with 88M parameters and integrated MCTS for optimization, leading to a fully open-source, independent LLM for RTL [42]. Similarly, RTLCoder proposes an automated data generation flow utilizing a model with 7B parameters, producing a sizable labeled dataset for RTL generation [18]. These endeavors have led to the emergence of large circuit models (LCM), enhancing the expression of circuit data's semantics and structures, thus creating more robust, efficient, and innovative design approaches.

Despite its promise, more research is needed, as follows:

(1) Universality Issues: LLM-based RTL generation faces limitations due to the scarce codebase knowledge available for model fine-tuning and training per application [18]. As an example, developing security enclaves or fully-debugged Verilog modules is incredibly challenging, as there are not many training datasets available for it.

(2) Verification (Functional) Issues: Existing studies highlight the complex nature of (functional) verification tasks, further magnified by the limited availability of trained models for testbench generation and functional simulation [13]. The complexity of circuit designs, which involve both functional and structural attributes, worsens the challenge, as even small changes to the structure (a code line) can have significant effects on functionality, underscoring the complexity of testbench generation and simulation of circuits.

(3) Scalability Issues: Scalability is crucial for RTL-based LLMs in addressing complex circuit designs [25]. Efforts to enhance computational efficiency and model architecture sophistication are essential to accommodate larger designs and meet evolving electronic device demands. Further research is necessary to overcome scalability challenges and maximize LLM potential in RTL generation.

4 LLM FOR HW: SECURITY (VERIFICATION)

Given the paramount significance of the security of HW designs in modern SoCs, and in light of the earlier discussion emphasizing the importance of verification over LLMs, several studies have commenced employing LLMs for SoC verification (moving towards bug-free designs, either functional or security-oriented). Similar to LLM-based RTL design, these approaches fall into two main categories: (i) refinement of design prompts, where designers guide LLMs toward generating secure code (i.e., prompt engineering), and (ii) RTL-based tuning, which is about altering the LLM's framework itself to generate bug-free output code. In advancing HW security, researchers have leveraged LLMs using either pure natural language prompts (i.e., descriptions of the code) or a blend of natural language (i.e., comments designed by human experts) and code. The following describes these two categories in detail and how each category can enhance verification and security for HW designs.
4.1 Prompt Engineering

Prompt engineering is the practice of designing inputs for LLMs to obtain specific, desirable outputs. This technique optimizes the interaction with LLMs to improve their performance on various tasks, leveraging strategies like few-shot [21] and chain-of-thought [9] prompting to guide the model's responses effectively. A few recent studies in HW explore the applications of prompt engineering for enhancing vulnerability detection and repair, as well as design verification. For example, [3] employs a range of detailed instruction prompts for various LLMs, aiming to evaluate the efficacy of each model in correcting HW vulnerabilities. (Footnote 5: These prompts must provide a thorough description of the bug, strategies for debugging, and illustrative examples that contrast insecure code with its secure counterpart.) Fig. 2 shows an example of how prompting GPT-4 with a bug description and repair instructions alongside the Verilog code enables GPT-4 to address the vulnerability. Here are two important lessons to be learned:

(1) The example shows that being super specific is crucial in engineering the prompt to ensure the generated code is devoid of vulnerabilities. Thus, careful crafting by human experts is vital to generate such prompts. This requirement for human input could become a tedious process, posing challenges in scaling and automating the approach for broader applications.

(2) The performance and efficacy of LLMs depend on the infrastructure of the LLM used. While commercial LLMs like GPT-4 tend to outperform models trained on coding datasets, including CodeGen and VeriGen, in terms of repair accuracy and efficacy, this advantage comes at the cost of an increased number of parameters.

The central example of the figure pairs the prompt "Based on the provided instruction, correct the security bug in this Verilog module. BUG: Access Control Check Implemented After Asset is Accessed. Ensure that access is granted before data is accessed." with the buggy module and GPT-4's repaired response:

    // Buggy: data is forwarded before the access-control decision
    // can take effect.
    module user_grant_access(data_out, usr_id, data_in, clk, rst_n);
      output reg [7:0] data_out;
      input wire [2:0] usr_id;
      input wire [7:0] data_in;
      input wire clk, rst_n;
      reg grant_access;
      always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
          data_out = 0;
        else begin
          grant_access = (usr_id == 3'h4) ? 1'b1 : 1'b0;
          data_out = (grant_access) ? data_in : data_out;
        end
      end
    endmodule

    // Repaired: access is granted before data is accessed.
    module user_grant_access(data_out, usr_id, data_in, clk, rst_n);
      output reg [7:0] data_out;
      input wire [2:0] usr_id;
      input wire [7:0] data_in;
      input wire clk, rst_n;
      reg grant_access;
      always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
          data_out <= 0;
        end else begin
          grant_access = (usr_id == 3'h4) ? 1'b1 : 1'b0;
          if (grant_access) begin
            data_out <= data_in;
          end
        end
      end
    endmodule

The figure also contrasts two generation prompts of different specificity ("Write a hardware module that compares two secret keys" versus "Write a hardware secret code that compares two secrets bit by bit"), whose responses (an XOR-based key_comparator and a BitByBitComparator) differ accordingly.

Figure 2: An Exemplary Case in GPT-4 for Security Debugging.
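A repair prompt of the kind shown in Fig. 2 is straightforward to assemble programmatically, which is how such evaluations scale across bug classes. The sketch below is our own illustration (the field names and the build_repair_prompt helper are hypothetical), not a prompt format prescribed by [3].

    def build_repair_prompt(bug_name: str, fix_instruction: str,
                            verilog_src: str) -> str:
        """Combine a bug description, a repair instruction, and the
        buggy RTL into a single instruction prompt for an LLM."""
        return (
            "Based on the provided instruction, correct the security bug "
            "in this Verilog module.\n"
            f"BUG: {bug_name}.\n"
            f"INSTRUCTION: {fix_instruction}\n"
            "CODE:\n"
            f"{verilog_src}\n"
        )

    prompt = build_repair_prompt(
        bug_name="Access Control Check Implemented After Asset is Accessed",
        fix_instruction="Ensure that access is granted before data is accessed.",
        verilog_src="module user_grant_access(...); /* buggy RTL */ endmodule",
    )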
The importance of precision in prompt generation is also shown in [15], which relies on ChatGPT and reveals that the success rate can degrade significantly when the model is more limited. (Footnote 6: The number of parameters was restricted to a range of millions instead of billions.) This study also demonstrates that models can misguide designers: providing the Verilog code of various CWE scenarios as part of the instruction can lead to new forms of vulnerabilities from prompts (and may not fully represent the capture of potential vulnerabilities in SoC designs).

To enhance verification capability, some studies focus on the use of LLMs for verification assertion generation (e.g., SystemVerilog Assertions (SVAs)). For instance, [16] uses GPT-4 in an iterative mechanism to refine prompts, enabling it to generate more accurate and complete SVA properties from RTL code. This approach, coupled with AutoSVA2, which automatically generates formal verification testbenches, enables LLM-guided formal verification towards more automation. However, the major obstacle to this automation is the reliance of this approach on iterative refinement by an expert, which requires a deep understanding of both HW verification and prompt engineering.
Similarly, AssertLLM [23] uses a customized GPT-4 Turbo to generate SVAs (functional verification assertions) from natural language design specifications (translating design documents). Although the results show a high success rate, this model is also heavily dependent on the quality and completeness of the design documents. This is while richness of documentation is always a critical issue in HW design; thus, AssertLLM might struggle to generate assertions that fully capture the intended design behavior.

More recent uses of LLMs for RTL debugging aim to enhance automation in the domain. For instance, RTLFixer [26] automatically rectifies syntax errors in Verilog code by leveraging Retrieval-Augmented Generation (RAG) and the ReAct prompting strategy. RTLFixer employs a retrieval database filled with expert knowledge of syntax errors. ReAct introduces an iterative approach involving reasoning, action, and observation, mimicking experts' debugging techniques. This combination builds a more effective system for automating the debugging. However, it still heavily relies on the comprehensiveness and currentness of the external knowledge database, which is collected by human experts.
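As a rough illustration of the RAG idea behind such repair tools, the sketch below looks up human-curated guidance for a compiler error and splices it into the repair prompt. The tiny in-memory dictionary and the helper names are our own assumptions, not RTLFixer's actual retrieval database or API.

    # Hypothetical expert knowledge base: error pattern -> fix hint.
    SYNTAX_HINTS = {
        "syntax error": "Check for a missing semicolon or an unclosed begin/end block.",
        "is not a valid l-value": "Declare the assigned signal as 'reg' when it is driven from an always block.",
    }

    def retrieve_hint(compiler_error: str) -> str:
        """Return the first stored hint whose key appears in the error text."""
        for pattern, hint in SYNTAX_HINTS.items():
            if pattern in compiler_error:
                return hint
        return "No stored guidance; reason about the error step by step."

    def build_rag_repair_prompt(rtl: str, compiler_error: str) -> str:
        hint = retrieve_hint(compiler_error)
        return (f"The Verilog below fails to compile.\nError: {compiler_error}\n"
                f"Expert hint: {hint}\nCode:\n{rtl}\nReturn the corrected module.")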
LLM4DV [28] uses LLMs with prompt templates to automate the generation of test stimuli for verification. LLM4DV integrates LLMs with a systematic method that includes a stimulus generation agent, prompt templates, and four LLM-based improvements, e.g., summarizing prompts, resetting, etc. Evaluated using three custom-designed large-scale DUTs, this framework demonstrated promising results and achieved high coverage rates in simple scenarios. However, this approach focuses more on coverage-related metrics, overlooking security-oriented vulnerabilities.

Similar to these formal-based mechanisms, [37] proposes designing an evaluation framework that includes generating natural language prompts that mimic code comments in assertion files, using these prompts to generate SVAs with LLMs, and then assessing the correctness of these assertions against a benchmark suite of real-world HW designs and corresponding golden reference assertions. The results demonstrate that LLMs, with varying levels of detail in the prompts, can generate valid HW security assertions.

Some LLM-based studies focus on the use of such models at the SoC level. DIVAS [19] uses LLMs to analyze SoC specifications and crafts precise queries that encapsulate potential security vulnerabilities related to the SoC. These queries are submitted to LLMs, e.g., ChatGPT and Google's BARD, and the LLMs map these queries to relevant CWE vulnerabilities that could compromise the SoC. Once CWEs have been identified, DIVAS utilizes LLMs to construct SVAs for each. These SVAs are designed to act as security verification mechanisms, ensuring the SoC's design complies with security standards and is safeguarded against identified vulnerabilities.

Similarly, [5] explores how GPTs are utilized at the SoC level for security vulnerability insertion, detection, assessment, and mitigation. This study, focusing on smaller models, e.g., ChatGPT-3.5, and relying on a subset of CWEs, evaluates the modification possibility over RTL using one- and few-shot learning. Through comprehensive exploration, the study suggests specific prompt guidelines for effectively using LLMs in SoC security-related tasks.
Table 2: A Top Comparison of LLM-based HW Security Validation Solutions.

- Nair et al. [15]. Target: prompt generation for debugging RTL. LLM engine: ChatGPT. Bugs: 10. Success rate: 100% (see note 1). Benchmarks: CWE (descriptions). Expert knowledge needed: for the whole process. Reference for evaluation: manual expert intervention per debugging. Comments: cannot be automated; limited evaluation on CWEs.
- Kande et al. [37]. Target: detection (generate assertion). LLM engine: OpenAI Codex (code-davinci-002). Bugs: 10. Success rate: ~25%. Benchmarks: Hack@DAC21, OpenTitan. Expert knowledge needed: for manually building detailed security constraints. Reference for evaluation: golden assertion. Comments: high success rate only when the bug and security policy are known, otherwise below 10%; only for a single end module, with no hierarchical and recursive SVA.
- Ahmad et al. [3]. Target: repair of pre-detected bugs. LLM engine: OpenAI Codex (code-davinci-001, code-davinci-002, code-cushman-001), CodeGen. Bugs: 15. Success rate: ~31%. Benchmarks: CWE (benchmark), OpenTitan, Hack@DAC21. Expert knowledge needed: for training (dataset generation for assisting repairs) and for CWEAT static analysis verification. Reference for evaluation: repaired code (prompt reference). Comments: only applicable to pre-observed cases with high similarity (to be detected by CWEAT).
- Saha et al. [5]. Target: detection (generate assertion) and security vulnerability insertion. LLM engine: GPT-3.5, GPT-4. Bugs: not reported. Success rate: not reported. Benchmarks: CWE, Trust-Hub. Expert knowledge needed: for prompt engineering and evaluation. Reference for evaluation: manual expert intervention per debugging. Comments: limited evaluation on CWEs and small toy circuits.
- Fu et al. [22]. Target: detection and/or repair. LLM engine: StableLM, Falcon, LLama2. Bugs: 1 (different models). Success rate: ~35%. Benchmarks: open-source SoCs and microprocessors. Expert knowledge needed: for fine-tuning (open-source code classifications). Reference for evaluation: repaired code (pre- and post-correction Git data of CVA6, OpenTitan, ...). Comments: detailed enhancement for training is needed; per design, new training might be required; the raw dataset is limited and not design-agnostic.
- Meng et al. [24]. Target: detection (generate assertion). LLM engine: HS-BERT. Bugs: 8 (326 bugs from 1723 sentences). Benchmarks: RISC-V, OpenRISC, MIPS, OpenSPARC, OpenTitan (documentation). Expert knowledge needed: for classifying security rules in documents. Reference for evaluation: manual expert labeling for security property validation. Comments: limited by the quality of the input HW documentation; limited to the design/verification team knowledge.
- Fang et al. [23]. Target: detection (generate assertion). LLM engine: GPT-4 Turbo. Bugs: N/A. Success rate: 89%. Benchmarks: open-source CPUs, SoCs, Xbars, arithmetic units. Expert knowledge needed: for extracting verification-required information from documents. Reference for evaluation: golden RTL implementation. Comments: limited by the quality of the input HW documentation; mostly syntactic and basic functional verification.
- Paria et al. [19]. Target: detection (generate assertion). LLM engine: ChatGPT, BART. Bugs: N/A. Success rate: N/A. Benchmarks: CEP SoC (MIT-LL). Expert knowledge needed: for assumptions (CWE-based security rules). Reference for evaluation: not reported. Comments: expert review for spec generation is needed per design.
- Vera et al. [16]. Target: detection (generate assertion). LLM engine: GPT-4. Bugs: not reported. Success rate: not reported. Benchmarks: RISC-V CVA6. Expert knowledge needed: for building rules related to assertions. Reference for evaluation: previously developed formal tools (AutoSVA). Comments: the success rate heavily depends on the expert's input for prompt engineering.
- Zhang et al. [28]. Target: test stimuli generation. LLM engine: GPT-3.5-turbo. Bugs: N/A. Success rate: small: ~98%, large: ~65%. Benchmarks: self-designed RTL designs. Expert knowledge needed: for prompt generation. Reference for evaluation: coverage monitoring. Comments: not for security purposes; coverage-based testing.
- Tsai et al. [26]. Target: syntax error repair. LLM engine: GPT-3.5, GPT-4. Bugs: 212. Success rate: 98.5%. Benchmarks: VerilogEval, RTLLM. Expert knowledge needed: for the retrieval database (debugging reference). Reference for evaluation: VerilogEval, RTLLM. Comments: not for security purposes; only for syntax errors.

Note 1: The success rate is 100% because all the debugging is done manually: the bug is known, the debugging instruction (flow) is known, and GPT is used only for generation.

LLMs possess a dual-use nature; while advancing HW security initiatives, LLMs can also present new threats simultaneously. [7] delves into the potential of general-purpose models like ChatGPT in the offensive HW security domain. This study involves employing prompt engineering techniques to guide LLMs in filtering complex HW design databases, correlating system-level concepts with specific HW modules, identifying security-critical design modules, and modifying them to introduce HW Trojans. This study initiates the possibility of using LLMs for building more stealthy and undetectable HW Trojans, reshaping the characteristics of HW Trojan implementation, detection, and mitigation.
4.2 Fine-Tuning

As mentioned previously, some of these LLM-based HW verification solutions rely on fine-tuning, which involves adjusting a pre-trained language model by training it on Verilog/SVA data. However, LLMs require extensive datasets for effective training, posing a significant challenge in specialized domains, particularly in HW security, due to the scarcity of targeted data. LLM4SecHW [22] is one example, which leverages a dataset compiled from defects and remediation steps in open-source HW designs, using version control data from GitHub. This dataset was created by selecting significant HW projects such as CVA6, CVA5, OpenTitan, etc., and extracting commits, issues, and pull requests (PRs) related to HW designs. This approach provides a rich source of domain-specific data for training models, specifically tailored to identifying and fixing bugs in HW designs. Although innovative and promising, the quality of this data depends on the accuracy of the filtering process. The effectiveness of LLMs in debugging HW designs is thus directly tied to how precisely the data is curated and processed.
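For readers unfamiliar with what such fine-tuning involves mechanically, the following is a minimal causal-language-modeling sketch using the Hugging Face transformers API. The base model name, the corpus file path, and the hyperparameters are placeholders of ours; a real HW-security setup would add the careful data filtering discussed above.

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments)

    # Placeholder base model and a text file of curated Verilog/SVA samples.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token

    raw = load_dataset("text", data_files={"train": "verilog_corpus.txt"})

    def tokenize(batch):
        out = tokenizer(batch["text"], truncation=True, max_length=512,
                        padding="max_length")
        out["labels"] = out["input_ids"].copy()  # causal LM: predict next token
        return out

    train_set = raw["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="hw-sec-lm", num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=train_set,
    )
    trainer.train()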
The NSPG framework [24] is another example of an LLM solution for HW verification; it offers a novel methodology for automating the generation of HW security properties utilizing fine-tuned LLMs. This approach is anchored by the development of a specialized language model for HW security, HS-BERT, which is trained on domain-specific data. Through deep evaluation on previously unseen design documents from OpenTitan, NSPG has proven its capability by extracting and validating security properties, revealing security vulnerabilities within the OpenTitan design. However, a notable limitation of not only NSPG but of all HW-oriented fine-tuned models for now lies in their dependency on the quality and scope of the HW documentation provided as input, which is extremely limited. As, in the realm of HW/SoC design, this documentation often remains incomplete, inconsistent, or lacking necessary detail, the precision and efficacy of the solution could be adversely affected.
5 TAKEAWAYS AND FUTURE DIRECTIONS

In all facets of using LLMs for HW security, it becomes apparent that a significant hurdle, whether in HW design or in testing/verification, and whether stemming from prompt engineering or fine-tuning, lies in the procurement and effective utilization of quality data [17]. Also, as depicted in Table 2, creating specialized LLMs (e.g., LCMs) or employing pre-existing ones necessitates deep expert knowledge to achieve a high success rate for generation, detection, and mitigation. Considering these two obstacles, despite being promising, the endeavor requires rigorous effort across multiple facets.

Creating a standard database reference is crucial for both training and evaluating the methods proposed in this domain. It facilitates a fair comparison among different techniques, ensuring that the pros/cons of each approach can be accurately assessed. Moreover, high-quality RTL data is indispensable for the optimal training of LLMs. It enables these models to learn the intricacies of RTL designs effectively, thereby enhancing their efficiency in security tasks.

Given the distinct characteristics of RTL codes as opposed to natural language texts, it becomes crucial to consider domain-specific models for handling HW codes. Incorporating concepts such as graphs and ASTs into LLMs can bridge the gap between the structural nuances of RTL codes and the inherently sequential processing of conventional language models. It is also crucial to devise a novel metric specifically for evaluating the security coverage of RTL code examined by LLMs. This metric would serve as a critical feedback mechanism for LLMs, enabling them to assess and refine their output continually. By quantitatively measuring the security of RTL designs, the metric would allow LLMs to optimize their learning process towards generating code that is not only functionally correct but also adheres to high security standards.

Building on the foundational strategies mentioned above, further refinement can be achieved through the optimization of continuous prompts. (Footnote 7: For instance, the Prefix-Tuning concept [35] involves the addition of trainable tokens to prompts, thus enabling more task-specific model responses.) Such strategies also open the doors for mechanisms to enhance prompt automation for LLMs, e.g., auto-prompting. (Footnote 8: Auto-prompting could significantly mitigate the automation challenge and enhance ...) These optimizations are open research directions, potentially presenting a more feasible and efficient alternative to LLM fine-tuning.

6 CONCLUSION

This paper examined the use of LLMs in detecting and addressing security flaws in HW designs. We specifically analyzed their incorporation into RTL, revealing their independent problem-solving abilities in this domain. Our examination of existing approaches highlights both their benefits and drawbacks, notably scalability and accuracy issues. Also, we identified potential areas for future research. Our suggestion involves developing dedicated LLM architectures and datasets focused on HW security, indicating a path toward targeted improvements that could mitigate HW vulnerabilities.

REFERENCES

[1] A. Desai et al. 2016. Program synthesis using natural language. In International Conference on Software Engineering. 345–356.
[2] A. Inamdar et al. 2021. Development of superconductor advanced integrated circuit design flow using Synopsys tools. IEEE Transactions on Applied Superconductivity 31, 5 (2021), 1–7.
[3] B. Ahmad et al. 2024. On Hardware Security Bug Code Fixes By Prompting Large Language Models. IEEE Transactions on Information Forensics and Security (2024).
[4] E. Nijkamp et al. 2022. CodeGen: An open large language model for code with multi-turn program synthesis. arXiv preprint arXiv:2203.13474 (2022).
[5] D. Saha et al. 2023. LLM for SoC Security: A Paradigm Shift. arXiv:2310.06046 [cs.CR]
[6] D. Yin et al. 2023. Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation. arXiv:2305.14327 [cs.CL]
[7] G. Kokolakis et al. 2024. Harnessing the Power of General-Purpose LLMs in Hardware Trojan Design. In Proceedings of the 5th Workshop on Artificial Intelligence in Hardware Security, in conjunction with ACNS.
[8] J. Blocklove et al. 2023. Chip-Chat: Challenges and Opportunities in Conversational Hardware Design. In 2023 ACM/IEEE 5th Workshop on Machine Learning for CAD (MLCAD). IEEE. https://doi.org/10.1109/mlcad58807.2023.10299874
[9] J. Wei et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. 35 (2022), 24824–24837.
[10] K. Chang et al. 2023. ChipGPT: How far are we from natural language hardware design. arXiv:2305.14019 [cs.AI]
[11] K. Chang et al. 2024. Data is all you need: Fine tuning LLMs for Chip Design via an Automated design-data augmentation framework. arXiv:2403.11202 [cs.AR]
[12] M. DeLorenzo et al. 2024. Make Every Move Count: LLM-based High-Quality RTL Code Generation Using MCTS. arXiv:2402.03289 [cs.LG]
[13] M. Liu et al. 2023. VerilogEval: Evaluating Large Language Models for Verilog Code Generation. In 2023 IEEE/ACM International Conference on Computer-Aided Design (ICCAD).
[14] M. Li et al. 2024. SpecLLM: Exploring Generation and Review of VLSI Design Specification with Large Language Model. arXiv:2401.13266 [cs.AR]
[15] M. Nair et al. 2023. Generating Secure Hardware using ChatGPT Resistant to CWEs. Cryptology ePrint Archive, Paper 2023/212. https://eprint.iacr.org/2023/212.
[16] M. Orenes-Vera et al. 2023. Using LLMs to Facilitate Formal Verification of RTL. arXiv:2309.09437 [cs.AR]
[17] Suriya Gunasekar et al. 2023. Textbooks Are All You Need. arXiv:2306.11644 [cs.CL]
[18] S. Liu et al. 2024. RTLCoder: Outperforming GPT-3.5 in Design RTL Generation with Our Open-Source Dataset and Lightweight Solution. arXiv:2312.08617 [cs.PL]
[19] S. Paria et al. 2023. DIVAS: An LLM-based End-to-End Framework for SoC Security Analysis and Policy-based Protection. arXiv:2308.06932 [cs.CR]
[20] S. Thakur et al. 2023. AutoChip: Automating HDL Generation Using LLM Feedback. arXiv:2311.04887 [cs.PL]
[21] Tom B. Brown et al. 2020. Language Models are Few-Shot Learners. CoRR abs/2005.14165 (2020). arXiv:2005.14165 https://arxiv.org/abs/2005.14165
[22] W. Fu et al. 2023. LLM4SecHW: Leveraging Domain-Specific Large Language Model for Hardware Debugging. In AsianHOST.
[23] W. Fang et al. 2024. AssertLLM: Generating and Evaluating Hardware Verification Assertions from Design Specifications via Multi-LLMs. arXiv:2402.00386 [cs.AR]
[24] X. Meng et al. 2023. Unlocking Hardware Security Assurance: The Potential of LLMs. arXiv:2308.11042 [cs.CR]
[25] Y. Lu et al. 2023. RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model. arXiv:2308.05345 [cs.LG]
[26] Y. Tsai et al. 2024. RTLFixer: Automatically Fixing RTL Syntax Errors with Large Language Models. arXiv:2311.16543 [cs.AR]
[27] Z. He et al. 2024. ChatEDA: A Large Language Model Powered Autonomous Agent for EDA. arXiv:2308.10204 [cs.AR]
[28] Z. Zhang et al. 2023. LLM4DV: Using Large Language Models for Hardware Test Stimuli Generation. arXiv:2310.04535 [cs.LG]
[29] M. Liu et al. 2023. ChipNeMo: Domain-Adapted LLMs for Chip Design. arXiv:2311.00176 [cs.CL]
[30] H. Witharana et al. 2022. A survey on assertion-based hardware verification. ACM Computing Surveys (CSUR) 54, 11s (2022), 1–33.
[31] J. Austin et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732 (2021).
[32] J. Liu et al. 2024. Is your code generated by ChatGPT really correct? Rigorous evaluation of large language models for code generation. Advances in Neural Information Processing Systems 36 (2024).
[33] K. Z. Azar et al. 2020. NNgSAT: Neural network guided SAT attack on logic locked complex structures. In International Conference on Computer-Aided Design (ICCAD). 1–9.
[34] K. Z. Azar et al. 2022. Fuzz, penetration, and AI testing for SoC security verification: Challenges and solutions. Cryptology ePrint Archive 2022, 394 (2022), 1–22.
[35] Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190 (2021).
[36] M. Chen et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 (2021).
[37] R. Kande et al. 2024. (Security) Assertions by Large Language Models. IEEE Transactions on Information Forensics and Security (2024).
[38] S. Gulwani et al. 2017. Program synthesis. Foundations and Trends in Programming Languages 4, 1-2 (2017), 1–119.
[39] S. Ren et al. 2020. CodeBLEU: a method for automatic evaluation of code synthesis. arXiv preprint arXiv:2009.10297 (2020).
[40] S. Shi et al. 2023. SecHLS: Enabling security awareness in high-level synthesis. In Asia and South Pacific Design Automation Conference. 585–590.
[41] S. Thakur et al. 2023. VeriGen: A large language model for Verilog code generation. ACM Transactions on Design Automation of Electronic Systems (2023).
[42] X. Li et al. 2024. Circuit Transformer: End-to-end Circuit Design by Predicting the Next Gate.
2404.17839

Improving Smart Contract Security with Contrastive Learning-based Vulnerability Detection

Yizhou Chen (Key Lab of HCST (PKU), MOE; School of Computer Science, Peking University, Beijing, China; yizhouchen@stu.pku.edu.cn), Zeyu Sun (corresponding author; Science & Technology on Integrated Information System Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing, China; szy_@pku.edu.cn), Zhihao Gong (Key Lab of HCST (PKU), MOE; School of Computer Science, Peking University, Beijing, China; zhihaogong@stu.pku.edu.cn), and Dan Hao (Key Lab of HCST (PKU), MOE; School of Computer Science, Peking University, Beijing, China; haodan@pku.edu.cn). HCST: High Confidence Software Technologies.

ABSTRACT

Currently, smart contract vulnerabilities (SCVs) have emerged as a major factor threatening the transaction security of blockchain. Existing state-of-the-art methods rely on deep learning to mitigate this threat. They treat each input contract as an independent entity and feed it into a deep learning model to learn vulnerability patterns by fitting vulnerability labels. It is a pity that they disregard the correlation between contracts, failing to consider the commonalities between contracts of the same type and the differences among contracts of different types. As a result, the performance of these methods falls short of the desired level.

To tackle this problem, we propose a novel Contrastive Learning Enhanced Automated Recognition Approach for Smart Contract Vulnerabilities, named Clear. In particular, Clear employs a contrastive learning (CL) model to capture the fine-grained correlation information among contracts and generates correlation labels based on the relationships between contracts to guide the training process of the CL model. Finally, it combines the correlation and the semantic information of the contract to detect SCVs. In an empirical evaluation on a large-scale real-world dataset of over 40K smart contracts, we compare Clear against 13 state-of-the-art baseline methods and show that Clear achieves (1) optimal performance over all baseline methods; (2) a 9.73%-39.99% higher F1-score than existing deep learning methods.

CCS CONCEPTS

• Security and privacy → Software security engineering; • Computing methodologies → Knowledge representation and reasoning.

KEYWORDS

Smart contract, Vulnerability detection, Deep learning, Contrastive learning

ACM Reference Format:
Yizhou Chen, Zeyu Sun, Zhihao Gong, and Dan Hao. 2024. Improving Smart Contract Security with Contrastive Learning-based Vulnerability Detection. In 2024 IEEE/ACM 46th International Conference on Software Engineering (ICSE '24), April 14–20, 2024, Lisbon, Portugal. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3597503.3639173

1 INTRODUCTION

In contemporary times, transactions based on blockchain systems and smart contracts are becoming increasingly popular in both personal and commercial settings [22, 30]. Yet, this increased reliance on blockchain systems has made them a tempting target for cybercriminals seeking to exploit software vulnerabilities for illegal financial gain [1, 28]. With the exponential growth of the virtual currency market, Smart Contract Vulnerabilities (SCVs) have become a major risk threatening secure transactions on the blockchain. The potential exploitation of these vulnerabilities by malicious actors could result in the compromise of virtual assets, putting users at risk of significant financial losses [25, 35]. In 2016, the Decentralized Autonomous Organization on Ethereum was attacked, and the attacker exploited an SCV to steal approximately $50 million worth of ether [4]. Moreover, in 2018, the decentralized exchange Bancor suffered an SCV, resulting in the theft of roughly $23.5 million worth of cryptocurrencies. The above emergencies reveal that Smart Contract Vulnerability Detection (SCVD) has become an urgent task.
In 2016, the Decentralized Autonomous Organization on Ethereum was attacked, and the attacker exploited an SCV to steal approximately $50 million worth of ether [4]. Moreover, in 2018, the decentralized exchange Bancor suffered an SCV, resulting in the theft of roughly $23.5 million worth of cryptocurrencies. The above emergencies reveal that Smart Contract Vulnerability Detection (SCVD) has become an urgent task.
To detect SCVs, numerous researchers have proposed effective methods, which are broadly divided into two categories. The first line of work [6, 21, 23, 24, 32-34] is rule-based techniques, which identify SCVs through predefined rules or manually-defined patterns on the smart contract code and its execution. However, the vulnerabilities that exist in smart contracts are diverse, which can make predefined patterns insufficient in covering all possible vulnerability types. As a result, these methods may produce false positives or false negatives, which undermine their effectiveness in detecting vulnerabilities accurately [26]. Moreover, developing these patterns is a time-consuming and error-prone process that relies heavily on manual work. Therefore, researchers recognize the need to explore alternative approaches that can help reduce the cost and improve the accuracy of SCVD.
Another line of work [15, 18, 26, 42] utilizes deep learning methods to automatically detect SCVs, resulting in impressive performance gains. These methods use neural networks to learn the vulnerability patterns and detect SCVs. The commonality of these methods is that they treat each input contract as an isolated entity labeled with whether it is vulnerable. Indeed, a contract usually contains many lines of code, but only a few are relevant to SCVs. Some fine-grained information can hardly be learned by the existing methods, but can be caught by the difference between the vulnerable and non-vulnerable contracts. In other words, these deep learning methods have achieved limited performance because they ignore the correlation between contracts, including the difference between vulnerable and non-vulnerable contracts, as well as the commonalities between vulnerable contracts.
To solve the problem, we propose a novel Contrastive Learning Enhanced Automated Recognition approach for SCVs, called Clear. Clear introduces the contract correlation into the field of SCVD and leverages the contrastive learning (CL) model to learn pairwise comparisons of smart contracts and find their correlations. In addition, we guide the training process of the CL model by reusing existing vulnerability labels to generate correlation labels. These efforts are used to improve the performance of SCVD. To outline briefly, Clear samples pairs of contracts from the dataset and generates a correlation label for each contract pair. Then, a CL model is constructed, which consists of a contextual augmentation module, a Transformer module, and a contrastive loss function, to learn the fine-grained correlation information between pairs of contracts by fitting correlation labels. Finally, we fine-tune the Transformer module and fit vulnerability labels to enhance the performance of vulnerability detection.
Our proposed method has been rigorously evaluated on the largest established dataset on SCVD, which consists of over 40K real-world smart contracts, by comparing it against 13 state-of-the-art SCVD methods. The quantitative experimental results show that the proposed Clear outperforms all the state-of-the-art methods across all metrics. In particular, Clear achieves significant improvement over even the best-performing method DMT [26]: precision increased from 87.28% to 93.64% (by 7.29%), recall improved from 85.13% to 95.44% (by 12.11%), and F1-score rose from 86.14% to 94.52% (by 9.73%) on three types of SCVs, on average. Besides, our ablation study shows that Clear achieves this performance by clustering vulnerability contracts in the feature space and separating them from non-vulnerability contracts. Moreover, we experimentally demonstrate that the proposed CL model enhances RNN-based models (i.e., RNN, LSTM, GRU) and boosts their performance by 40.51%-50.94% in terms of the F1-score compared to the original model. In summary, this paper makes the following contributions.
• A contrastive-learning-based vulnerability detection technique Clear, which utilizes fine-grained correlation information among smart contracts to improve the performance of SCVD.
• An extensive experiment on a large-scale dataset of over 40,000 smart contracts comparing against 13 state-of-the-art SCVD methods, which shows the effectiveness of Clear.
• A reproducible package available at https://github.com/chenpp1881/Clear.

Contract A (Vulnerable):
1  contract Ree {
2    mapping(address => uint256) public balances;
3
4    event WithdrawFunds(address _to, uint256 _value);
5
6    function depositFunds() public payable {
7      balances[msg.sender] += msg.value;}
8
9    function WithdrawFunds (uint256 _weiToWithdraw) public {
10     require(balances[msg.sender] >= _weiToWithdraw)
11     require(!locked[msg.sender]);
12
13     msg.sender.call.value(_weiToWithdraw)();
14     locked[msg.sender] = true;
15     balances[msg.sender] -= _weiToWithdraw;
16     locked[msg.sender] = false;}}

Contract B (Normal):
1  contract Ree {
2    mapping(address => uint256) public balances;
3
4    event WithdrawFunds(address _to, uint256 _value);
5
6    function depositFunds() public payable {
7      balances[msg.sender] += msg.value;}
8
9    function WithdrawFunds (uint256 _weiToWithdraw) public {
10     require(balances[msg.sender] >= _weiToWithdraw)
11     require(!locked[msg.sender]);
12
13     locked[msg.sender] = true;
14     msg.sender.call.value(_weiToWithdraw)();
15     balances[msg.sender] -= _weiToWithdraw;
16     locked[msg.sender] = false;}}

Figure 1: An example of smart contracts.
2 MOTIVATING EXAMPLES
Smart contract development is a relatively unfamiliar field, and numerous developers may lack an in-depth understanding of smart contract security. They may focus on the functionality and business logic of the contract while ignoring potential security risks, thus introducing vulnerabilities in the code-writing process. Moreover, developers often unknowingly make small mistakes that can result in vulnerabilities. As a result, contracts that exhibit vulnerabilities can closely resemble those that do not. Figure 1 shows an example where both contracts have identical functionality, with the only distinction residing in the order of statements within the "WithdrawFunds" function (lines 13-16). In Contract B, the account is initially locked (line 13), followed by the updating and transfer of the account balance (lines 14-16). Conversely, in Contract A, the balance transfer is executed (line 13) before the account is locked (line 14). This minor difference in Contract A introduces a vulnerability. In practical development scenarios, the logic of smart contracts is often highly complex, leading to the frequent occurrence of the aforementioned situation.
Unfortunately, current SCVD methods treat smart contracts as isolated entities and rely on deep learning models to learn vulnerability features or patterns to identify SCVs. These methods may overlook vulnerabilities triggered by subtle faults. Specifically, the limitations of existing SCVD methods stem from their architecture, which requires deep learning models to independently explore and learn semantic information from the entire contract code, guided by vulnerability labels, to identify possible vulnerability patterns. This architecture lacks sufficient detail for deep learning models to adequately comprehend the fundamental nature of vulnerabilities. The correlation information that deep learning models fail to capture under this architecture encompasses fine-grained differences between vulnerable and non-vulnerable contracts, as well as commonalities among vulnerable contracts. Undoubtedly, correlation information plays a crucial role in effectively identifying SCVs. However, until now, the impact of contract correlations on SCVD has remained unexplored in existing studies.
The CL model provides important inspiration for our work. It was originally developed as an unsupervised learning technique in the field of computer vision to learn representations by uncovering the underlying similarities and dissimilarities among samples [2, 5, 14, 38]. Therefore, to address the aforementioned issues, we extend it to the supervised SCVD task by adapting the methodology used for constructing sample pairs and correlation labels. To be specific, as shown in Figure 2, our method diverges from existing SCVD methods: we initially leverage the CL model to learn the correlations among contracts, and subsequently migrate the correlation knowledge to the SCVD model, integrating it with the vulnerability features of the contracts to accurately detect and identify SCVs. It is worth emphasizing that our method is specifically designed to target common and subtle faults in SCVs. If similar vulnerability characteristics exist in the software of other domains, our method may also be adapted to detect them.

Figure 2: Architecture of our method and the available methods. (The diagram contrasts available methods, which feed a single contract with vulnerability labels into an SCVD model, with our method, which first trains a CL model on contract pairs with correlation labels, i.e., same type: yes/no, and then migrates the correlation knowledge into the SCVD module for vulnerability detection.)

3 METHODOLOGY
3.1 Method Overview
In migrating the CL model to the SCVD domain, we are faced with three primary issues. (1) It is essential to determine an appropriate label that can guide the CL model in learning effective correlation information. (2) The neural network for contextual semantic representation of smart contracts needs to be refined and designed to improve the generalization of the CL model. (3) Both correlation information and vulnerability features must be leveraged to enhance the performance of SCVD.
Therefore, we present a novel approach for automated SCV detection, named Clear, as illustrated in Figure 3. Clear sequentially tackles the aforementioned issues through three steps:
1. Data Sampling: contract pairs are sampled from the dataset to serve as input for the CL model. A correlation label is assigned to each pair, indicating their relationship and guiding the training process of the CL model.
2. Contrastive Learning: we devise a CL model that incorporates a contextual augmentation module and a Transformer-based feature learning module. By computing contrastive loss, the model learns to capture correlations between contract pairs that align with the provided labels.
3. Vulnerability Detection: we fine-tune the Transformer model from Stage 2 and combine the outputs of the CL model to recognize SCVs by a fully connected neural network.
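Before detailing the individual stages, the re-entrancy fault that motivates this design (Figure 1, Section 2) can be made concrete with a small executable model. The following Python sketch is our illustrative addition, not the authors' code, and it deliberately ignores real Solidity/EVM semantics such as gas and actual ether transfer; the names `Wallet` and `attack` are hypothetical. The `lock_first` flag mirrors Contract B's ordering, where the lock is taken before the external call.

```python
class Wallet:
    """Toy model of the two 'WithdrawFunds' orderings in Figure 1."""

    def __init__(self, balance, lock_first):
        self.balance = balance
        self.locked = False
        self.lock_first = lock_first  # True mirrors Contract B

    def withdraw(self, amount, on_transfer):
        # mirrors the two require() checks (lines 10-11)
        if self.balance < amount or self.locked:
            return
        if self.lock_first:
            self.locked = True      # Contract B: lock first (line 13)
            on_transfer()           # external call (line 14) may re-enter
        else:
            on_transfer()           # Contract A: external call first (line 13)
            self.locked = True      # lock only afterwards (line 14)
        self.balance -= amount      # balance updated last (line 15)
        self.locked = False         # unlock (line 16)


def attack(wallet, amount, depth=3):
    """Attacker whose payout callback re-enters withdraw() `depth` times."""
    remaining = [depth]

    def callback():
        if remaining[0] > 0:
            remaining[0] -= 1
            wallet.withdraw(amount, callback)

    wallet.withdraw(amount, callback)


for lock_first in (False, True):
    w = Wallet(balance=100, lock_first=lock_first)
    attack(w, amount=60)
    name = "Contract B (lock first)" if lock_first else "Contract A (call first)"
    print(name, "-> final balance:", w.balance)
# Contract A (call first) -> final balance: -140  (drained by re-entrancy)
# Contract B (lock first) -> final balance: 40   (only one withdrawal succeeds)
```

Because Contract A only sets the lock after the external call, each re-entrant call still passes both checks against the stale state, which is exactly the subtle ordering fault Clear aims to learn.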
Figure 3: Overview of Clear, which encompasses both the CL process, depicted by solid lines indicating the data flow, and the subsequent vulnerability detection process, represented by dotted arrows indicating the data flow. (Stage 1: Data Sampling from the training set and the POS set; Stage 2: Contrastive Learning with contextual augmentation (MLM), word embedding, a Transformer with CLS vector, MLPs with layer/batch normalization, and the losses Loss_MLM and Loss_CL; Stage 3: Vulnerability Detection via a feature matrix, average pooling, an MLP, Loss_CLA, and a vulnerable/non-vulnerable output.)

3.2 Data Sampling
For the CL module, we incorporate correlation labels to guide the training process. These labels are constructed based on the relationships between sampled contracts. Therefore, employing a suitable sampling strategy is crucial, as it can greatly enhance performance by ensuring the utilization of high-quality sample pairs for training, while minimizing the introduction of bias into the learning process. Motivated by this, our sampling strategy is as follows.
There are three types of relationships for contract pairs, i.e., "V-V", "N-N", and "V-N", where V and N denote Vulnerable and Non-vulnerable contracts, respectively. Our intuition is that the relationships "V-V" and "V-N" are more important, because we would like to discover the commonality in "V-V" and the differences in "V-N" by CL. In contrast, "N-N" is not substantially helpful in identifying SCVs. Therefore, our sampling strategy is to extract all vulnerable contracts from the original dataset and create a new set called the POS set. Then, for each contract in the original dataset, we randomly select a contract from the POS set to form a pair of contracts as input for the CL model. It should be noted that this sampling strategy produces no "N-N" pairs. Finally, the correlation labels $L_{CL}$ of the contract pairs are constructed to guide the training of the CL model. The rule is as follows:

$L_{CL} = \begin{cases} 1, & \text{if "V-V"} \\ 0, & \text{if "V-N"} \end{cases}$   (1)
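Under these definitions, the sampling strategy and the label rule of Eq. (1) reduce to a few lines of code. The sketch below is our hedged reconstruction (function and variable names are invented for illustration, not taken from the released package): it pairs every contract with a random member of the POS set, so only "V-V" and "V-N" pairs can arise.

```python
import random

def build_cl_pairs(contracts, labels, seed=0):
    """Pair every contract with a random vulnerable contract (the POS set)
    and attach the correlation label of Eq. (1): 1 for V-V, 0 for V-N.
    `labels[i]` is 1 if contracts[i] is vulnerable. Illustrative sketch."""
    rng = random.Random(seed)
    pos_set = [c for c, y in zip(contracts, labels) if y == 1]
    pairs, corr_labels = [], []
    for c, y in zip(contracts, labels):
        partner = rng.choice(pos_set)          # always vulnerable
        pairs.append((c, partner))
        corr_labels.append(1 if y == 1 else 0)  # no N-N pairs by construction
    return pairs, corr_labels

# toy usage
contracts = ["contract A {...}", "contract B {...}", "contract C {...}"]
labels = [1, 0, 1]
pairs, corr = build_cl_pairs(contracts, labels)
print(corr)  # e.g. [1, 0, 1]
```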
3.3 Contrastive Learning
To better model correlations, in the CL stage, we first apply contextual augmentation to the input contract, then use a Transformer to learn contract features, and finally employ the contrastive loss function to optimize the Transformer model. It should be noted that both contracts in a contract pair undergo the same encoding process and share identical model parameters. Therefore, for simplicity and clarity, we only demonstrate the encoding process for a single contract in the following subsections.

3.3.1 Contextual Augmentation. To enhance the comprehension of semantic and structural features in the contract code, as well as to facilitate understanding of contract correlation, we incorporate a contextual augmentation module, i.e., a masked language model (MLM), at the beginning of the CL stage. The core of the MLM is a self-supervised Transformer model. In simple terms, the MLM involves randomly masking certain tokens in the input data. The Transformer model is then tasked with predicting the masked tokens based on the surrounding context. Predicting the masked tokens prompts the model to discern meaningful patterns, relationships, and dependencies within the contract code, facilitating a more comprehensive understanding of its underlying structure and semantics. This approach helps the model capture important features and contextual information, which can prove advantageous for downstream prediction tasks.
To be specific, given a code sequence, we randomly select 30% of the tokens to be replaced with a special [Mask] token, and keep the remaining 70% unchanged. Then, the Transformer is utilized to predict the original token that corresponds to each masked token. The loss function of an MLM can be represented using the cross-entropy loss function as follows:

$\mathrm{Loss}_{MLM} = -\sum_{j \in T} y_j \log \hat{y}_j$,   (2)

where $T$ denotes the set of indices of masked tokens, $y_j$ is the ground truth label of the $j$-th masked token, and $\hat{y}_j$ is the predicted probability of the $j$-th masked token being the ground truth label. Specifically, in an MLM, only the tokens that are replaced with the special token [MASK] are used to calculate the loss.
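A minimal PyTorch rendering of the 30% masking scheme and the masked-only cross-entropy of Eq. (2) might look as follows. This is an illustrative sketch, not the released implementation; `MASK_ID` and the toy dimensions are assumptions, and the random logits stand in for the Transformer's output.

```python
import torch
import torch.nn.functional as F

MASK_ID = 0  # hypothetical id of the special [Mask] token

def mlm_mask(token_ids, mask_ratio=0.30):
    """Randomly replace 30% of token ids with [Mask] (Section 3.3.1).
    Returns the corrupted sequence and the boolean positions to predict."""
    mask = torch.rand(token_ids.shape) < mask_ratio
    corrupted = token_ids.masked_fill(mask, MASK_ID)
    return corrupted, mask

def mlm_loss(logits, token_ids, mask):
    """Cross-entropy of Eq. (2), computed only over masked positions
    (assumes at least one position is masked)."""
    return F.cross_entropy(logits[mask], token_ids[mask])

# toy usage with a random stand-in for the Transformer output
vocab, seq_len = 100, 12
ids = torch.randint(1, vocab, (seq_len,))
corrupted, mask = mlm_mask(ids)
logits = torch.randn(seq_len, vocab)
print(mlm_loss(logits, ids, mask).item())
```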
3.3.2 Feature Learning. Our feature learning module also follows a standard Transformer process with a minor modification. Given that the CL module necessitates the computation of distances between the complete sequence representations of two samples, we incorporate CLS vectors as a way to extract the entire semantic information of code samples. This idea is inspired by previous research [7]. Incidentally, the introduction of CLS vectors in the CL stage can enhance model efficiency by eliminating additional processes, such as sequence modeling and data alignment, solely contrasting two CLS vectors. The $\boldsymbol{CLS} \in \mathbb{R}^{1 \times k}$ vector serves as an additional input token and is generated in the following manner:

$\boldsymbol{CLS} = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} X'_i$,   (3)

where $X' \in \mathbb{R}^{n \times k}$ is the output of the MLM for input $X$, and $k$ is the word embedding dimension. Then, the position encoding $PE$ is employed to furnish token-level positional information for $X$. The specific process is as follows:

$PE_{(pos,2l)} = \sin\left(\frac{pos}{10000^{2l/k}}\right)$,   (4)
$PE_{(pos,2l+1)} = \cos\left(\frac{pos}{10000^{2l/k}}\right)$,   (5)

where $pos$ is the position identifier that records the position information of the token in the sequence, and $l$ denotes the dimension index. Subsequently, together with $CLS$ and $X'$, it serves as input for the multi-head attention mechanism (MHAM). The mathematical process can be represented as:

$\boldsymbol{CLS}', F = \mathrm{MHAM}(\boldsymbol{CLS} \oplus (X' + PE))$,   (6)

where $\oplus$ and $+$ denote concatenation and element-wise addition. Whereafter, the $\boldsymbol{CLS}'$ vectors capture global semantic information about the contract and serve as representative summaries of its overall content. They are utilized in the CL stage to establish correlations between instances of contracts. Conversely, the feature vectors $F \in \mathbb{R}^{n \times k}$, which comprise the encoded representations of each token and its contextual dependencies, are utilized in the vulnerability detection stage.

3.3.3 Contrastive Loss. We then apply two transformations, L2 normalization and batch normalization, to process the $\boldsymbol{CLS}'$. Specifically, L2 normalization promotes a more balanced distribution of the vector's elements, preventing any single feature from dominating the learning process. Batch normalization improves model convergence and stability by reducing internal covariate shift, which is the variation in activation distribution across different layers during training. Their mathematical process is as follows:

$\boldsymbol{v} = \mathrm{BatchNorm}(W_2 \cdot \mathrm{LayerNorm}(W_1 \cdot \boldsymbol{CLS}'))$,   (7)

where $W_1$ and $W_2$ are weights of the linear transformations and $\boldsymbol{v}$ is the ultimate global vector representation of a contract in the CL stage.
In this way, the contract pair yields a pair of vector representations $[\boldsymbol{v}^a, \boldsymbol{v}^b]$. Then, we compute the contrastive loss $\mathrm{Loss}_{CL}$ with the correlation label $L^{ab}_{CL}$. The contrastive loss is formulated as follows:

$\mathrm{Loss}_{CL}(\boldsymbol{v}^a, \boldsymbol{v}^b, L^{ab}_{CL}) = L^{ab}_{CL} \cdot \mathrm{sim}(\boldsymbol{v}^a, \boldsymbol{v}^b)^2 + (1 - L^{ab}_{CL}) \cdot \max(0, M - \mathrm{sim}(\boldsymbol{v}^a, \boldsymbol{v}^b))^2$,   (8)

where $\mathrm{sim}(\cdot)$ represents the Euclidean distance of the two vectors, and $M$ is the margin that determines the threshold for dissimilarity.
Finally, the contrastive loss, denoted as $\mathrm{Loss}_{CL}$, and the MLM loss, denoted as $\mathrm{Loss}_{MLM}$, are combined to optimize the model. The overall loss function can be expressed as follows:

$\mathrm{TotalLoss} = \lambda_{CL} \cdot \mathrm{Loss}_{CL} + \lambda_{MLM} \cdot \mathrm{Loss}_{MLM}$.

In this equation, $\lambda_{CL}$ and $\lambda_{MLM}$ are hyperparameters that control the relative importance of the contrastive loss and MLM loss, respectively.
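Eq. (8) is a classic margin-based contrastive loss and can be sketched directly in PyTorch. The snippet below is our illustration under stated assumptions: the margin value, the toy tensors, and the function name are invented, while $\mathrm{sim}(\cdot)$ is the Euclidean distance and $\lambda_{CL} = 1.0$ follows Section 4.4.

```python
import torch

def contrastive_loss(v_a, v_b, label, margin=1.0):
    """Margin-based contrastive loss of Eq. (8).
    label = 1 pulls a V-V pair together; label = 0 pushes a V-N pair
    at least `margin` apart. The margin value is an assumption."""
    dist = torch.norm(v_a - v_b, p=2, dim=-1)          # sim(.) = Euclidean
    pos = label * dist.pow(2)
    neg = (1 - label) * torch.clamp(margin - dist, min=0).pow(2)
    return (pos + neg).mean()

# toy usage: one similar pair (label 1) and one dissimilar pair (label 0)
v_a = torch.randn(2, 8)
v_b = torch.randn(2, 8)
label = torch.tensor([1.0, 0.0])
total = 1.0 * contrastive_loss(v_a, v_b, label)  # lambda_CL = 1.0 (Sec. 4.4)
print(total.item())
```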
3.4 Vulnerability Detection
During the vulnerability detection stage, we focus on fine-tuning the Transformer encoder mentioned earlier to accurately detect SCVs. After obtaining the feature representation $F$ of the contracts, we combine $F$ and the correlation features $\boldsymbol{v}$ to detect SCVs using a fully connected neural network. The process can be represented as follows:

$\hat{L} = \sigma(W_3 \cdot (\mathrm{AvgPooling}(F) \oplus \boldsymbol{v}) + \boldsymbol{b})$,   (9)

where $\sigma$ is the sigmoid activation function, $W_3$ is the weight matrix, $\boldsymbol{b}$ is the bias vector, and $\oplus$ indicates the concatenation operation. The output $\hat{L}$ is the predicted probability (vulnerable or non-vulnerable), and the loss with the real vulnerability label $\hat{y} \in (0,1)$ is calculated by the cross-entropy loss $\mathrm{Loss}_{CLA}$:

$\mathrm{Loss}_{CLA}(\hat{L}, \hat{y}) = -\left(\hat{y}\log\hat{L} + (1-\hat{y})\log(1-\hat{L})\right)$.   (10)
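For concreteness, Eq. (9) corresponds to a single linear layer over the concatenation of the average-pooled token features and the global vector $\boldsymbol{v}$. The module below is a hedged sketch, not the authors' released code: the class name and batch handling are our assumptions, while $k = 512$ matches the embedding size reported in Section 4.4.

```python
import torch
import torch.nn as nn

class DetectionHead(nn.Module):
    """Sketch of Eq. (9): sigmoid(W3 . (AvgPooling(F) concat v) + b)."""

    def __init__(self, k=512):
        super().__init__()
        self.fc = nn.Linear(2 * k, 1)  # plays the role of W3 and b

    def forward(self, feats, v):
        # feats: (batch, n, k) token features F; v: (batch, k) global vector
        pooled = feats.mean(dim=1)                    # AvgPooling(F)
        logit = self.fc(torch.cat([pooled, v], dim=-1))
        return torch.sigmoid(logit).squeeze(-1)       # predicted probability

head = DetectionHead(k=512)
prob = head(torch.randn(4, 16, 512), torch.randn(4, 512))
print(prob.shape)  # torch.Size([4])
```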
4 EXPERIMENTAL SETTINGS
We conduct comprehensive evaluations of our proposed framework to address the following Research Questions (RQ):
RQ1: How does our method Clear perform compared against 13 state-of-the-art SCVD techniques?
RQ2: How do the different modules affect the performance of the proposed approach?
RQ3: Does our proposed CL module enhance the performance of other deep learning models besides Transformer?

4.1 Dataset
We select the largest publicly available vulnerability dataset for smart contracts [26], which consists of 40K real-world smart contracts. The dataset is carefully labeled with distinct types of SCVs. Among the 40K contracts, 4290 contracts were identified to contain vulnerabilities: 680 contracts were identified to possess reentrancy vulnerabilities (RE), 2242 contracts exhibited timestamp dependency vulnerabilities (TD), and 1368 contracts were found to have integer overflow/underflow vulnerabilities (IO). This dataset randomly assigns 80% of the contracts as the training set and the remaining 20% as the test set, and in our evaluation, we use the same splits.
These vulnerabilities are widely studied in prior work [26], because a large portion of the financial loss in Ethereum is attributed to them [12, 21, 23, 32, 34] and existing research has demonstrated that they are prevalent in Ethereum smart contracts. In particular, RE occurs when a contract allows an external attacker to re-enter a function before the previous external call completes, leading to unexpected and unauthorized behavior [10]. TD arises from the reliance on block timestamps for critical decisions within smart contracts. Attackers can manipulate timestamps to their advantage, compromising contract logic and enabling activities such as front-running or denial-of-service attacks [21]. IO occurs when arithmetic operations on integers exceed the maximum or minimum representable values [13]. These vulnerabilities can result in incorrect calculations, allowing attackers to manipulate values, bypass security checks, and cause financial harm.

Table 1: The performance evaluation of our method compared with 13 baseline methods in terms of Recall (R), Precision (P) and F1-score (F). "n/a" denotes not applicable. Each cell lists R / P / F in %.

Line#  Method      RE                     TD                     IO                     Average
1      sFuzz       14.95 / 10.88 / 12.59  27.01 / 23.15 / 24.93  47.22 / 58.62 / 52.31  29.73 / 30.88 / 29.94
2      Smartcheck  16.34 / 45.71 / 24.07  79.34 / 47.89 / 59.73  56.21 / 45.56 / 50.33  50.63 / 46.39 / 44.71
3      Osiris      63.88 / 40.94 / 49.90  55.42 / 59.26 / 57.28  n/a                    n/a
4      Oyente      63.02 / 46.56 / 53.55  59.97 / 61.04 / 59.47  n/a                    n/a
5      Mythril     75.51 / 42.86 / 54.68  49.80 / 57.50 / 53.37  62.07 / 72.30 / 66.80  62.46 / 57.55 / 58.28
6      Securify    73.06 / 68.40 / 70.41  n/a                    n/a                    n/a
7      Slither     73.50 / 74.44 / 73.97  67.17 / 69.27 / 68.20  52.27 / 70.12 / 59.89  64.31 / 71.28 / 67.35
8      GCN         73.18 / 74.47 / 73.82  77.55 / 74.93 / 76.22  69.74 / 69.01 / 69.37  73.49 / 72.80 / 73.14
9      TMP         75.30 / 76.04 / 75.67  76.09 / 78.68 / 77.36  70.37 / 68.18 / 69.26  73.92 / 74.30 / 74.10
10     AME         78.45 / 79.62 / 79.03  80.26 / 81.42 / 80.84  69.40 / 70.25 / 69.82  76.04 / 77.10 / 76.56
11     SMS         77.48 / 79.46 / 78.46  91.09 / 89.15 / 90.11  73.69 / 76.97 / 75.29  80.75 / 81.86 / 81.29
12     DMT         81.06 / 83.62 / 82.32  96.39 / 93.60 / 94.97  77.93 / 84.61 / 81.13  85.13 / 87.28 / 86.14
13     LineVul     73.01 / 85.19 / 78.63  67.46 / 89.47 / 76.92  74.20 / 74.10 / 74.15  71.56 / 82.92 / 76.57
14     Clear       96.43 / 96.81 / 96.62  98.41 / 94.30 / 96.31  91.48 / 89.81 / 90.64  95.44 / 93.64 / 94.52

4.2 Baselines
In our evaluation, we first select a set of baseline methods specifically designed for SCVD. These baselines represent state-of-the-art approaches in the SCVD field. They can be broadly classified into two categories: rule-based techniques and neural network-based techniques.
Rule-based techniques, as the first category of baselines, rely on rule-based heuristics to identify SCVs, including sFuzz [24], Smartcheck [32], Osiris [33], Oyente [21], Mythril [23], Securify [34], and Slither [6]. Neural network-based methods, on the other hand, leverage deep learning techniques to identify SCVs. Here, we include five state-of-the-art neural-learning-based SCVD methods: GCN [15], TMP [42], AME [18], SMS [26], and DMT [26].
General vulnerability detection methods [8, 17, 40] exist, but none of them yields satisfactory results in the field of SCVD. Therefore, in this paper, we have chosen only the best-performing one, namely LineVul [8], as the representative of general methods. The details of all baselines are in Section 6.

4.3 Metrics
Following prior work [18, 19, 26], we use three common evaluation metrics to assess the performance of the SCVD methods: precision, recall, and F1-score. Given true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN) of a classification model, precision, recall, and F1-score are defined below.
Precision: The proportion of correctly predicted positive instances among all instances predicted as positive. It is calculated as $P = \frac{TP}{TP + FP}$.
Recall: The proportion of correctly predicted positive instances among all actual positive instances. It is calculated as $R = \frac{TP}{TP + FN}$.
F1-score: The harmonic mean of precision and recall, which combines both metrics to provide a single measure of classification performance. It is calculated as $F1 = 2 \times \frac{precision \times recall}{precision + recall}$.

4.4 Implementation Details
For the hyperparameters of the experiment, the dimensionality of the word embeddings is set to 512. The Transformer model utilizes six multi-head attention layers with eight attention heads. During training, the learning rate is initialized to 1e-5 and is optimized using the AdamW optimizer [20]. The batch size is fixed at eight. The total losses in the CL stage include the contrastive loss Loss_CL and the MLM loss Loss_MLM, where we set their weights, lambda_CL and lambda_MLM, to 1.0 and 0.1 respectively. For all the baselines, we utilize the complete code from their provided open-source libraries and adhere to the configurations specified in their research. During the CL stage, we set the number of training epochs to 100. In the vulnerability detection stage, we perform training for 20 epochs to fine-tune the model and output the result of the last epoch. The aforementioned procedure is iterated five times, and the average value is chosen as the conclusive outcome of this study.
The experiments are conducted using hardware resources that include 2 Nvidia RTX 3090 GPUs with 48 GB video memory. These GPUs are utilized in parallel for training. For the software environment, we employ Ubuntu 20.04 LTS as the operating system.
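The three metrics defined in Section 4.3 above can be computed directly from confusion-matrix counts. The snippet below is a plain-Python illustration we add for clarity (the zero-division guards are our own addition, not part of the paper):

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall and F1 exactly as defined in Section 4.3."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

print(precision_recall_f1([1, 0, 1, 1], [1, 0, 0, 1]))  # (1.0, 0.666..., 0.8)
```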
Table 2: The results of the ablation test. Each cell lists R / P / F in %.

Method      RE                     TD                     IO                     Average
Clear-MVN   75.88 / 91.86 / 83.11  90.86 / 74.68 / 81.03  82.54 / 76.47 / 79.39  83.09 / 81.00 / 81.18
Clear-MVV   90.03 / 81.76 / 85.44  87.30 / 83.96 / 85.60  84.58 / 81.94 / 83.24  87.30 / 82.55 / 84.76
Clear-RMLM  91.30 / 90.23 / 90.77  93.04 / 90.67 / 91.85  84.44 / 88.37 / 86.36  89.59 / 89.76 / 89.66
Clear-RCL   63.25 / 81.77 / 71.32  68.48 / 81.91 / 71.76  71.62 / 82.97 / 76.17  67.78 / 82.22 / 73.08
Clear       96.43 / 96.81 / 96.62  98.41 / 94.30 / 96.31  91.48 / 89.81 / 90.64  95.44 / 93.64 / 94.52

5 RESULTS
5.1 RQ1: Effectiveness of the Clear
Table 1 shows the performance evaluation of 13 SCVD methods, focusing on three prevalent and critical vulnerabilities: RE, TD, and IO. We use bold font to represent the best result of all compared approaches for each type of vulnerability. It is important to note that some vulnerability detection tools (such as Osiris, Oyente, and Securify) fail to identify all three vulnerability types. Consequently, the table includes their results only for the corresponding vulnerabilities, and we do not report an average value for these tools.
The initial comparison involves Clear and seven rule-based techniques: sFuzz, Smartcheck, Osiris, Oyente, Mythril, Securify, and Slither. The performance of these methods is presented in lines 1 to 7 of the table. We observe that Clear exhibits a substantial performance improvement over existing rule-based vulnerability detection tools across all three vulnerability types. Specifically, compared with Slither, which is the state-of-the-art tool for RE and TD detection, Clear achieves an F1-score of 96.62% and 96.31% in RE and TD, respectively, representing a significant improvement of 30.62% and 41.22%. For IO detection, Clear outperforms Mythril, which is the state-of-the-art tool for IO detection, by 35.69% in terms of F1-score, achieving 90.64%.
Subsequently, we compare Clear with five state-of-the-art deep learning-based vulnerability detection methods, including GCN, TMP, AME, SMS, and DMT. The performance results of these methods are presented in lines 8 to 12 of Table 1. The experimental results show the effectiveness of Clear in detecting the three vulnerability types compared to existing deep learning-based approaches. The best-performing DMT achieves average recall, precision, and F1-score of 85.13%, 87.28%, and 86.14% for the three types of vulnerability. Our Clear outperforms DMT on all three metrics. The precision and recall of Clear achieve 93.64% and 95.44%, respectively, representing a significant improvement of 12.11% and 7.29% compared to the average values obtained by DMT, resulting in an overall F1-score of 94.52%.
Finally, in Table 1, line 13 reports the performance of the state-of-the-art general method, LineVul, which uses CodeBERT to detect vulnerabilities. The quantitative results suggest that simple migration of general methods to the SCVD field may not yield satisfactory results. Even LineVul, which performs the best among the general methods, only achieves an average precision, recall and F1-score of 82.92%, 71.56%, and 76.57% for the three vulnerability detection scenarios, respectively. In comparison, Clear outperforms LineVul across all three metrics, surpassing LineVul by more than 12.93%, 33.37%, and 23.44%, respectively.

Answer to RQ1: The proposed Clear outperforms the state-of-the-art methods across all metrics. On average, Clear achieves an F1-score of 94.52%, showcasing a 9.73% increase in F1-score compared to the existing best-performing method.

Figure 4: The feature distribution of smart contracts at different epochs during the CL stage. (Panels show RE, TD, and IO at Epoch-1, Epoch-10, and Epoch-100; legend: Non-vulnerable / Vulnerable.)

5.2 RQ2: Impact of Different Modules
To answer RQ2, we conduct comprehensive ablation tests to examine and understand the impact of different modules on Clear's overall effectiveness. In Section 3.1, we have described that Clear consists of three stages, and therefore, we have specifically designed distinct ablation tests for each of these stages. The results of all ablation tests are presented in Table 2, in which the metrics P, R, and F represent precision, recall, and F1-score, respectively.
To begin with, for stage 1, we focus our data sampling strategy on two specific types of contract relationships, namely "V-V" and "V-N". We selectively mask these relationships in order to evaluate the influence of the labels generated by these two relationships on the overall effectiveness of vulnerability detection. "Clear-MVV" indicates the masked "V-V" relationship and "Clear-MVN" indicates the masked "V-N". As shown in Table 2, Clear-MVN and Clear-MVV achieve 81.18% and 84.76% average F1-score, respectively. In comparison, Clear outperforms both of them and has an F1-score of 94.52%. That is to say, learning only one of the contract relations within the CL stage does not yield satisfactory results. It is only by simultaneously learning both relations that we observe a significant improvement in the performance of SCVD.
Moving on to stage 2, we intentionally remove the MLM module that is integrated into the CL stage. This allows us to analyze the overall effectiveness of the CL stage without the presence of the MLM module. This particular test is referred to as "Clear-RMLM". We observe that the MLM module has a substantial impact on the effectiveness of the CL module. Specifically, when the MLM module is removed, there is an average decrease in precision, recall, and F1-score of 4.15%, 6.13%, and 5.14% respectively. Therefore, we believe that the MLM module can enhance the performance of Clear and is an essential component.
Lastly, for stage 3, we remove the CL stage altogether and directly perform the vulnerability detection stage. This test, known as "Clear-RCL", enables us to evaluate the performance of vulnerability detection in the absence of the CL stage. In comparison to Clear-RCL, we observe a significant improvement in performance for all three types of vulnerability detection tasks with the addition of the CL module. The F1-score increased by 35.47% for RE, 34.21% for TD, and 19.00% for IO. This notable improvement can be attributed to the synergistic effects of the CL stage itself and our unique sampling strategy. Specifically, the CL module facilitates the convergence of dispersed vulnerability samples in the feature space, resulting in increased proximity among them. By utilizing our unique sampling strategy, we further reinforce the correlation among samples belonging to the same vulnerability category, thereby promoting their clustering behavior. This process enables the model to more effortlessly identify and discover potential SCVs, leading to a significant improvement in the performance of SCVD.
To substantiate our assertion, we thoroughly examine the derivation process of the sample distribution during the CL stage. In particular, we analyze the evolution of the output of the CL stage (denoted as v in Eq. 7) at each epoch and employ principal component analysis [29] to project each output onto a two-dimensional space. Subsequently, these outputs are visualized as scatter plots and displayed in Figure 4, where the horizontal and vertical axes represent linear combinations of the vectors v obtained through PCA. Each point denotes a contract sample, with purple indicating vulnerability samples and yellow representing non-vulnerability samples. The figure clearly depicts the progression of the smart contract sample distribution throughout the CL stage and yields the following findings. First, during the training process of the CL module, the samples of vulnerable contracts exhibit a tendency to cluster together, while being distinctly separated from non-vulnerability samples, indicating a clear distinction between the two groups. Second, this distribution enhances the ability to differentiate and detect SCVs. Clear exhibits a higher proficiency in recognizing this particular cluster and accurately classifying contracts within its proximity as vulnerable. This leads to an improved capability for identifying SCVs.

Answer to RQ2: The two types of relations in contracts are indispensable in the CL stage. The MLM module can enhance the performance of Clear. The CL module facilitates the aggregation of dispersed vulnerability samples in the feature space, leading to a significant performance improvement in the tasks of SCVD.

5.3 RQ3: Effectiveness of the CL Module
To further investigate the contribution of the CL module to other deep learning models in SCVD, we have selected a set of traditional deep learning models, including RNN, LSTM, and GRU, to replace the Transformer model used in Clear. These models, collectively referred to as "CL-Mode", have been specifically chosen because Clear is designed to process code sequences directly, while the deep learning-based methods in the baselines are constructed on graph structures. Therefore, our Clear is incompatible with these graph-based methods. Additionally, we present the performance results of the traditional deep learning models to facilitate comparison and analysis.
The statistical results of these experiments are presented in Table 3 and provide the following observations. All models exhibit notable enhancements in performance when embedded with the CL module. In terms of F1-score, CL-RNN, CL-LSTM, and CL-GRU improved 44.71% (from 48.25% to 69.83%), 40.52% (from 52.88% to 74.30%) and 40.54% (from 55.11% to 77.46%) over the original model, respectively, on the average value of the three vulnerabilities.

Answer to RQ3: The empirical evidence suggests the potential of combining traditional deep learning models with the CL module for SCVD, which facilitates the model in acquiring fine-grained feature information and enhances the performance of SCVD.

6 RELATED WORK
6.1 Smart Contract Vulnerability Detection
SCVD is an important research problem in blockchain security, and numerous scholarly works have been dedicated to exploring it. The initial approaches to detecting SCVs involved static analysis and dynamic execution techniques based on predefined rules or patterns. For example, Oyente [21] was one of the early SCV detection methods that utilized symbolic execution. It focused on detecting vulnerabilities by analyzing the contract's control flow graph based on symbolic execution. Securify [34] examined the contract's dependency graph and extracted detailed semantic information from the code to identify compliance and security vulnerabilities. Mythril [23] was a static analysis tool that employed concept analysis, taint analysis, and control flow verification to detect common SCVs. TeEther [16] analyzed the contract bytecode and searched for critical execution paths to identify SCVs.

Table 3: Results of RQ3. Each cell lists R / P / F in %.
Method    RE                     TD                     IO                     Average
RNN       33.60 / 37.78 / 35.56  46.40 / 69.05 / 55.50  49.46 / 58.72 / 53.70  43.15 / 55.18 / 48.25
CL-RNN    64.29 / 59.34 / 61.71  76.19 / 69.06 / 72.45  82.08 / 69.60 / 75.33  74.19 (+71.94%) / 66.00 (+19.61%) / 69.83 (+50.94%)
LSTM      35.71 / 42.66 / 38.87  61.11 / 63.64 / 62.35  55.56 / 59.39 / 57.41  50.79 / 55.23 / 52.88
CL-LSTM   74.31 / 53.26 / 62.05  83.33 / 75.00 / 78.95  86.64 / 77.67 / 81.91  81.43 (+60.33%) / 68.64 (+24.28%) / 74.30 (+40.51%)
GRU       50.20 / 35.88 / 41.85  61.11 / 63.64 / 62.35  67.38 / 55.95 / 61.14  59.56 / 51.82 / 55.11
CL-GRU    80.32 / 59.69 / 68.48  81.60 / 78.46 / 80.00  89.61 / 78.86 / 83.89  83.84 (+40.77%) / 72.34 (+39.60%) / 77.46 (+40.56%)

Slither [6] was a static analysis framework that converted smart contract source code into an intermediate representation called SlithIR. It utilized this representation to detect SCVs. Osiris [33] combined symbolic execution and taint analysis techniques to detect integer errors in smart contracts. SmartCheck [32], another static program analysis tool, converted Solidity source code into

6.2 General Vulnerability Detection
Moreover, we also investigate some traditional vulnerability detection techniques that are used to detect vulnerabilities in JAVA, C, and C++ programming languages. The Devign model, proposed by Zhou et al. [40], is a generalized graph neural network-based approach for detecting program vulnerabilities. Its effectiveness
XMLandcheckedforvulnerabilitiesbasedonpredefinedXPath wasvalidatedthroughexperimentsconductedonfourdifferent patterns.sFuzz[24]employedabranchdistance-drivenfuzzing large-scaleopen-sourceCprojects.Lietal.[17]introducedIVDe- techniquetoidentifyvulnerabilities.SMARTIAN[4]wasafuzzier, tect, an interpretable vulnerability detector that leverages deep which utilized lightweight dynamic data-flow analysis to guide learningtechniquestomodelprogramdependencygraphsforthe fuzzingbycollectingfeedbackbasedondataflow. purposeofdetectingvulnerabilities.Fuetal.[8]proposedLineVul, Withtheadvancementofdeeplearning,therehasbeenarisein aTransformer-basedapproachfordetectingvulnerabilitiesofthe researchapproachesthatharnessautomateddeeplearningmethods C/C++programatthelinelevel.Byutilizingpre-trainedmodelsto forsmartcontractvulnerabilitydetection.Forexample,SaferSC[31] learnfine-grainedcodesemanticinformation,LineVulhasproven wasthefirstvulnerabilitydetectionmethodtoutilizedeeplearn- tobethemosteffectiveandhighest-performingmethodavailable. ing.ItemployedaLongShort-TermMemory(LSTM)networkto Theaforementionedmethods,however,failtoyieldsatisfactory constructasequencemodelofEthereumopcode,providingacom- outcomesinthedomainofsmartcontractvulnerabilitydetection, prehensiverepresentationtodetectvulnerabilities.Morerecent evenwithLineVulbeingconsideredasthemosteffectiveapproach, deeplearningresearchinthisfieldemphasizestheuseofgraph thereexistsadiscernibleperformancedisparitywhencomparedto structures.DR-GCN[42]transformedsmartcontractsourcecode ourmethod. intoacontractgraphwithhighsemanticrepresentationandem- ployedaGraphConvolutionalNeuralNetwork(GCN)toconstruct 6.3 ContrastiveLearning avulnerabilitydetectionmodel.TMP[42]extendedtheapproach TheCLwasinitiallydevelopedasanunsupervisedlearningtech- ofDR-GCNbyconvertingcriticalfunctionsandvariablesintocore nique,aimingtolearnrepresentationsbyuncoveringunderlying nodeswithrichsemanticinformationwithinthecontractgraph.It similaritiesanddissimilaritiesamongsamples. alsoincorporatedtemporalinformationonedges.CGE[19]built Inthefieldofcomputervision,severalunsupervisedlearning uponTMPbyfurtherincorporatingexpertmodeinformation,in- methodshaveproposedCLtechniques.Notably,InstDisc[37]was tegratingthecontractgraphinformationwithexpertknowledge. anunsupervisedmethodwidelyusedincomputervisionforlearn- AME[18]aimedtocombinedeeplearningandexpertmodeinan ingtherepresentationofdata.Itsgoalwastomapsamplesfrom interpretablemanner,buildingupontheCGEapproach.DMT[26] thesameclasstosimilarrepresentationspaces,whilesamplesfrom proposedasingle-modalitystudentnetworkandacross-modality differentclassesweremappedtodifferentrepresentationspaces. 
mutuallearningframeworktoenhancesmartcontractvulnerabil- InvaSpread[39]proposedanend-to-endlearningmechanismthat itydetectiononbytecode.However,itisworthnotingthatallof couldperformpositiveandnegativesamplecomparisonswithin themethodsmentionedaboveprimarilyfocusondetectingvul- thesamemini-batch.MoCo[11]introducedacontrastivelearning nerabilitiesbylearningthesemanticknowledgeofcurrentinput methodbasedonadynamicdictionaryanddynamicnegativesam- contractsandignoringthecorrelationbetweencontracts.Indeed, plequeue,whichimprovedthequalityoffeaturerepresentationby theinter-contractcorrelationplaysacriticalroleinunderstand- constructingalargedynamicdictionarytoextendthepositivesam- ingtheoverallsecurityofsmartcontractecosystems.Ourmethod pleset.SimCLR[2]wasasimpleframeworkforcontrastlearning successfullyimprovestheperformanceofSCVDbyincorporating representations.SimCLRlearnedimagerepresentationbymaximiz- correlationinformation. ingthesimilaritybetweendifferentviewsofthesameimageand achievedsignificantperformanceimprovementsontaskssuchasICSE’24,April14–20,2024,Lisbon,Portugal YizhouChen,ZeyuSun,ZhihaoGong,andDanHao imageclassification,objectdetection,andsemanticsegmentation. securityhasbecomeevenmorecritical.However,theSCVshave SwAV[41]learnedimagerepresentationbyintroducingtheidea emergedasoneofthetopthreatstosecuretransactions.While ofclustering,assigningsamplestodifferentclusterclusters,and numerousmethodshavebeensuccessfulinmitigatingthisthreat, achievedimpressiveresultsontaskssuchasunsupervisedimage thediscriminativepowerofexistingmethodsonSCVsstillhasalot segmentation,objectdetection,andimageclassification.SimSiam ofroomforimprovement.Becausetheyfailtoexplorefine-grained [3]proposedasimpleframeworkforself-supervisedlearningto informationfromvulnerabilitylabelsandtakeintoaccountcorre- learntherepresentationofimagesorfeatures.Thecoreideawasto lationsamongcontracts.Toaddresstheseissues,weproposethe learntherepresentationoffeaturesbyminimizingtheEuclidean Clearmodel,whichleveragestheCLmodeltoeffectivelycapture distancebetweendifferentviewsofthesamesampleusinganau- inter-contractcorrelations.Byintroducingcorrelationlabels,the toencoder.Itsadvantageslayinitssimplicityandefficiency,without modelcanlearnfine-grainedcorrelationinformation.Tovalidate |
theneedtousecomplexcontrastlossfunctionsornegativesample theeffectivenessofourClear,weconductextensiveexperimentson miningstrategies. adatasetconsistingofover40,000real-worldsmartcontracts.These IntheNLPfield,ConSERT[27]utilizedvariousdataaugmenta- contractsareevaluatedagainststate-of-the-artdetectionmethods tiontechniquestoconstructpositivesamplepairs,suchascutoff, andcomparedtoClearforperformance.Theresultsdemonstrate shuffle,adversariallearning,anddropout.SimCSE[9]usedasim- thatourproposedClearoutperformsalldetectionmethods,thereby ple"dropouttwice"techniquetoconstructpositivesamplepairs improvingtheoveralleffectivenessofSCVD. forCL,achievinganewstate-of-the-artperformanceinunsuper- visedsemanticsimilaritytasks.ESimCSE[36]laterintroduceda ACKNOWLEDGMENTS momentumCLmethodtoconstructnegativesamplepairs.The ThisworkwassupportedbyNationalNaturalScienceFoundation R-DropmethodissimilartoSimCSE,applyingthe"dropouttwice" ofChinaunderGrantNos.62372005and62232001. techniquetosupervisedtasks. Notably,weemployCLforthefirsttimeintheSCVDdomain REFERENCES andutilizecorrelationlabelstoguidethetrainingoftheCLmodel, [1] SijieChen,HanningMi,JianPing,ZhengYan,ZeyuShen,XuezhiLiu,NingZhang, whichplayacrucialroleinfittingcorrelationfeaturesbythemodel QingXia,andChongqingKang.2022.Ablockchainconsensusmechanismthat andeffectivelyenhancetheperformanceofSCVD. usesProofofSolutiontooptimizeenergydispatchandtrading.NatureEnergy7, 6(2022),495–502. [2] TingChen,SimonKornblith,MohammadNorouzi,andGeoffreyHinton.2020.A 7 THREATSOFVALIDITY simpleframeworkforcontrastivelearningofvisualrepresentations.InInterna- tionalconferenceonmachinelearning.PMLR,1597–1607. Threatstoexternalvalidityarisefromthedatasetsandstudied [3] XinleiChenandKaimingHe.2021. Exploringsimplesiameserepresentation vulnerabilities.Tominimizetheformerthreat,weutilizethelargest learning.InProceedingsoftheIEEE/CVFconferenceoncomputervisionandpattern publiclyavailablevulnerabilitydatasetthatconsistsofsmartcon- recognition.15750–15758. [4] JaeseungChoi,DoyeonKim,SoominKim,GustavoGrieco,AlexGroce,and tractslabeledaseithervulnerableornon-vulnerable.Additionally, SangKilCha.2021. Smartian:Enhancingsmartcontractfuzzingwithstatic wefocusonevaluatingthestudiedvulnerabilitydetectionmeth- anddynamicdata-flowanalyses.In36thIEEE/ACMInternationalConferenceon AutomatedSoftwareEngineering.IEEE,227–239. odsonthethreemostsevereandcommonvulnerabilities,further [5] Ching-YaoChuang,JoshuaRobinson,Yen-ChenLin,AntonioTorralba,andSte- enhancingtheexternalvalidityofourresearch. fanieJegelka.2020.Debiasedcontrastivelearning.Advancesinneuralinformation Threatstointernalvaliditystemfromtheimplementationof processingsystems33(2020),8765–8775. [6] JosselinFeist,GustavoGrieco,andAlexGroce.2019. Slither:astaticanalysis Clearandthecomparedvulnerabilitydetectionmethods.Toaddress frameworkforsmartcontracts.InIEEE/ACM2ndInternationalWorkshopon thesethreats,weimplementClearusingthePyTorchframework EmergingTrendsinSoftwareEngineeringforBlockchain.IEEE,8–15. andleverageestablishedthird-partylibraries.Moreover,weutilize [7] ZhangyinFeng,DayaGuo,DuyuTang,NanDuan,XiaochengFeng,MingGong, LinjunShou,BingQin,TingLiu,DaxinJiang,etal.2020.Codebert:Apre-trained thereproduciblepackageofthecomparedmethods,ensuringafair modelforprogrammingandnaturallanguages.arXivpreprintarXiv:2002.08155 andstandardizedcomparison. (2020). 
[8] MichaelFuandChakkritTantithamthavorn.2022.Linevul:Atransformer-based Threatstoconstructvalidityarisefromthemetricsusedto line-levelvulnerabilityprediction.InProceedingsofthe19thInternationalConfer- measuretheperformanceofthestudiedvulnerabilitydetection enceonMiningSoftwareRepositories.608–620. methods.Tomitigatethesethreats,weemploywidelyaccepted [9] TianyuGao,XingchengYao,andDanqiChen.2021.Simcse:Simplecontrastive learningofsentenceembeddings.arXivpreprintarXiv:2104.08821(2021). evaluationmetrics,suchasprecision,recall,andF1-score.These [10] NevilleGrech,MichaelKong,AntonJurisevic,LexiBrent,BernhardScholz,and metricsprovideacomprehensiveassessmentoftheclassification YannisSmaragdakis.2018.Madmax:Survivingout-of-gasconditionsinethereum performance,ensuringtheconstructvalidityofourresearch.Be- smartcontracts. ProceedingsoftheACMonProgrammingLanguages2(2018), 1–27. sides,hardwaredevicessignificantlyinfluencethespeedofdetec- [11] KaimingHe,HaoqiFan,YuxinWu,SainingXie,andRossGirshick.2020.Mo- tionaswellasthreatenstructuralvalidity.Thethreatisaddressed mentumcontrastforunsupervisedvisualrepresentationlearning.InProceedings oftheIEEE/CVFconferenceoncomputervisionandpatternrecognition.9729–9738. byconductingallexperimentsinthispaperonthesamedevice, [12] AndreasIbingandAlexandraMai.2015.Afixed-pointalgorithmforautomated resultingintraditionaldetectiontoolstakingapproximately20to staticdetectionofinfiniteloops. IEEE16thInternationalSymposiumonHigh |
60secondstodetectasmartcontract,whiledeep-learning-based AssuranceSystemsEngineering(2015),44–51. [13] SukritKalra,SeepGoel,MohanDhawan,andSubodhSharma.2018. Zeus: detectionmethodsrequirelessthan1secondforthesametask. analyzingsafetyofsmartcontracts..InNdss.1–12. [14] PrannayKhosla,PiotrTeterwak,ChenWang,AaronSarna,YonglongTian,Phillip Isola,AaronMaschinot,CeLiu,andDilipKrishnan.2020.Supervisedcontrastive 8 CONCLUSION learning. Advancesinneuralinformationprocessingsystems33(2020),18661– 18673. Withtherapiddevelopmentofblockchaintechnology,transactions [15] ThomasNKipfandMaxWelling.2016.Semi-supervisedclassificationwithgraph basedonsmartcontractshavebecomeincreasinglyfrequentand convolutionalnetworks.arXivpreprintarXiv:1609.02907(2016).ImprovingSmartContractSecuritywithContrastiveLearning-basedVulnerabilityDetection ICSE’24,April14–20,2024,Lisbon,Portugal [16] JohannesKruppandChristianRossow.2018.{teEther}:Gnawingatethereum semanticsviagraphneuralnetworks.Advancesinneuralinformationprocessing toautomaticallyexploitsmartcontracts.In27thUSENIXSecuritySymposium. systems32(2019). 1317–1333. [41] ZhenglinZhu,YawangWang,XichuanZhou,LiuqingYang,GengMeng,and [17] YiLi,ShaohuaWang,andTienNNguyen.2021.Vulnerabilitydetectionwith ZeZhang.2020.SWAV:aweb-basedvisualizationbrowserforslidingwindow fine-grainedinterpretations.InProceedingsofthe29thACMJointMeetingon analysis.ScientificReports10,1(2020),149. EuropeanSoftwareEngineeringConferenceandSymposiumontheFoundationsof [42] YuanZhuang,ZhenguangLiu,PengQian,QiLiu,XiangWang,andQinmingHe. SoftwareEngineering.292–303. 2021. Smartcontractvulnerabilitydetectionusinggraphneuralnetworks.In [18] ZhenguangLiu,PengQian,XiangWang,LeiZhu,QinmingHe,andShoulingJi. ProceedingsoftheTwenty-NinthInternationalConferenceonInternationalJoint 2021.Smartcontractvulnerabilitydetection:frompureneuralnetworktointer- ConferencesonArtificialIntelligence.3283–3290. pretablegraphfeatureandexpertpatternfusion.arXivpreprintarXiv:2106.09282 (2021). [19] ZhenguangLiu,PengQian,XiaoyangWang,YuanZhuang,LinQiu,andXun Wang.2021. Combininggraphneuralnetworkswithexpertknowledgefor smartcontractvulnerabilitydetection.IEEETransactionsonKnowledgeandData Engineering(2021). [20] IlyaLoshchilovandFrankHutter.2018.Fixingweightdecayregularizationin adam.(2018). [21] LoiLuu,Duc-HiepChu,HrishiOlickel,PrateekSaxena,andAquinasHobor. 2016.Makingsmartcontractssmarter.InProceedingsofthe2016ACMSIGSAC conferenceoncomputerandcommunicationssecurity.254–269. [22] BenjaminMariano,YanjuChen,YuFeng,ShuvenduKLahiri,andIsilDillig. 2020.Demystifyingloopsinsmartcontracts.InProceedingsofthe35thIEEE/ACM InternationalConferenceonAutomatedSoftwareEngineering.262–274. [23] Bernhard Mueller. 2017. A framework for bug hunting on the ethereum blockchain.ConsenSys/mythril(2017). [24] TaiDNguyen,LongHPham,JunSun,YunLin,andQuangTranMinh.2020. sfuzz:Anefficientadaptivefuzzerforsoliditysmartcontracts.InProceedingsof theACM/IEEE42ndInternationalConferenceonSoftwareEngineering.778–788. [25] PurathaniPraitheeshan,LeiPan,JiangshanYu,JosephLiu,andRobinDoss.2019. Securityanalysismethodsonethereumsmartcontractvulnerabilities:asurvey. arXivpreprintarXiv:1908.08605(2019). [26] PengQian,ZhenguangLiu,YifangYin,andQinmingHe.2023.Cross-Modality MutualLearningforEnhancingSmartContractVulnerabilityDetectiononByte- code.InProceedingsoftheACMWebConference.2220–2229. [27] HefeiQiu,WeiDing,andPingChen.2021. ContrastiveLearningofSentence Representations.InProceedingsofthe18thInternationalConferenceonNatural LanguageProcessing. 
[28] MengRen,FuchenMa,ZijingYin,YingFu,HuizhongLi,WanliChang,andYu Jiang.2021. Makingsmartcontractdevelopmentmoresecureandeasier.In Proceedingsofthe29thACMJointMeetingonEuropeanSoftwareEngineering ConferenceandSymposiumontheFoundationsofSoftwareEngineering.1360– 1370. [29] SamRoweis.1997. EMalgorithmsforPCAandSPCA. Advancesinneural informationprocessingsystems10(1997). [30] AmritrajSingh,RezaMParizi,QiZhang,Kim-KwangRaymondChoo,andAli Dehghantanha.2020.Blockchainsmartcontractsformalization:Approachesand challengestoaddressvulnerabilities.Computers&Security88(2020),101654. [31] WesleyJoon-WieTann,XingJieHan,SouravSenGupta,andYew-SoonOng. 2018.Towardssafersmartcontracts:Asequencelearningapproachtodetecting securitythreats.arXivpreprintarXiv:1811.06632(2018). [32] SergeiTikhomirov,EkaterinaVoskresenskaya,IvanIvanitskiy,RamilTakhaviev, EvgenyMarchenko,andYaroslavAlexandrov.2018.Smartcheck:Staticanalysis ofethereumsmartcontracts.InProceedingsofthe1stinternationalworkshopon emergingtrendsinsoftwareengineeringforblockchain.9–16. [33] ChristofFerreiraTorres,JulianSchütte,andRaduState.2018.Osiris:Hunting forintegerbugsinethereumsmartcontracts.InProceedingsofthe34thannual computersecurityapplicationsconference.664–676. |
[34] PetarTsankov,AndreiDan,DanaDrachsler-Cohen,ArthurGervais,Florian Buenzli,andMartinVechev.2018.Securify:Practicalsecurityanalysisofsmart contracts.InProceedingsoftheACMSIGSACconferenceoncomputerandcommu- nicationssecurity.67–82. [35] ZhiyuanWan,XinXia,DavidLo,JiachiChen,XiapuLuo,andXiaohuYang. 2021.Smartcontractsecurity:Apractitioners’perspective.InIEEE/ACM43rd InternationalConferenceonSoftwareEngineering.IEEE,1410–1422. [36] XingWu,ChaochenGao,LiangjunZang,JizhongHan,ZhongyuanWang,and SonglinHu.2021.Esimcse:Enhancedsamplebuildingmethodforcontrastive learningofunsupervisedsentenceembedding.arXivpreprintarXiv:2109.04380 (2021). [37] ZhirongWu,YuanjunXiong,StellaXYu,andDahuaLin.2018.Unsupervised featurelearningvianon-parametricinstancediscrimination.InProceedingsof theIEEEconferenceoncomputervisionandpatternrecognition.3733–3742. [38] TeteXiao,XiaolongWang,AlexeiAEfros,andTrevorDarrell.2020.Whatshould notbecontrastiveincontrastivelearning.arXivpreprintarXiv:2008.05659(2020). [39] MangYe,XuZhang,PongCYuen,andShih-FuChang.2019. Unsupervised embeddinglearningviainvariantandspreadinginstancefeature.InProceedings oftheIEEE/CVFconferenceoncomputervisionandpatternrecognition.6210–6219. [40] YaqinZhou,ShangqingLiu,JingkaiSiow,XiaoningDu,andYangLiu.2019. Devign:Effectivevulnerabilityidentificationbylearningcomprehensiveprogram |
2404.18353 Do Neutral Prompts Produce Insecure Code? FormAI-v2 Dataset: Labelling Vulnerabilities in Code Generated by Large Language Models Norbert Tihanyi1*, Tamas Bisztray2, Mohamed Amine Ferrag1, Ridhi Jain1, Lucas C. Cordeiro3 1 Technology Innovation Institute (TII), Abu Dhabi, UAE. 2 University of Oslo, Oslo, Norway. 3University of Manchester, Manchester, UK. *Corresponding author(s). E-mail(s): norbert.tihanyi@tii.ae; Abstract This study provides a comparative analysis of state-of-the-art large language models (LLMs), analyzing how likely they generate vulnerabilities when writing simple C programs using a neutral zero-shot prompt. We address a significant gap in the literature concerning the security properties of code produced by thesemodelswithoutspecificdirectives.N.Tihanyietal.introducedtheFormAI dataset at PROMISE ’23, containing 112,000 GPT-3.5-generated C programs, with over 51.24% identified as vulnerable. We expand that work by introduc- ingtheFormAI-v2datasetcomprising265,000compilableCprogramsgenerated using various LLMs, including robust models such as Google’s GEMINI-pro, OpenAI’s GPT-4, and TII’s 180 billion-parameter Falcon, to Meta’s specialized 13billion-parameterCodeLLama2andvariousothercompactmodels.Eachpro- graminthedatasetislabelledbasedonthevulnerabilitiesdetectedinitssource codethroughformalverificationusingtheEfficientSMT-basedContext-Bounded Model Checker (ESBMC). This technique eliminates false positives by deliver- ingacounterexampleandensurestheexclusionoffalsenegativesbycompleting theverificationprocess.Ourstudyrevealsthatatleast63.47%ofthegenerated programsarevulnerable.Thedifferencesbetweenthemodelsareminor,asthey all display similar coding errors with slight variations. Our research highlights thatwhileLLMsofferpromisingcapabilitiesforcodegeneration,deployingtheir output in a production environment requires risk assessment and validation. Keywords:LargeLanguageModels,VulnerabilityClassification,FormalVerification, SoftwareSecurity,ArtificialIntelligence,Dataset 1 4202 rpA 92 ]RC.sc[ 1v35381.4042:viXra1 Introduction The introduction of Large Language Models (LLMs) is transforming software devel- opmentandprogramming[1–3].Everyday,developersandcomputerscientistsutilize various code creation and completion models to tackle different tasks [4, 5]. Research related to program synthesis using Generative Pre-trained Transformers (GPT) [6] is gaining significant traction, where initial studies indicate that the GPT models can generate syntactically correct yet vulnerable code [7]. A recent study conducted at Stanford University suggests that software engineers assisted by OpenAI’s codex- davinci-002model wereatahigherriskofintroducingsecurityflawsintotheircode[8]. As the usage of AI-based tools in coding continues to expand, understanding their potential to introduce software vulnerabilities becomes increasingly important. Given that LLMs are trained on data freely available on the internet, including potentially vulnerable code, there is a high risk that AI tools could replicate the same patterns. This leadsto the question: Is it safe to employ these models in real software projects? Asafirststeptowardsansweringthisquestion,N.Tihanyietal.publishedtheFor- mAI dataset [9] at the 19th International Conference on Predictive Models and Data Analytics in Software Engineering (PROMISE’23). This dataset stands out as the first and largest collection of AI-generated compilable C programs with vulnerability classification, featuring 112,000 samples. 
To guarantee the diversity of generated C codes, the authors developed a framework designed to produce a variety of programs thatcovermultiplecodingscenarios,efficientlyfacilitatingreal-worldbugs.Thestudy employed Bounded Model Checking (BMC), a technique within Formal Verification (FV),toevaluatethesecuritypropertiesofthedataset.Itrevealedthat51.24%ofthe C programs generated by GPT-3.5 were vulnerable. The original research discussed in [9] had three primary limitations that this comprehensive study aims to address: a. Firstly, it exclusively focused on OpenAI’s GPT-3.5 without evaluating other models. To bridge this gap, our study has been extended to encompass eight advanced LLMs, such as Google’s Gemini-pro [10], TII’s Falcon-180B [11], or Meta’s CodeLLama [12]. The models can be seen in Figure 1. b. Secondly, the initial findings (51.24%) may have been under-reported due to the inherent limitations of BMC, indicating that the actual percentage of vulnera- bilities could be higher. To address this issue, we transitioned our verification approach from bounded to unbounded verification, thereby enhancing the depth and accuracy of our security assessments [13–16]. c. Lastly,thecomplexityoftheprogramsgeneratedbyGPT-3.5wasnotthoroughly analyzed. To address these gaps, we have calculated the cyclomatic complexity CC [17] of each program generated by these LLMs to understand their intrinsic complexity better. This study answers the following research questions: 2FALCON-180B GPT-4 MISTRAL-7B Gemini Pro GPT-3.5 FormAI CodeLlama V2.0 13B Gemma 7b Llama2 13B Fig. 1: The eight LLM models used in this study for code security comparison. This is the first comprehensive study to compare vulnerabilities in code generated byvariousstate-of-the-artLLMs.Tosummarize,thispaperholdsthefollowingoriginal contributions: • We present the FormAI-v2 dataset, consisting of 265,000 compilable C pro- gram samples generated by eight different LLM models. Each C sample has been systematically labeled based on vulnerabilities identified through FV |
methods, particularly using the Efficient SMT-based Bounded Model Checker (ESBMC) [14–16] tool with an unbounded setting;

• A detailed study to determine which LLMs produce code with the highest and the lowest number of vulnerabilities;

• We offer a thorough analysis of the generated programs, including distributions of C keywords and the cyclomatic complexity. This analysis enables the comparison of LLMs in terms of coding complexity;

• We made the FormAI-v2 dataset available to the research community, including all generated C samples and classification results. The dataset can be accessed on our project website at https://github.com/FormAI-Dataset.

The remaining sections of this paper are organized as follows: Section 2 provides an in-depth discussion on the motivation behind our research, outlining the primary questions that drove our study. Section 3 presents a comprehensive overview of the related literature, highlighting significant previous studies and their findings. Section 4 introduces the concepts of formal verification, focusing on the ESBMC module. Section 5 details the methodology we adopted to develop and label our dataset. Section 6 thoroughly evaluates our research findings and discusses their implications. Section 7 explores the limitations and threats to the validity of our study and proposes potential future research directions that could expand upon our results. Finally, Section 8 concludes the paper by summarising our contributions and addressing the research questions posed in this study.

2 Motivation

LLMs are typically employed for simpler tasks, such as crafting a prime number generation function or composing a basic program to sort an array, rather than tackling extensive projects involving thousands of lines of code [8]. The latest generation of LLMs can easily solve these simple tasks without facing any challenges. So far, the main area of interest in LLM-based code generation has been correctness. Datasets such as HumanEval [18] provide programming challenges to assess the performance of models in correctly solving various problems. For example, GPT-4 achieves a 67% success rate in solving tasks compared to 48.1% for GPT-3.5 [19]. On the contrary, even basic programs can pose challenges for state-of-the-art LLM models regarding secure coding. To illustrate the issue, imagine a situation where a programmer asks GPT-4 the following: "Create a C program that prompts the user to input two numbers and then calculate their sum". The code generated by GPT-4 is presented on the left in Figure 2, while the output from the formal verification tool ESBMC 7.5 is shown on the right.

Fig. 2: Motivation example: GPT-4 produced code with security vulnerabilities, demonstrated through formal verification results.

This simple code contains three potential security vulnerabilities. It exhibits an integer overflow during the addition of the variables number1 and number2, as well as two buffer overflows through the scanf() functions that retrieve input from the user. In 32-bit computing architectures, integers are commonly stored as 4 bytes (32 bits), which results in a maximum integer value of 2147483647, equivalent to 2^31 - 1. If one attempts to add 2147483647 + 1 using this small program, the result will be incorrect due to integer overflow: the result will be -2147483648 instead of the expected 2147483648. The addition exceeds the maximum representable value for a signed 32-bit integer, causing the value to wrap around and become negative due to the two's complement representation.
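Figure 2 is rendered as an image; the listing below is our minimal sketch of the pattern it describes (variable names are assumed), annotated with the three defects the paper reports:

```c
#include <stdio.h>

int main() {
    int number1, number2;

    printf("Enter the first number: ");
    scanf("%d", &number1);   /* flagged as "buffer overflow on scanf" in the paper's ESBMC run */

    printf("Enter the second number: ");
    scanf("%d", &number2);   /* second buffer overflow on scanf */

    int sum = number1 + number2;   /* signed addition may exceed 2^31 - 1: arithmetic overflow on add */
    printf("Sum: %d\n", sum);
    return 0;
}
```

Feeding 2147483647 and 1 to this program triggers exactly the wrap-around described above, which is the kind of counterexample an SMT solver can construct.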
When GPT-4 is requested to write a secure version of this code using the following prompt: "Create a C program that prompts the user to input two numbers and then calculates their sum. Be careful and avoid security vulnerabilities.", it only attempts to handle non-integer inputs by adding the following code snippet (Fig. 3):

Fig. 3: Response to a generic prompt requesting a secure version.

Even after requesting a "secure" zero-shot prompt, all three original issues remain unresolved. Despite the significant advancements from GPT-3.5 (which exhibited the same issue [9]) to GPT-4, our motivational example indicates that GPT-4 continues to produce code with vulnerabilities. Even if specifically requested in the prompt to avoid integer overflow in the program, the issue persists (Fig. 4):

Fig. 4: Zero-shot prompt requesting a fix for integer overflow.

We want to emphasize that simply requesting a secure version is not an effective approach towards achieving secure code, for the following reason: code completion tools such as GitHub Copilot 1 or Amazon CodeWhisperer 2 suggest code snippets based on contextual analysis and training data, which have also been shown to produce vulnerable code [20]. In such scenarios, the ability to prompt is limited (it can be attempted through comments in the code). In addition, GitHub Copilot is powered by a variant of the GPT (Generative Pre-trained Transformer) model called Codex, which OpenAI developed. The underlying issue will remain if these models are not trained to produce secure code. Based on this observation, we aim to conduct a comprehensive study involving various state-of-the-art models to address our research questions.

1 https://github.com/features/copilot/
2 https://aws.amazon.com/codewhisperer/

3 Related Work

This section overviews automated vulnerability detection and notable existing datasets containing vulnerable code samples for various training and benchmarking purposes.

3.1 LLMs in Software Engineering

In software engineering (SE), it is essential to guarantee three main aspects of the code: correctness, safety, and security of the programs created. Functionally correct code should yield the expected outcomes for each input it processes. Safety in code means constructing fail-safe systems, protecting against accidental or unexpected
inputs that might produce logically correct but undesirable results. Software security involves fortifying the software against external threats and deliberate attacks [21]. In a comprehensive study, Anwar et al. [22] highlight important safety issues related to LLMs beyond SE, from the disruptive socioeconomic impacts and cybersecurity risks to ethical issues. Vassilka et al. [23] discuss the need for SE education to adapt to AI advancements and prepare future software engineers to effectively and ethically utilize these technologies in their careers. To assess correctness, datasets such as HumanEval [18] serve as a benchmark to measure the problem-solving abilities of AI models for problems related to lan- guagecomprehension,reasoning,algorithms,simplemathematics,coding,andlogical thinking. There are several other similar datasets, such as MBPP [24] to assess code synthesis capabilities on elementary Python challenges, or CodeXGLUE [25], to test code completion, translation, and understanding of different LLMs. Frameworks and techniques for turning prompts into executable code for SE are rapidly emerging, but the main focus is often functional correctness, omitting important security aspects [26–28]. There has been an arms race between researchers to excel in benchmarks using zero or few-shot frameworks [29, 30], multi-agent frameworks [31], fine-tuned models [32], and various other methods. As AI mod- els evolve, their problem-solving capabilities improve significantly. However, whether these advancements also enhance the safety and security properties of the code they generate remains largely unclear and under-researched. In [33], Lin et al. assessed different software process models to evaluate how these models affect code correctness (Pass@13). They also assessed the code quality of the AI-generatedcodebyrunningstaticcodecheckerstouncovercodesmells4.Thiswork hadaninterestingfinding:theproposedsoftwareprocessmodelsimprovedthequality of the generated code by significantly reducing code smells compared to what GPT- 3.5 outputs by itself. Code smells or bad coding practices will not outright introduce vulnerabilities. However, several small-scale studies point to the fact that LLMs neg- atively impact software development from a security perspective. In [34], the authors generated 21 small programs in five different languages: C, C++, Python, HTML, 3Thismetrichighlightsthemodel’sabilitytoproducecorrectandfunctionalcodeonitsfirsttrywithout anyrevisionsorcorrections. 4Codesmellsarepatternsincodethathintatpotentialproblems,makingmaintenanceharderbutnot necessarily causing immediate errors. They suggest areas where the code may need to be refactored for betterqualityandreliability. 6andJava.CombiningmanualverificationwithGPT-basedvulnerabilitydetection,the study found that only 5 of the 21 generated programs were initially secure. In [35], Pearce et al. conclude that the control group utilized GitHub’s Copilot to solve arithmetic operations accurately. This work highlights an important lesson: to accuratelymeasuretheroleofAItoolsincodegenerationorcompletion,itisessential to choose coding scenarios mirroring a diverse set of relevant real-world settings, thereby facilitating the occurrence of various vulnerabilities. This necessitates the creationofcodebasesreplicatingawiderangeofscenarios,whichisoneoftheprimary goalstheFormAIdatasetstrivestoachieve.ThesestudiesindicatethatAItools,and in particular ChatGPT, can produce code containing vulnerabilities as of today. Ma et al. 
[36] assessed the capabilities and limitations of ChatGPT for SE and provided initial insights into why the programs generated by language models are syntactically correct but potentially vulnerable. A study by Microsoft [37] found that GPTmodelsencounterdifficultieswhenaccuratelysolvingarithmeticoperations.This aligns with our findings in Section 2. In a comprehensive literature review, Hou et al. [38] examined LLMs’ application, effects, and possible limitations on SE. This study reveals that LLMs are extensively employed across software development, appearing in 229 papers for 24 distinct SE tasks, predominantly in code generation and program repair. It also identifies over 70 LLMs, classifying them into three architectural types: decoder-only, encoder-decoder, and encoder-only. Each architecture serves specific functions—encoder-only for in- depth understanding, encoder-decoder for combined understanding and generation, and decoder-only primarily for generation tasks. This work highlights an interesting gap: there are dozens of research papers aiming to perform vulnerability detection in source code using machine learning (ML) and LLMs [39–51], however, assessing software safety and security properties of LLM-generated code on a large-scale has not yet been performed apart from our original work [9] for C, and recently by [52] for PHP code. Both studies evaluated a single model in a zero-shot code generation scenario, while our current work also conducts a comparison of the performance of different models. In [53] Shumailov et al. highlighted a phenomenon known as “model collapse”. Their research demonstrated that integrating content generated by LLMs can lead to persistentflawsinsubsequentmodelswhenusingthegenerateddatafortraining.This hints that training ML algorithms only on purely AI-generated content is insufficient if one aims to prepare these models for detecting vulnerabilities in human-generated code. This is essentially due to using a dataset during the training phase, which is not diverse enough and misrepresents edge cases. This raises the question of whether the FromAI dataset is suitable for fine-tuning and ML purposes. It is important to notethattheAI-generatedcodeisjustonepartofthedataset.Mostimportantly,the |
vulnerabilitylabelingwasnotdonebyAIbutbytheESBMCformalverificationtool. This way, models trained on this dataset can essentially learn the skills of a formal verification tool (or at least try to achieve the best optimal outcomes). Theprogramsaregeneratedthroughadynamiczero-shotpromptingmethod,and the generated programs are not modified by any AI system afterward. While the primary goal of our paper is to investigate and compare the secure coding abilities of 7different LLM models, these conditions make the FormAI-v2 dataset suitable for ML purposes. On the other hand, AI models were trained on human-generated content; thus, the vulnerabilities produced have roots in incorrect code created by humans. Yet, as discussed in the next section, existing datasets notoriously include synthetic data(differentfromAI-generated),whichcanbeusefulforbenchmarkingvulnerability scanners but has questionable value for training purposes [54]. 3.2 Existing Databases for Vulnerable C Code We show how the FormAI-v2 dataset compares to seven widely studied datasets con- taining vulnerable code and the previous version of the dataset published in [9]. The examined datasets are: Big-Vul [55], Draper [56, 57], SARD [58], Juliet [59], Devign [60], REVEAL [61], DiverseVul [54], and FormAI-v1 [62]. Table 1 presents a comprehensive comparison of the datasets across various metrics. Some of this data is derived from review papers that evaluate these datasets [54, 63]. Table 1: Comparison of various datasets based on their labeling classifications. Datasets Diverse FormAI Big-Vul Draper SARD Juliet Devign REVEAL FormAI Vul v2 Specs Language C/C++ C/C++ Multi Multi C C/C++ C/C++ C C Syn+ Syn+ Source RW Syn RW RW RW AI AI RW RW Dataset size 189k 1,274k 101k 106k 28k 23k 379k 112k 150k Vul. 100% 5.62% 100% 100% 46.05% 9.85% 7.02% 51.24% 61% Snippets Multi. ✘ ✔ ✘ ✘ ✘ ✘ ✘ ✔ ✔ Vulns. Compilable ✘ ✘ ✔ ✔ ✘ ✘ ✘ ✔ ✔ Granularity Func Func Prog Prog Func Func Func Prog Prog CVE Class. Type CWE CWE CWE CVE CVE CWE CWE CWE CWE Avg. LOC. 30 29 114 125 112 32 44 79 82 Labelling P S B/S/M B M P P F F Method Legend: Multi:Multi-LanguageDataset,RW:RealWorld,Syn:Synthetic,AI:AI-generated, Func:Functionlevelgranularity,Prog:Programlevelgranularity, CVE:CommonVulnerabilitiesandExposures,CWE:CommonWeaknessEnumeration, P:GitHubCommitsPatchingaVulnerability,S:StaticAnalyzer, B:ByDesignVulnerable,F:FormalVerificationwithESBMC,M:ManualLabeling Big-Vul,Draper,Devign,REVEAL,andDiverseVulcomprisevulnerablereal-world functions from open-source applications. These five datasets do not include all the samples’ dependencies; therefore, they are non-compilable. SARD and Juliet contain synthetic, compilable programs. In their general composition, the programs contain a vulnerablefunction,itsequivalentpatchedfunction,andamainfunctioncallingthese functions.Alldatasetsindicatewhetheracodeisvulnerable,usingvariousvulnerabil- itylabelingmethodologiessuchasP,wherefunctionsareconsideredvulnerablebefore 8receiving GitHub commits that address detected vulnerabilities; M, which involves manual labeling; S, which uses static analyzers; and B, designated as by design vul- nerablewithouttheuseofavulnerabilityverificationtool.It’simportanttonotethat the size of these datasets can be misleading, as many include samples from languages other than the one primarily studied. For example, SARD includes not only C and C++ but also Java, PHP, and C#. Moreover, newly released sets often incorporate previous datasets or scrape the same GitHub repositories, making them redundant. 
For example, Draper contains C and C++ code from the SATE IV Juliet Test Suite, Debian Linux distribution, and public Git repositories. Since the open-source functionsfromDebianandGitHubwerenotlabeled,theauthorsusedasuiteofstatic analysis tools: Cppcheck and Flawfinder [56]. However, the paper does not mention if vulnerabilities were manually verified or if any confirmation has been performed to root out false positives. In [54], on top of creating DiverseVul, Chen et al. merged all datasets that were based on GitHub commits and removed duplicates, thus making the most comprehensive collection of GitHub commits containing vulnerable C and C++ code. 3.3 Vulnerability Scanning and Repair Software verification is crucial for ensuring software’s safety and security properties. It employs a variety of techniques, each with its strengths and limitations. These techniques include manual verification, static analysis, dynamic analysis, formal veri- fication, and increasingly, machine learning-based approaches such as those involving LLMs [36, 64–67]. Manual verification involves human-driven processes such as code reviews and manualtesting.Whilethesemethodseffectivelycatchcomplexerrorsthatautomated tools might miss, they are labor-intensive and not scalable to large codebases or frequent updates. Static analysis evaluates source code without executing it, using staticsymbolicexecution,dataflowanalysis,andcontrolflowanalysis.Stylechecking enforces coding standards for better readability and maintainability. These meth- ods collectively enhance software integrity. The drawbacks are that this method can miss vulnerabilities that manifest only during runtime interactions and often intro- duce false positive results. Dynamic analysis tests the software’s behavior during execution [68]. It includes fuzzing, automated testing, run-time verification, and pro- filing. This technique requires executable code and often significant setup to simulate different environments and may not cover all execution paths. |
FormalVerification(FV)usesmathematicalproofstoverifythecorrectnessofalgo- rithmsagainsttheirspecifications.Itisthemostrigorousformofsoftwareverification and is used in applications where reliability is critical, such as aerospace and medi- cal devices. However, FV can be time-consuming and requires specialized knowledge, limiting its widespread adoption [21]. Recent advancements include machine learning techniques, particularly LLMs, in various aspects of software verification [69]. LLMs can assist in automated code review by suggesting improvements, detecting vulnera- bilities, generating test cases, fixing bugs, and creating documentation. Despite their potential [70–76], LLMs, on their, own face limitations such as a lack of understand- ingofcodesemanticsanddifficultyinhandlinghighlydomain-specificknowledge[77], 9and they depend heavily on the quality and variety of the training data. Using LLMs as part of a framework to complement other techniques is, however, a promising area of research [7, 9, 78, 79]. An earlier work from 2022 examined the ability of various LLMs to fix vulnerabil- ities,wherethemodelsshowedpromisingresults,especiallywhencombined.Still,the authorsnotedthatsuchtoolsarenotreadytobeusedinaprogramrepairframework, wherefurtherresearchisnecessarytoincorporatebuglocalization.Theyfurtherhigh- lightedchallengesinthetool’sabilitytogeneratefunctionallycorrectcode[80].While LLMs struggle with detection by themselves, in [7], the authors demonstrated that GPT-3.5 could efficiently fix errors if the output of the ESBMC verifier is provided. Program repair is another emerging area where the application of LLMs is showing real promise, where in addition to fine-tuning strategies, the combination of LLMs withothertoolsappearstobeaneffectivemethod[41,81–97].In[98],theauthorscall forinnovationstoenhanceautomatedvulnerabilityrepair,particularlyfordeveloping more extensive training datasets to optimize LLMs. 3.4 Overview on Cyclomatic Complexity Diversity is vital in our dataset to ensure a comprehensive representation of real-life vulnerabilities, facilitating a fair comparison across different LLMs. We add a new metric in FormAI-v2 to measure code complexity: cyclomatic complexity (CC). This metric, introduced by McCabe [17], quantifies the maintainability and complexity of programs by measuring the number of linearly independent paths within a program’s control-flow graph (CFG) [99]. It’s calculated as V(G) for a graph G with n vertices, e edges, and p connected components. V(G)=e−n+2p. This metric, derived from the CFG – a directed graph of basic program blocks linkedbycontrolpaths–isanumericalindicatorofaprogram’sstructuralcomplexity. While complexity in functions increases due to multiple decision points, such as if- statementsandloopsimplyhighertestingandmaintenancedemands,criticsnotethat CC might not fully capture complexities in modern software that extensively uses polymorphism and multi-threading. As will be discussed in Section 6, our findings suggest that cyclomatic complexity (CC) can indicate whether a model is capable of generating well-maintained, well-structured code or if it tends to produce poorly designed programs with high cyclomatic complexity, where higher numbers imply worseoutcomesandlowernumbersarepreferable.Cyclomaticcomplexitycanprovide insightsintothepotentialdifficultyoftesting,maintaining,ortroubleshootingapiece of code and an indication of the likelihood of the code producing errors. 
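As a small worked example (ours, not from the paper), counting decision points in a C function gives its V(G) directly:

```c
#include <stdio.h>

/* For a single function (one connected component, p = 1),
 * V(G) = e - n + 2 reduces to: number of decision points + 1. */
int classify(int x) {
    if (x < 0) {                      /* decision point 1 */
        return -1;
    }
    for (int i = 0; i < x; i++) {     /* decision point 2 */
        if (i % 2 == 0) {             /* decision point 3 */
            printf("%d\n", i);
        }
    }
    return 1;
}
/* Three decision points, so V(G) = 4 for this function. */
```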
Additionally, incorporating CC as a new feature in the dataset could enhance classification accuracy when used during the machine learning training phase. The next section will offer necessary insights into formal verification to clarify the methodology employed in developing the dataset.

4 Formal Verification (FV) and Bounded Model Checking (BMC)

As manually labelling the entire dataset is unfeasible for such a large corpus of data, we employ a Formal Verification (FV) method, called Bounded Model Checking (BMC), to accurately detect vulnerabilities. In contrast to traditional static analysis tools, which often yield a high incidence of false positives due to reliance on pattern recognition without mathematical grounding [100], BMC offers rigorous validation. The Efficient SMT-based Bounded Model Checker (ESBMC) [14] effectively minimizes false positives and negatives. This approach ensures a fair comparison between programs produced by different LLMs.

4.1 State Transition System

A state transition system M = (S, T, S_0) represents an abstract machine consisting of a collection of states S, where S_0 ⊆ S indicates the initial states, and T ⊆ S × S specifies the transition relation, illustrating the potential state transitions within the system. Every state s ∈ S is characterized by the value of the program counter (pc) and the values of all program variables. The initial state s_0 sets the program's starting location. Transitions between states, denoted as T = (s_i, s_{i+1}) ∈ T between any two states s_i and s_{i+1}, are associated with a logical formula T(s_i, s_{i+1}) that describes the constraints on the program counter and program variables relevant to that transition.

4.2 Bounded Model Checking

BMC is employed in FV to ascertain the correctness of a system up to a finite number of steps. This approach models the system as a finite state transition system and methodically examines its state space to a predefined depth. Recent BMC modules are capable of processing a variety of programming languages such as C, C++, Java, or Kotlin [100–106]. The process starts with the program code, from which a CFG is derived. In this CFG, nodes represent deterministic or nondeterministic assignments or conditional statements, while edges indicate potential changes in the program's control flow. Essentially, each node is a block that encapsulates a set of instructions with a unique entry and exit point, and edges indicate potential transitions to other blocks. The CFG is then converted into Static Single Assignment (SSA) form and further into a State Transition System (STS), which a Satisfiability Modulo Theories (SMT) solver can interpret. The SMT solver checks if a given formula, representing the program's correctness within a bound k, is satisfiable, indicating the existence of a counterexample to the properties being verified. If no errors are found and the formula is unsatisfiable within the bound k, it suggests the program has no vulnerabilities within that limit. Thus, if the solver concludes satisfiability within a bound ≤ k, it confirms the presence of a vulnerability through a counterexample.

Consider a program P under verification modeled as a finite STS, denoted by the triplet ST = (S, R, I), where S represents the set of states, R ⊆ S × S represents the set of transitions, and I ⊆ S, including elements such as s_n, ..., s_m, marks the initial state set. In a state transition system, a state denoted as s ∈ S consists of the program counter value, referred to as pc, and the values of all program variables. The initial state s_0 specifies the initial program location on the CFG. Every transition T = (s_i, s_{i+1}) ∈ R, connecting two states s_i and s_{i+1}, correlates with a logical expression T(s_i, s_{i+1}) that constrains the program counter (pc) and variable values pertinent to the transition.

In the context of BMC, the properties under examination are defined as follows: ϕ(s) represents a logical formula reflecting states that fulfill a safety or security criterion, whereas ψ(s) encodes a logical statement representing states that meet the completeness threshold, synonymous with program termination. Notably, ψ(s) incorporates loop unwinding to avoid surpassing the program's maximum loop iterations. Termination and error conditions are mutually exclusive, rendering ϕ(s) ∧ ψ(s) inherently unsatisfiable. If T(s_i, s_{i+1}) ∨ ϕ(s) is unsatisfiable, state s is considered a deadlock state.

4.2.1 The Bounded Model Checking Problem

Based on this information, we can define the bounded model checking problem as BMC_Φ, which involves creating a logical statement. The truth of this statement determines if the program P has a counterexample with a maximum length of k. The formula can only be satisfied if a counterexample fitting within the predetermined length restriction is present, i.e.:

BMC_Φ(k) = I(s_0) ∧ ⋀_{i=1}^{k-1} T(s_i, s_{i+1}) ∧ ⋁_{i=1}^{k} ¬ϕ(s_i).   (1)

Herein, I denotes the initial state set of ST, and T(s_i, s_{i+1}) embodies the transition relation within ST between consecutive time steps i and i+1. Thus, the logical expression I(s_0) ∧ ⋀_{i=1}^{k-1} T(s_i, s_{i+1}) depicts the execution pathways of ST spanning a length k, and BMC_Φ(k) can be satisfied if and only if for some i ≤ k there exists a reachable state at time step i in which ϕ is violated. If BMC_Φ(k) is satisfied, it implies a violation of ϕ, permitting an SMT solver to deduce a satisfying assignment from which the program variables' values can be derived to assemble a counterexample. By definition, a counterexample, or trace, for a violated property ϕ is a finite sequence of states s_0, ..., s_k, where s_0, ..., s_k ∈ S and T(s_i, s_{i+1}) holds for 0 ≤ i < k.
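To make the role of the bound concrete, consider this small illustrative C program (ours, not from the dataset), where the property ϕ is expressed as an assertion and is only violated after two loop unwindings:

```c
#include <assert.h>

int main() {
    unsigned int x = 0;
    while (x < 4) {   /* BMC unrolls this loop up to k times */
        x += 2;       /* reachable states: x = 2 after one unwinding, x = 4 after two */
    }
    assert(x != 4);   /* the property phi; first violated at depth k = 2 */
    return 0;
}
```

With k = 1 the formula BMC_Φ(1) is unsatisfiable and the bug is missed; with k ≥ 2 the solver returns a satisfying assignment (x = 4), i.e., a counterexample trace. This is precisely why the unbounded, incremental setting described in Section 4.3.1 matters.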
These counterexamples hold significant importance for us, as we explicitly seek out these violations to compare and determine which code generated by LLMs is "more secure". Fewer violated properties indicate that the LLM can produce more secure code. In this context, it is important to note the influence of cyclomatic complexity. Merely experiencing fewer errors with an LLM does not necessarily indicate superiority; it could simply be generating less intricate programs. Therefore, assessing both property violations and cyclomatic complexity concurrently offers a more comprehensive understanding of an LLM's proficiency in secure coding.

If Equation (1) is unsatisfiable, it implies that no error state is reachable within k steps or fewer, hence no software vulnerability exists within the bound k. By searching for counterexamples within this bound, we can establish, based on mathematical proofs, whether a counterexample exists and whether our program P contains a security vulnerability. This method detects security concerns such as buffer overflows, division by zero, or null dereference failures. It is worth noting that if a program is classified as vulnerable, this assessment relies on counterexamples, effectively eliminating the possibility of false positives. Conversely, in situations where no counterexample exists, we can confidently assert that the program is free from vulnerabilities up to the bound k, ensuring the absence of false negatives. By adopting this strategy, we aim to classify each program by detecting violated properties in the generated code. While this method offers greater precision and improved outcomes compared to standard static analysis tools, it is also more time-consuming, demands significant resources, and is limited to identifying only a predefined set of vulnerabilities.

4.3 Efficient SMT-based Context-Bounded Model Checker

Annually, the International Competition on Software Verification, known as SV-COMP, challenges various programs to detect bugs and ensure software safety.
ESBMCexcelsinthiscompetitionbysolvingthehighestnumberofverificationtasks within a strict 10-30 second time limit per program, as evidenced in SV-COMP 2023 [107]. Given its performance, ESBMC was selected as our primary BMC tool. As a robust, open-source model checker for C/C++, Kotlin, and Solidity programs, ESBMCaddressesawiderangeofsafetypropertiesandprogramassertions,including out-of-bounds array access, illegal pointer dereference, integer overflows, and mem- ory leaks. It employs advanced verification techniques such as incremental BMC and k-induction, supported by state-of-the-art SMT and Constraint Programming (CP) solvers. ESBMC’s effectiveness in falsification and bug-finding is underscored by its multiple accolades in SV-COMP, including 6 gold, 4 silver, and 10 bronze medals. 4.3.1 Unbounded model checking using ESBMC Intheoriginalworkof[9],theboundedmodelchecking(BMC)problemwasaddressed with a bound set to k =1. Although we have observed that most vulnerabilities can bedetectedwiththesesettingsempirically,thenatureofBMCcansometimesleadto erroneousconclusions.Forexample,ifapropertyviolationoccursatlevelk =2,then BMC (1) will fail to detect the bug and report the verification as successful, giv- Φ ing a false impression. As a result, in the FormAI-v1 dataset, numerous samples were previously classified as ”NON-VULNERABLE up to bound k”. We have transitioned from bounded to unbounded model checking to capture more vulnerabilities or prove theirabsenceforeachsample.Thismeansthattheprogramisincrementallyunwound until a bug is found or until the completeness threshold is reached, i.e., states corre- sponding to the program terminating. This incremental BMC approach ensures that smaller problems are solved sequentially instead of guessing an upper bound for the verification. Applying these settings, we have successfully identified more vulnerabil- ities or prove that no vulnerability exists. Consequently, if the verification process is completed successfully, we can conclude that the program has no violated properties. While this approach requires significantly more computational power, it has proven effective in revealing a greater number of vulnerabilities or proving their absence, as we will demonstrate in Section 6. 135 Methodology and Dataset Creation The FormAI-v2 dataset consists of two main components: the AI-generated C pro- grams and their vulnerability labeling. In the data generation phase, we have created 265,000 samples. This paper implements a similar methodology introduced in the original work [9]. The main difference is that we employ eight different LLMs rather thansolelyGPT-3.5.Moreover,asdiscussedintheprevioussection,wehaveadopted a more expansive unbounded verification model. This section covers the dataset gen- eration methodology and the specific prompts for producing the C code samples. The overview of the generation and vulnerability labeling mechanism is depicted in Figure 5. Fig. 5: FormAI-v2 dataset generation Framework using different LLMs. 145.1 Code Generation During the creation process, special attention was given to ensure the diversity of the FormAI-v2 dataset, which contains 265,000 compilable C samples. Using a basic promptlike”generateaCprogram” oftenresultsinsimilaroutputs,suchasprograms that add two numbers or perform simple string manipulations, which do not meet our objectives. We need to systematically generate a comprehensive and varied col- lection of small programs. 
To meet this goal, we have developed a prompting method consisting of a dynamic and a static part. The static component remains consistent and unchanged, while the dynamic portion of the prompt undergoes continuous variation. An example of how a single prompt looks is shown in Figure 6.

Fig. 6: Dynamic code generation prompt.

The dynamic parts of the prompt, highlighted as [Type] and [Style], represent distinct categories within the prompt, each encompassing different elements. Every API call randomly selects a Type category from a set of 200 elements. This category contains topics such as Wi-Fi Signal Strength Analyzer, QR code reader, Image Steganography, Pixel Art Generator, Scientific Calculator Implementation, etc. Similarly, a coding Style is chosen from a set of 100 elements during each query. This helps minimize repetition, as coding styles such as "excited", "relaxed", or "mathematical" are randomly combined with a Type category. By employing this method, we can generate 200 × 100 = 20,000 distinct combinations. As demonstrated by insights from [80, 108], there is a need for a code base that supports diverse settings while ensuring tasks remain concise to fit within the token constraints of large language models (LLMs). With this consideration, we designed tasks in the Type category, selecting prompts that LLMs can efficiently process. For instance, complex prompts like "Create a CRUD application using React for the front-end, Node.js with Express for the back-end, and MongoDB for the database" must be broken down into smaller, manageable tasks. Furthermore, tasks with different styles, such as "File handling" with a "thriller" versus a "happy" style, lead to different representations in the vector space upon tokenization. Despite potential compatibility issues between certain Type-Style combinations, encouraging LLMs to code in varied styles has generally enhanced the diversity and distinctiveness of responses to identical tasks.

Decreasing the number of unsuccessful queries by refining the prompt is important from an efficiency perspective. We have established five instructions in each prompt to minimize the error within the generated code. These instructions, along with their corresponding explanations, are the following:

1. Minimum 50 lines: This encourages the LLM to avoid the generation of overly
simplistic code with only a few lines (which occasionally still happens); 2. Be creative!: The purpose of this instruction is to generate a more diverse dataset; 3. Do not say I am sorry: This instruction aims to circumvent objections and responses such as “As an AI model, I cannot generate code”, and similar statements. 4. Make sure the program compiles: This instruction encourages the model to include header files and create a complete and compilable program. 5. Generate a code snippet that starts with ‘‘‘c: Enable easy extraction of the C code from the response. Once a C code is generated, the GNU C compiler5 is employed to verify whether the corresponding code is compilable. During the code generation process, we ensure that the FormAI-v2 dataset exclusively consists of compilable code while excluding anyothercodethatdoesnotmeetthiscriterion.Differentmodelscangeneratevarying percentages of compilable code depending on their parameter size. Models like GPT- 4, GEMINi-pro, or FALCON-180B can achieve compilation rates higher than 90%, whereas smaller models with 7B parameters typically produce C code with a com- pilability rate between 55-70%. The primary reason for having non-compilable code was due to the absence of necessary headers, such as math.h, ctype.h, or stdlib.h. The generation of 265,000 code samples was completed within 48 hours. As the cost of generation associated with different models can significantly vary, we did not gen- erate the same number of samples from each model. For example, GPT-4 API calls can be up to 60 times as expensive as GPT-3.5. Table 2 shows how many samples we acquired from each LLM. Table 2: Content of the FormAI v2.0 dataset. LLM Model Company Size License Sample Size GPT-4 OpenAI N/A Proprietary 1000 Llama2-13B Meta 13B Open 5000 Mistral-7B Mistral AI 7B Apache 2.0 10000 CodeLlama-13B Meta 13B Proprietary 12000 Gemini 1.0 Pro Google N/A Proprietary 40000 Gemma-7b Google 7B Gemma-terms-of-use 47000 Falcon-180B TII 180B Apache 2.0 72000 GPT-3.5 OpenAI 175B Proprietary 78000 5https://gcc.gnu.org 165.2 Vulnerability Classification Following the code generation, we executed ESBMC on each file to classify them. Let us denote the set of all the generated C samples by Σ, such that Σ = {c ,c ,...,c },whereeachc representsanindividualsample.AsshowninTable 1 2 265,000 i 3, we can group all the samples into four distinct categories. These categories are mutuallyexclusive,meaningasinglesamplecannotbelongtomorethanonecategory. Table 3: Summary of Vulnerabilities with Detailed Descriptions. Category Description VS: Verification Success The set of samples for which the verification process completed successfullywithnovulnerabilitiesdetected. VU: Verification Unknown Thesetofsamplesforwhichtheverificationprocesswasnotcom- (Timeout) pleted within the provided time frame; no vulnerabilities found beforetimeout. VF: Verification Failed Thesetofsamplesforwhichtheverificationstatusfailed,vulner- abilitiesdetectedbyESBMCbasedoncounterexamples. ER: Error Thesetofsamplesforwhichtheverificationstatusresultedinan error. Asignificantchangehasbeenmadecomparedtotheclassificationin[9].Theprevious paper only included vulnerabilities if a program’s verification process was completely finished.OurnewapproachincludeseveryinstancewhereESBMCfoundacounterex- ample. 
This decision was motivated by identifying as many vulnerabilities as possible andprovidingthemostaccurateratioofvulnerabletonon-vulnerableprograms.Dis- missing these intermediate detection results could lead to a misleading assessment, potentially underestimating the number of vulnerabilities produced by larger mod- els. Thus, including all counterexamples ensures a more accurate representation of the true vulnerability landscape across different LLM outputs. For this reason, the category “TIMEOUT” (TO) has been renamed to VU: Verification Unknown. The category labeled as “ERROR” (ER) encompasses all instances where the verification process faced errors or crashes in the core ESBMC module, GOTO converter, SMT solver, or the clang compiler module. These samples are omitted from the classifica- tion because they cannot be handled using the latest ESBMC module. “Verification failed” (VF) represents the main focus of our interest. This set of samples, where the verificationstatuswasunsuccessful,hadvulnerabilitiesidentifiedbyESBMCthrough counterexamples. We divided VF into 6 main categories, and 19 subcategories. Note that in the JSON file that contains the vulnerability classification, the exact type of vulnerability is always indicated as provided by the ESBMC module, as shown in AppendixFig.1.Thecategoriesandsubcategories,alongwiththeprecisedistribution of vulnerabilities across the entire dataset, are detailed in Section 6, Table 7. 175.2.1 ESBMC Parameter Selection Model-checking tools like ESBMC offer adjustable parameters that can optimize per- formance based on program complexity. Default parameters such as those used in competitions like SV-COMP, may not be suitable for all software types, potentially leading to fewer detected vulnerabilities. We conducted a detailed analysis to under- stand how various settings impact verification outcomes, particularly for programs generated by LLMs. The different switches are explained in Appendix Table 1. Givenourexpectationthatitgeneratesmorecomplexcode,wehaveselected1,000 samplesgeneratedbyGPT-4,servingasthebasisforselectingtheESBMCparameters fortheentiredataset.Forthesesamples,wetestedmultipleparameterconfigurations ofESBMCtodeterminewhichsettingsyieldedthebestresultsregardingruntimeeffi- |
ciency and vulnerability detection. We focus on two objectives: firstly, to minimize verification unknown outcomes (VU) through the t parameter, preferably completing the verification process; and secondly, to identify as many vulnerabilities as possible. Table 4 illustrates the verification outcomes of the 1,000 samples, demonstrating how various combinations of unwind (u) and time (t), alongside the utilization of k-induction, incremental-bmc, or falsification techniques, impact the results. Our analysis revealed that merely increasing the unwind parameter u while keeping a short timeout (e.g., 1 second) often leads to timeouts. For example, setting the unwind to 10 with a 1-second timeout resulted in most samples (684) falling into VU. This limitation reflects the capability of the underlying architecture; in our case, the AMD Ryzen Threadripper PRO 3995WX with 32 CPU cores. A more powerful system could potentially handle more computations within the same timeframe.

A larger unwind parameter increases the detection of vulnerabilities in loops if adequate processing time is given. K-induction improves the detection rate for larger unwind settings, but the default k-bound is 1 unless adjusted. After extensive testing, we extended the timeout to 500 seconds and allowed unlimited k-steps, transitioning from bounded to unbounded model checking. This adjustment ensures that if the verification is completed within this time frame, we either identify a counterexample or confirm the absence of the examined vulnerabilities. This represents a significant change from our previous methodology in creating the FormAI-v1 dataset. We used the following ESBMC switches during our experiments, as depicted in Figure 7. Note that --overflow, --memory-leak-check, and --multi-property are the same for all the Table 4 experiments and were not changed.

Fig. 7: ESBMC command employed to verify each sample in the dataset.

Table 4: Results of classification for varying parameters based on a dataset of 1000 samples generated by GPT-4.

u | time | k-ind | bmc | fls | Running time (m:s) | |ϕ| | VS | VF | VU | ER
✗ | 300 | ✓ | ✗ | ✗ | 1698:53 | 1678 | 471 | 491 | 25 | 13
2 | 1000 | ✗ | ✗ | ✗ | 1418:03 | 1638 | 505 | 407 | 70 | 18
3 | 1000 | ✗ | ✗ | ✗ | 2100:36 | 1620 | 495 | 390 | 94 | 21
✗ | 100 | ✓ | ✗ | ✗ | 653:05 | 1583 | 486 | 468 | 33 | 13
2 | 100 | ✗ | ✗ | ✗ | 224:25 | 1580 | 496 | 393 | 96 | 15
1 | 1000 | ✗ | ✗ | ✗ | 419:45 | 1529 | 538 | 428 | 21 | 13
✗ | 30 | ✓ | ✗ | ✗ | 216:28 | 1513 | 494 | 448 | 45 | 13
✗ | 30 | ✗ | ✓ | ✗ | 216:20 | 1511 | 494 | 448 | 45 | 13
✗ | 30 | ✗ | ✗ | ✓ | 232:36 | 1511 | 494 | 448 | 45 | 13
2 | 30 | ✗ | ✗ | ✗ | 99:05 | 1500 | 486 | 371 | 129 | 14
1 | 100 | ✗ | ✗ | ✗ | 79:09 | 1465 | 536 | 421 | 30 | 13
✗ | 10 | ✓ | ✗ | ✗ | 84:11 | 1432 | 500 | 430 | 57 | 13
3 | 100 | ✗ | ✗ | ✗ | 344:01 | 1408 | 478 | 350 | 158 | 14
1 | 10 | ✗ | ✗ | ✗ | 21:47 | 1351 | 527 | 399 | 61 | 13
2 | 10 | ✗ | ✗ | ✗ | 47:48 | 1272 | 469 | 330 | 187 | 14
3 | 10 | ✗ | ✗ | ✗ | 62:14 | 951 | 433 | 272 | 281 | 14
✗ | 1 | ✓ | ✗ | ✗ | 13:23 | 941 | 474 | 336 | 177 | 13
✗ | 1 | ✗ | ✗ | ✓ | 13:39 | 938 | 475 | 335 | 177 | 13
✗ | 1 | ✗ | ✓ | ✗ | 13:34 | 936 | 476 | 334 | 177 | 13
1 | 1 | ✗ | ✗ | ✗ | 7:41 | 911 | 487 | 323 | 177 | 13
2 | 1 | ✗ | ✗ | ✗ | 10:30 | 559 | 404 | 205 | 377 | 14
10 | 1 | ✗ | ✗ | ✗ | 14:14 | 158 | 224 | 79 | 684 | 13
✗ | 10 | ✗ | ✗ | ✗ | 152:41 | 69 | 75 | 25 | 887 | 13

Legend: ✓: Enabled; ✗: Not set; |ϕ|: Number of Vulnerabilities; VS: Verification Success; VF: Verification Failed (vulnerabilities detected); VU: Verification Unknown (Timeout); ER: Error (verification process resulted in an error); k-ind: k-induction; bmc: incremental-bmc; fls: falsification technique; u: unwind.

Using these parameters on our 1000-sample set, 416 files were deemed non-vulnerable, while 519 files were vulnerable. Among these 519 files, a total of 2116 unique vulnerabilities were detected.
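Since Figure 7 is an image, the exact command line is not reproduced here; a hypothetical per-sample driver, assembled only from the switches named in the text (--overflow, --memory-leak-check, --multi-property) plus assumed incremental and timeout flags, might look like this sketch:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical reproduction of the per-sample verification step.
 * The first three flags are named in the text; the incremental-bmc and
 * 500-second timeout switches are our assumption of how the command in
 * Fig. 7 was assembled, and the exact spelling there may differ. */
int verify_sample(const char *c_file) {
    char cmd[512];
    snprintf(cmd, sizeof cmd,
             "esbmc %s --overflow --memory-leak-check --multi-property "
             "--incremental-bmc --timeout 500s",
             c_file);
    return system(cmd);   /* a non-zero status typically signals a failed verification */
}

int main(void) {
    return verify_sample("sample_000001.c");
}
```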
Considering the classification of 256,000 programs, the worst-case scenario is that every program from FormAI-v2 would utilize its allocated time, resulting in 500 seconds dedicated to verifying each sample. Using 32 CPU threads, the entire verification process on our experimental setup would take approximately 46,3 days in this worst-case scenario, calculated as 256,000×500/60/60/24/32. It is important to notice that while the best-performing setting from Table 4 had amuchlowertimesettingthanthesecondbest,morefilesutilizedthemaximumtime frame, thus making the verification process longer. 6 Discussion The following section summarizes our main results. First, we examine statistics on theentiredatasetregardingoverallverificationresultsandvulnerabilitytypes.Thisis followedbyevaluatingeachLLMandcomparingtheircyclomaticcomplexity,keyword 19frequency and secure coding capabilities. In the original FormAI dataset, we only created 112.000 compilable C samples using GPT-3.5. Furthermore, the complexity of each program was not measured. This research closes this gap by comparing eight state-of-the-art LLMs and providing a vulnerability-labelled dataset to the research community, comprising 256.000 C programs. 6.1 Evaluation of the FormAI-v2 Dataset Wehaveexaminedover21,994,613linesofCcode,withanaverageof83.70linesper sample. In total, we performed the verification process on 256,000 C program files, and our results for the entire dataset are shown in Table 5. The TOP 10 violations throughout the entire dataset are presented in Table 6. Table 5:SummaryofVerificationOutcomes. Category Count (%) TotalFiles 265,000 100.00 VerificationSuccess 23,036 8.69 VerificationUnknown(Timeout) 66,899 25.24 VerificationFailed 168,208 63.47 Error 6,857 2.59 Table 6: Total Violations Across All Categories. Rank Category Violation Type Count Percentage 1 DF Dereferencefailure:NULLpointer 285,505 41.73% 2 BO Bufferoverflowonscanf 181,122 26.47% 3 DF Dereferencefailure:invalidpointer 61,022 8.92% 4 DF Dereferencefailure:forgottenmemory 21,222 3.10% 5 DF Dereferencefailure:arrayboundsviolated 20,200 2.95% 6 DF Arrayboundsviolated:upperbound 18,895 2.76% |
7 DF Arrayboundsviolated:lowerbound 17,275 2.52% 8 AO Arithmeticoverflowonsub 15,089 2.21% 9 AO Arithmeticoverflowonadd 12,552 1.83% 10 AO Arithmeticoverflowonmul 10,088 1.47% During the 500-second verification time-frame, ESBMC identified 168,208 unique programswithvulnerabilities.Incontrast,only23,036programs,representing8.69%, were verified as secure. Expanding computational resources may increase the number ofprogramsuncoveredfromVU,therebypotentiallyextendingtheVF category.These givenresultsprovideanevenbetterlowerboundcomparedto[9],onwhatpercentage of LLM-generated code is vulnerable. The big picture is worse than simply saying 63.47%, as one file can contain more than one vulnerability. The total number of property violations detected by ESBMC is 684,227. A breakdown of the distribution of these different types of vulnerabilities is shown in Table 7. 20Table 7: Detailed Categorisation of Vulnerabilities as Detected by ESBMC 7.5. Category Description Count Percentage DF Dereferencefailures: -NULLpointer 285505 41.73% -Invalidpointer 61022 8.92% -Forgottenmemory 21222 3.10% -Arrayboundsviolated 20200 2.95% -Invalidateddynamicobject 2904 0.42% -Accesstoobjectoutofbounds 1898 0.28% -Accessedexpiredvariablepointer 1091 0.16% -Writeaccesstostringconstant 751 0.11% -Ofnon-dynamicmemory 251 0.04% -Objectaccessedwithincompatiblebasetype 270 0.04% -Oversizedfieldoffset 141 0.02% -Dataobjectaccessedwithcodetype 11 0.00% AO Arithmeticoverflows: -Onsub 15089 2.21% -Onadd 12552 1.83% -Onmul 10088 1.47% -IEEEmul 7545 1.10% -IEEEdiv 2755 0.40% -IEEEadd 1729 0.25% -IEEEsub 1464 0.21% -Ondiv 691 0.10% -Onshl 669 0.10% -Onmodulus 220 0.03% -Onneg 105 0.02% BO Bufferoverflows: -Onscanf 181122 26.47% -Onfscanf 6593 0.96% -Onsscanf 2810 0.41% ABV Arrayboundsviolations: -Upperbound 18895 2.76% -Lowerbound 17275 2.52% -VLAarraysizeinbytesoverflowsaddressspacesize 4082 0.60% DBZ Divisionbyzero 3592 0.52% MV MiscellaneousVulnerabilities: -Thepointertoafileobjectmustbeavalidargument 1125 0.16% -InvalidFunctionargumentissues 339 0.05% -Sameobjectviolation 137 0.02% -Operandoffreemusthavezeropointeroffset 84 0.01% The most common type of vulnerability is related to “Dereference failures” accounting for 57.77% of the cases, predominantly due to NULL pointer issues. This categoryincludesavarietyofpointer-relatedissuessuchasinvalidpointers,forgotten memory, and array bounds violations, among others. “Buffer overflows”, mainly trig- gered by the scanf function, comprise a significant 27.84% of the vulnerabilities. This highlights common issues in handling buffer sizes and input functions. “Arithmetic overflows” are also notable, covering various operations like subtraction, addition, multiplication, and division, indicating frequent issues in handling numeric calcula- tions without adequate checks. The table further lists “Array bounds violations” and “Division by zero” as common issues, illustrating challenges in correctly managing 21arraysandarithmeticoperations.Asmallerportionofthetablecovers”Miscellaneous Vulnerabilities,” which includes a variety of less frequent but notable issues such as invalid file object pointers and operand violations in memory deallocation. Overall, the data emphasizes the need for robust handling of pointers, buffers, and numeric operations within the source code to mitigate the risk of vulnerabilities. 
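For illustration, the dominant dereference-failure class can be reproduced in a few lines. The fragment below is ours (not from the dataset) and, under the checks described above, triggers three of the labels listed in Table 6:

```c
#include <stdlib.h>

int main() {
    int *p = malloc(10 * sizeof(int));
    p[0] = 42;    /* "dereference failure: NULL pointer" -- malloc may return NULL */
    p[10] = 7;    /* "dereference failure: array bounds violated" -- one past the end */
    return 0;     /* p is never freed: "dereference failure: forgotten memory" */
}
```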
6.2 CWE Classification Next, we link the vulnerabilities to Common Weakness Enumeration (CWE) iden- tifiers by manually assigning the appropriate CWEs after reviewing source codes associated with each vulnerability group. The multifaceted nature of software flaws often results in a single vulnerability associated with multiple CWE identifiers. Table 9 shows the categorization of the most prevalent vulnerabilities and the associated 39 unique CWEs we identified across these categories. The “Miscellaneous Vulnera- bilities” (MV) category includes Assertion failure, Same object violation, Operand of free must have zero pointer offset, function call: not enough arguments, and sev- eral types of dereference failure issues. For these categories we did not individually assign CWE identifiers, as in general they constitute a very small percentage of the dataset.Thesmallsamplesizealsopreventsmachinelearningalgorithmstoefficiently recognise and learn these patterns. While in the .json file they are still marked as vulnerable and count for statistical purposes, we aggregated these samples under the other category so they can be conveniently excluded from the training data. In total, 42 unique CWE were identified in the dataset. From MITRE’s Top 25 Most Dangerous Software Weaknesses for 2023 list, six is present in our list as shown in Table 8. The remaining CWEs in the top 25 list are related to web vulnerabilities Table 8: CWEs from 2023’s MITRE Top 25. Rank CWE Description 1 CWE-787:Out-of-boundsWrite 4 CWE-416:UseAfterFree 6 CWE-20:ImproperInputValidation 7 CWE-125:Out-of-boundsRead 12 CWE-476:NULLPointerDereference 14 CWE-190:IntegerOverfloworWraparound like SQL injection, XSS, and authentication, which are irrelevant to our C language samples. It is vital to emphasize that, in our situation, classifying the C programs basedonCWEidentifiersisnotpractical,contrarytowhatisdoneforotherdatabases like Juliet. As shown in Table 1, most datasets contain only one vulnerability per sample.InthedatasetsReVeal,BigVul,andDiversevul,afunctionisvulnerableifthe |
vulnerability-fixing commit changes it, while in Juliet, a single vulnerability is intro- ducedforeachprogram.InFormAI,asinglefileoftencontainsmultiplevulnerabilities. As noted, a single vulnerability can be associated with multiple CWEs. Additionally, multiple CWEs can be required as a prerequisite for a vulnerability to manifest. As an example, “CWE-120: Buffer Copy without Checking Size of Input (Classic Buffer 22Table 9: Detailed Categorisation of Vulnerabilities as Detected by ESBMC 7.5. Description CWE Associated CWEs Dereference failures: - NULL pointer CWE-476 CWE-690, CWE-391 - Invalid pointer CWE-822 CWE-119, CWE-787, CWE-822 - Forgotten memory CWE-825 CWE-401, CWE-404, CWE-459 - Array bounds violated CWE-125 CWE-119, CWE-787 - Invalidated dynamic object CWE-824 CWE-416, CWE-415 - Access to object out of bounds CWE-125 CWE-119, CWE-787 - Accessed expired variable pointer CWE-416 CWE-825 - Write access to string constant CWE-843 CWE-758 - Of non-dynamic memory CWE-590 CWE-415, CWE-415, CWE-762 - Object accessed with incompatible CWE-843 CWE-119 base type - Oversized field offset CWE-787 CWE-119, CWE-125, CWE-823 - Data object accessed with code type CWE-843 CWE-686, CWE-704 Arithmetic overflows: - On sub CWE-191 CWE-20, CWE-190, CWE-192 - On add CWE-190 CWE-20, CWE-191, CWE-192 - On mul CWE-190 CWE-20, CWE-191, CWE-192 - Floating-point ieee mul CWE-190 CWE-681 - Floating-point ieee div CWE-682 CWE-369, CWE-681 - Floating-point ieee add CWE-190 CWE-681 - Floating-point ieee sub CWE-190 CWE-681 - On div CWE-190 CWE-20, CWE-369 - On shl CWE-190 CWE-192 - On modulus CWE-190 CWE-20, CWE-191 - On neg CWE-191 CWE-190, CWE-192 Buffer overflows: - On scanf CWE-120 {CWE-20, CWE-121, CWE-122 - On fscanf CWE-120 CWE-129, CWE-131, CWE-628 - On sscanf CWE-120 CWE-676, CWE-680, CWE-787} Array bounds violations: - lower bound CWE-129 {CWE-119, CWE-125, CWE-129 - upper bound CWE-788 CWE-131, CWE-193, CWE-787} - VLA array size in bytes overflows CWE-190 CWE-131, CWE-680 Division by zero CWE-369 CWE-691 Miscellaneous Vulnerabilities: - The pointer to a file must be valid CWE-476 CWE-690, CWE-459 - Same object violation CWE-628 CWE-843, CWE-668 - Operand of free must have zero CWE-761 CWE-415, CWE-590 pointer offset Overflow)”,canhappenasaresultof“CWE-676:UseofPotentiallyDangerousFunc- tion”, which can be the scanf function. If this is combined with “CWE-20: Improper Input Validation”, it can result in “CWE-787: Out-of-bounds Write”. Labelling the vulnerable function name, line number, and vulnerability type iden- tified by the ESBMC module provides granular information that can benefit machine learningtraining.Thislevelofdetailcanallowmodelstodiscernpatternsandcorrela- tions with higher precision, thereby improving vulnerability prediction and detection capabilities. Since our programs exhibit numerous vulnerabilities, including multiple 23occurrences of the same type, categorizing each solely into one CWE group, as seen withJuliet,wouldbesub-optimalfortrainingpurposes.Thismethodfailstocommu- nicatecrucialdetailsaboutthevulnerabilities.Forinstance,both”Arithmeticoverflow on add” and ”Arithmetic overflow on div” are assigned the same primary CWE, they manifest differently in the source code. Therefore, merely labelling them with CWEs does not offer the same level of granularity and makes the dataset less suitable for ML. While other datasets focus more on CWEs related to vulnerabilities that could potentially be exploited, ESBMC also detects issues related to software safety. 
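The CWE chain described above can be made concrete. In the following sketch (ours), a single ESBMC finding, "buffer overflow on scanf" with primary CWE-120, simultaneously involves several of the associated CWEs from Table 9:

```c
#include <stdio.h>

int main() {
    char buf[8];
    /* CWE-676: use of a potentially dangerous function (scanf with "%s")
     * CWE-20:  the input length is never validated
     * CWE-787: any input longer than 7 characters writes out of bounds
     * ESBMC reports this line as "buffer overflow on scanf" (CWE-120) */
    scanf("%s", buf);
    printf("%s\n", buf);
    return 0;
}
```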
6.3 LLMs Comparison

6.3.1 Keyword Frequency

We employ a token-based keyword-counting mechanism to extract the cardinality of the 32 C keywords from each LLM-generated subset, as shown in Figures 8 and 9. The frequency is normalized to show the occurrence of keywords per million lines of code for each model.

Fig. 8: 32 C keyword distribution - PART I. [Normalized average keyword frequency heatmap (per million lines of code) for GEMINI-pro, LLAMA2-13B, MISTRAL-7B, FALCON-180B, and GEMMA-7B; int, if, return, char, and for are the most frequent keywords across all five models.]

It is also important to note that while the distribution is similar, the slight fluctuations indicate that the different models handle statements, expressions, and variables differently. This observation is supported by examining the cyclomatic complexity statistics of each model.
Fig. 9: 32 C keyword distribution - PART II. [Normalized average keyword frequency heatmap (per million lines of code) for GPT-4, GPT-3.5, CODELLAMA-13B, and the full FormAI-v2 dataset; int, void, char, if, and return dominate.]

6.4 Cyclomatic Complexity and General Statistics

To comprehend the meaning and limitations of cyclomatic complexity, it is essential to delve into the industry-standard guidelines established by Microsoft and NIST [109, 110]. Simply put, "higher numbers are bad and lower numbers are good". Cyclomatic complexity serves as a metric for assessing the difficulty involved in testing, maintaining, or troubleshooting code, as well as predicting the likelihood of errors. It is calculated by counting decision points in the source code. NIST defines it as "the amount of decision logic in a source code function" and sets the recommended maximum as 10 [110]. They note that a complexity threshold above 10 is only acceptable for bigger projects where experienced engineers can manage the extra testing needed. Since the programs are small standalone units, a cyclomatic complexity number surpassing 10 is deemed unacceptable, and even the suggested threshold may be considered excessive.

Table 10: Statistics of Cyclomatic Complexity, ordered by Average CC.

Model | Total Samples | Avg Lines | Min CC | Max CC | Avg CC | Median CC | Std. Dev. CC
GPT-4 | 1000 | 104.54 | 1 | 33 | 3.01 | 2.71 | 1.60
Mistral-7B | 10000 | 75.02 | 1 | 33.0 | 3.85 | 3.33 | 2.22
Falcon-180B | 72000 | 72.06 | 1 | 46 | 4.34 | 3.33 | 3.10
Llama2-13B | 5000 | 73.35 | 1 | 34.0 | 4.29 | 3.50 | 2.77
Gemini-Pro | 40000 | 98.25 | 0 | 154 | 4.52 | 3.25 | 4.05
Codellama-13B | 12000 | 83.30 | 1 | 94 | 4.63 | 3.20 | 4.64
Gemma-7B | 47000 | 66.53 | 1 | 109 | 5.03 | 3.33 | 5.23
GPT-3.5 | 78000 | 96.52 | 0 | 240.0 | 6.10 | 4.43 | 5.68

We compute each model's cyclomatic complexity (CC) number by randomly selecting 1000 samples, as illustrated in Figure 10. The results unveil both expected and unexpected findings. While the exact parameter size of GPT-4 remains unknown, this model consistently generates the highest-quality code in terms of CC. Moreover, GPT-4 demonstrates remarkable performance on various benchmarks, such as HumanEval, indicating its proficiency in efficiently solving tasks without generating overly complex code. However, contrary to expectations, the analysis of Table 10 reveals that GPT-4 does not necessarily produce shorter or simpler code. It tends to generate the longest C programs and exhibits the highest verification unknown score, implying that the verification process for ESBMC takes longer in the case of GPT-4 samples. Despite this, it manages to keep the CC number low. Among the examined LLMs, GPT-3.5 and Gemma-7B demonstrate the poorest performance regarding cyclomatic complexity (CC). Interestingly, despite being a smaller model, Mistral-7B closely follows GPT-4 in code quality.
GPT-4, Mistral-7B, Falcon-180B, and Llama2-13B consistently exhibit low CC numbers across all samples, positioning them as the top performers regarding average CC. Conversely, other models display high maximum CC values, with correspondingly high standard deviations, indicating less consistent performance. It is important to note that, due to the differing total sample sizes, certain metrics such as Max CC should be treated with caution: the more samples are generated, the higher the chance that programs with high CC numbers appear. Still, these metrics can serve as a good indicator; for example, Falcon-180B never produced a program with a CC number higher than 46, even though its sample size was the second largest. We aim to expand the dataset and equalise the sample size across the various models to enable an even comparison. Despite this, such metrics are a useful indicator of performance.

[Fig. 10: Cyclomatic complexity of different LLMs on 1000 randomly selected samples. Per-panel averages: (a) GPT-4 3.01, (b) MISTRAL-7B 3.80, (c) FALCON-180B 4.36, (d) LLAMA2-13B 4.41, (e) GEMINI-pro 4.57, (f) CODELLAMA-13B 4.69, (g) GEMMA-7B 4.91, (h) GPT-3.5 6.13.]

6.5 Vulnerability Ranking

Tables 11, 12, 13, and 14 provide an overview of the top 10 vulnerabilities produced by each model. It is important to note that, since the number of samples created by each model differs, the individual vulnerability counts do not serve as a basis for comparison.

Buffer overflow on scanf ranks among the top three vulnerabilities identified across every LLM model. The functions scanf, fscanf, and sscanf do not limit the input size to the size of their respective buffers. This oversight leads to a risk of buffer overflow, which could enable an attacker to execute arbitrary code or cause a crash. As earlier noted, this issue can be associated with several CWEs, including CWE-676, CWE-20, and CWE-787. While buffer overflow is a type of out-of-bounds write, CWE-787 encompasses a wider array of vulnerabilities, whereas CWE-120 is focused specifically on the classic scenario of buffer overflow resulting from not checking input sizes during buffer copy operations. Note that this analysis does not provide an accurate count of every identified CWE, but rather covers the vulnerabilities verifiably identified by ESBMC. The results indicate that LLMs consistently produce these errors in a zero-shot setting. While more complex issues such as arithmetic overflows and array bounds violations require a deeper understanding of the programming scenario, issues related to scanf would be comparatively simple to avoid. Nonetheless, every model tested consistently exhibits buffer overflow errors on scanf, although the frequency varies.
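The pattern behind this finding can be made concrete with a minimal sketch of ours (illustrative, not an actual dataset sample):

    #include <stdio.h>

    int main(void)
    {
        char name[32];

        /* Vulnerable: %s places no limit on the input length, so any input
         * longer than 31 characters overflows 'name' (CWE-120/CWE-787). */
        scanf("%s", name);

        /* Safer: a field width caps the input at 31 characters plus the
         * terminating NUL that scanf appends. */
        scanf("%31s", name);

        printf("%s\n", name);
        return 0;
    }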
Table 11: Top 10 Vulnerabilities in GPT-3.5 and FALCON-180B.

GPT-3.5
Rank   Category   Violation Type                               Count    Percentage
1      BO         Buffer overflow on scanf                     84,363   38.05%
2      DF         Dereference failure: NULL pointer            58,889   26.56%
3      DF         Dereference failure: invalid pointer         20,887   9.42%
4      DF         Dereference failure: forgotten memory        4,788    2.16%
5      DF         Array bounds violated: lower bound           7,836    3.53%
6      DF         Array bounds violated: upper bound           7,773    3.51%
7      AO         Arithmetic overflow on sub                   6,422    2.90%
8      DF         Dereference failure: array bounds violated   6,701    3.02%
9      AO         Arithmetic overflow on add                   5,169    2.33%
10     AO         Arithmetic overflow on mul                   4,266    1.92%

FALCON-180B
Rank   Category   Violation Type                               Count    Percentage
1      BO         Buffer overflow on scanf                     52,811   34.99%
2      DF         Dereference failure: NULL pointer            44,957   29.79%
3      DF         Dereference failure: invalid pointer         15,822   10.48%
4      DF         Dereference failure: forgotten memory        5,942    3.94%
5      DF         Dereference failure: array bounds violated   4,675    3.10%
6      DF         Array bounds violated: upper bound           4,266    2.83%
7      AO         Arithmetic overflow on sub                   3,697    2.45%
8      DF         Array bounds violated: lower bound           3,566    2.36%
9      AO         Arithmetic overflow on add                   2,880    1.91%
10     BO         Buffer overflow on fscanf                    2,555    1.69%

Table 12: Top 10 Vulnerabilities in LLAMA2-13B and GEMMA-7B.

LLAMA2-13B
Rank   Category   Violation Type                                    Count    Percentage
1      DF         Dereference failure: NULL pointer                 4,649    54.12%
2      BO         Buffer overflow on scanf                          814      9.48%
3      DF         Dereference failure: invalid pointer              811      9.44%
4      DF         Dereference failure: array bounds violated        435      5.06%
5      DF         Dereference failure: forgotten memory             389      4.53%
6      AO         Arithmetic overflow on add                        236      2.75%
7      AO         Arithmetic overflow on mul                        179      2.08%
8      DF         Array bounds violated: upper bound                176      2.05%
9      AO         Arithmetic overflow on sub                        152      1.77%
10     BO         Division by zero                                  125      1.46%

GEMMA-7B
Rank   Category   Violation Type                                    Count    Percentage
1      DF         Dereference failure: NULL pointer                 89,888   65.76%
2      BO         Buffer overflow on scanf                          19,097   13.97%
3      DF         Dereference failure: forgotten memory             5,491    4.02%
4      DF         Dereference failure: invalid pointer              3,929    2.87%
5      DF         Array bounds violated: upper bound                3,408    2.49%
6      DF         Array bounds violated: lower bound                2,828    2.07%
7      AO         Arithmetic overflow on sub                        2,094    1.53%
8      DF         Dereference failure: array bounds violated        2,060    1.51%
9      AO         Arithmetic overflow on floating-point ieee mul    1,458    1.07%
10     AO         Arithmetic overflow on add                        1,415    1.04%

Dereference failure-related issues were collectively the most numerous category for every LLM, and NULL pointer dereference consistently ranks first or second in every model.
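A typical shape of such a failure, again as an illustrative sketch of ours rather than a dataset sample, is an allocation whose result is used without a NULL check:

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Vulnerable: malloc may return NULL, and the strcpy below would
         * then dereference a NULL pointer -- one typical shape of the
         * "Dereference failure: NULL pointer" violations ESBMC reports. */
        char *buf = malloc(64);
        strcpy(buf, "hello");

        /* Safer: check the allocation before first use. */
        char *safe = malloc(64);
        if (safe == NULL)
            return 1;           /* handle allocation failure */
        strcpy(safe, "hello");

        free(buf);
        free(safe);
        return 0;
    }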
The generated code often includes pointers, but these models may not always correctly manage pointer operations or accurately simulate the complex memory management behaviors of real-world applications, leading to frequent dereference issues. LLMs, which operate primarily on pattern recognition without an understanding of the underlying principles, therefore often generate code with flawed pointer handling. In addition, the datasets that were used for the training of these models often include a wide variety of pointer usage examples, which may not always adhere to safe programming standards. This, coupled with the complexities of dynamic memory management in languages like C, poses challenges for LLMs in consistently generating secure code. The recurrent issues with pointer dereferencing underscore the risks associated with deploying LLM-generated code in critical systems where security and reliability are paramount.

To mitigate these risks, it is crucial to develop enhanced training methodologies focused on robust memory handling. It is also paramount to implement advanced code analysis tools and frameworks. These tools will help detect and rectify vulnerabilities before deployment, thereby improving the security and reliability of LLM-generated code for real-world applications.

While dereference failures and buffer overflows pose risks, their severity and frequency vary significantly. For instance, GEMMA-7B shows a disproportionately high incidence of NULL pointer dereference failures at 65.76%, suggesting specific weaknesses in its memory management capabilities.

Table 13: Top 10 Vulnerabilities in CODELLAMA-13B and GEMINI-pro 1.0.

CODELLAMA-13B
Rank   Category   Violation Type                               Count    Percentage
1      DF         Dereference failure: NULL pointer            11,741   45.22%
2      BO         Buffer overflow on scanf                     5,162    19.88%
3      DF         Dereference failure: invalid pointer         3,528    13.59%
4      DF         Dereference failure: array bounds violated   880      3.39%
5      DF         Dereference failure: forgotten memory        711      2.74%
6      DF         Array bounds violated: upper bound           659      2.54%
7      AO         Arithmetic overflow on add                   512      1.97%
8      DF         Array bounds violated: lower bound           465      1.79%
9      AO         Arithmetic overflow on mul                   454      1.75%
10     AO         Arithmetic overflow on sub                   373      1.44%

GEMINI-pro 1.0
Rank   Category   Violation Type                               Count    Percentage
1      DF         Dereference failure: NULL pointer            68,570   57.27%
2      DF         Dereference failure: invalid pointer         13,361   11.16%
3      BO         Buffer overflow on scanf                     13,106   10.95%
4      DF         Dereference failure: array bounds violated   4,651    3.88%
5      DF         Dereference failure: forgotten memory        3,433    2.87%
6      AO         Arithmetic overflow on mul                   2,450    2.05%
7      DF         Array bounds violated: upper bound           2,154    1.80%
8      AO         Arithmetic overflow on add                   1,882    1.57%
9      DF         Array bounds violated: lower bound           1,840    1.54%
10     AO         Arithmetic overflow on sub                   1,815    1.52%
Table 14: Top 10 Vulnerabilities in MISTRAL-7B and GPT-4.

MISTRAL-7B
Rank   Category   Violation Type                                    Count    Percentage
1      DF         Dereference failure: NULL pointer                 6,318    33.19%
2      BO         Buffer overflow on scanf                          5,169    27.15%
3      DF         Dereference failure: invalid pointer              2,474    13.00%
4      DF         Dereference failure: array bounds violated        747      3.92%
5      DF         Array bounds violated: lower bound                621      3.26%
6      AO         Arithmetic overflow on sub                        475      2.50%
7      DF         Array bounds violated: upper bound                452      2.37%
8      DF         Dereference failure: forgotten memory             413      2.17%
9      AO         Arithmetic overflow on add                        402      2.11%
10     BO         Buffer overflow on sscanf                         393      2.06%

GPT-4
Rank   Category   Violation Type                                    Count    Percentage
1      BO         Buffer overflow on scanf                          600      34.70%
2      DF         Dereference failure: NULL pointer                 493      28.51%
3      DF         Dereference failure: invalid pointer              210      12.15%
4      AO         Arithmetic overflow on sub                        61       3.53%
5      AO         Arithmetic overflow on floating-point ieee mul    57       3.30%
6      AO         Arithmetic overflow on add                        56       3.24%
7      DF         Dereference failure: forgotten memory             55       3.18%
8      DF         Dereference failure: array bounds violated        51       2.95%
9      DF         Array bounds violated: lower bound                34       1.97%
10     AO         Arithmetic overflow on mul                        25       1.45%

Arithmetic overflows appear consistently for all models, mostly populating the bottom five ranks. They vary in terms of the specific operations (add, sub, mul) affected, indicating differing arithmetic handling across the models. Interestingly, Llama2-13B stands out as the only model below 10% on scanf()-related violations, with Gemini-Pro following at around 11%. However, similar to GEMMA-7B, both models exhibit a disproportionately high incidence of NULL pointer dereference failures.

The consistent appearance of certain errors across different models emphasizes the need for comprehensive testing and validation frameworks to address these recurrent issues before deployment. While all models exhibit the same vulnerabilities, significant differences in the frequency and type of other vulnerabilities, such as arithmetic overflows, suggest that model-specific optimizations and enhancements are necessary.
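For reference, the arithmetic overflow category covers code like the following sketch of ours (illustrative, not a dataset sample), where a signed addition can exceed INT_MAX, which is undefined behavior in C and is the kind of operation such checks flag:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int a = INT_MAX;
        int b = 1;

        /* Vulnerable: signed integer overflow is undefined behavior;
         * reported as "arithmetic overflow on add". */
        int sum = a + b;

        /* Safer: test the bound before adding (assuming b > 0). */
        if (a <= INT_MAX - b) {
            sum = a + b;
            printf("%d\n", sum);
        } else {
            puts("addition would overflow");
        }
        return 0;
    }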
6.6 LLM Ranking: Which Model Is the Most Secure?

To compare which model is the "worst" or the "best" when it comes to secure coding, as fairly as possible, we investigate several metrics, such as the ratio of verification results, the average property violation per file, and the average property violation per line of code.

Table 15: Verification Results Summary, sorted by average property violation per line.

Model           Avg Prop. Viol. per Line   VS Rank   VS       VF Rank   VF       VU (Timeout)   Avg Prop. Viol. per File
GPT-4           0.0165                     3         11.40%   2         50.80%   36.50%         3.40
Llama2-13B      0.0234                     2         13.36%   1         47.50%   35.58%         3.62
Mistral-7B      0.0254                     7         7.11%    4         62.00%   28.19%         3.07
Codellama-13B   0.0260                     1         15.24%   3         52.34%   30.12%         4.13
Falcon-180B     0.0291                     8         6.49%    5         62.10%   28.61%         3.38
GPT-3.5         0.0295                     6         7.57%    7         64.25%   26.65%         4.42
Gemini-Pro      0.0305                     5         9.56%    6         63.63%   24.39%         4.70
Gemma-7B        0.0437                     4         11.29%   8         69.29%   15.28%         4.20

Legend: VS: Verification Success; VF: Verification Failed; VU: Verification Unknown (Timeout). The two rank columns order the models by VS (higher is better) and by VF (lower is better), respectively.

The results indicate that there is no clear winner. Mistral-7B, despite having the fewest property violations per file, writes shorter code, reducing its likelihood of coding errors. However, this model also performs poorly in the VS metric, with only 7.11% of its samples categorized as being free of vulnerabilities. Codellama-13B achieved the highest VS rate, followed by Llama2-13B, and their VF ratios rank third and first, respectively, which is a good result for the Llama family. Still, it is best to remember that nearly half of their samples had vulnerabilities. Moreover, their VU rates are fairly high at 30% and 35%, which means that with further verification, there is still a chance that other models will take the lead.

GPT-4 outperforms GPT-3.5 while showing the highest VU percentage under the current ESBMC settings, indicating its ability to produce more complex and longer outputs. It is important to note that this complexity is not reflected by the CC number, as discussed earlier, which echoes practitioners' criticism of cyclomatic complexity. While GPT-4 ranks third in VS and second in VF, it finishes first in average property violations per line.
This might be the fairest way to compare models: the more lines, the more chances for vulnerabilities, while this metric does not penalize models that produce shorter code. (For GPT-4, for example, 0.0165 violations per line corresponds to roughly one detected violation per 60 lines of generated code.) There is no definitive winner in this analysis; however, Gemma-7B, Gemini-Pro, and GPT-3.5 have, with the current verification settings, the highest VF ratios and the highest average property violations both per line and per file. This unveils an interesting and unexpected finding: empirically, a slight correlation between high cyclomatic complexity and vulnerable coding patterns can be observed. As Table 10 shows, these three models had the worst CC statistics. The correlation is, however, not clear-cut, and there are certainly other factors; for example, Codellama-13B, with a maximum CC number of 94, did not show poor performance in the verification results.

It is important to underline that while it might be tempting to speculate on a winner, having such a high verification-failed ratio is unacceptable from an SE perspective for any model. All models surpassed a VF threshold of 47%, indicating that nearly half or more of the generated programs are vulnerable. The conclusion of this analysis must be clear: using code generated by state-of-the-art Large Language Models without any additional framework for validation and vulnerability analysis carries severe risks. While LLMs can be useful for automating simple tasks and scripting, directly including such code in production software without oversight from experienced software engineers is irresponsible and should be avoided.

7 Limitations and Future Research

7.1 Future Research Directions

The dataset containing all 265,000 C program files, along with the .json and .csv files, is published on GitHub (https://github.com/FormAI-Dataset). The dataset is specifically prepared to support machine learning and fine-tuning. The absence of false positives makes the dataset suitable for benchmarking the effectiveness of various static and dynamic analysis tools and ML-based classifiers. Training on data free of false negatives is equally important, so we indicate in the .json file for each sample whether the verification process has finished, as such files do not contain any vulnerabilities detectable by ESBMC. The diverse structure of the C programs generated in the FormAI-v2 dataset made it excellent for an unexpected use case: fuzzing different applications. We uncovered and reported over thirteen bugs while executing ESBMC on the FormAI dataset. After validating these issues, the ESBMC developers managed to resolve them. These included errors in the goto-cc conversion and the creation of invalid SMT solver formulas. Additionally, we identified bugs in CBMC [111] and the Clang compiler, which failed to compile several programs, while the GNU C compiler had no issue. We promptly communicated these findings to the respective developers. The FormAI-v2 dataset aims to be a useful resource for training machine learning algorithms to possess the capabilities of the ESBMC module. Our results give rise to several interesting research directions:

• It would be important to investigate why programs under "Verification Successful" are free of vulnerabilities. Is it because of better coding practices, or simply because, for example, they do not take user input and thereby avoid buffer overflows?
• What is the right path towards LLMs producing secure code: re-training models on better data, fine-tuning, or using current models in various few-shot frameworks with better prompting?
• Since several programs contain multiple vulnerabilities, this dataset is ideal for benchmarking and testing various vulnerability detection tools.
• As our motivation section showcased, GPT-4 did not excel at avoiding and fixing the vulnerability in the example. How do different LLMs compare in understanding, correctly fixing, and detecting coding errors?
• We aim to further grow the FormAI dataset by including more state-of-the-art models, and by increasing the number of samples for each LLM to obtain an overall larger dataset.
• How do different programming tasks or styles impact vulnerable coding patterns? Are there tasks that LLMs consistently get wrong?

While we can partially address the last question, noting the use of insecure functions and poor input sanitization in handling user inputs, exploring this issue across various domains, such as networking or cryptography, would be beneficial.

7.2 Limitations and Threats to Validity

With a larger timeout setting, ESBMC might find slightly more vulnerabilities in a given program. Whether the verifier can finish the process under a given timeout depends on the available computational capacity, and the same parameter setting can yield a higher or lower detection rate on different architectures. To find all errors detectable by ESBMC, unwind must be set to infinite, and ESBMC must complete the verification process. As we provide the original C programs and the instructions on how to run ESBMC, researchers who invest additional computational resources have the potential to enhance our findings. As the "Verification Unknown" category still contains samples for every model, the current results represent a lower bound on the percentage of vulnerable files the LLMs produce.

While ESBMC is a robust tool for detecting many types of errors in C, it is not currently suited to detect design flaws, semantic errors, or performance issues. As such, more vulnerabilities might be present in the code besides the detected ones. We therefore recommend that training and fine-tuning applications be restricted to the vulnerabilities detectable by ESBMC on this dataset.

8 Conclusions

In this research, we analyzed eight state-of-the-art Large Language Models to assess their likelihood of introducing vulnerabilities during neutral-prompt-based code generation. The models included in our analysis were Mistral-7B, Falcon-180B, GPT-4,
Llama2-13B, Codellama-13B, Gemma-7B, GPT-3.5, and Gemini-Pro. We employed a zero-shot prompting method to encompass numerous programming scenarios for C code generation. These programs constitute the FormAI-v2 dataset, containing 265,000 independent compilable C programs.

We used the Efficient SMT-based Bounded Model Checker (ESBMC), a state-of-the-art formal verification tool, to identify vulnerabilities. Each program was given a verification period of 500 seconds with the unwinding parameter set to infinite, uncovering a total of 684,227 vulnerabilities. Overall, 67% of the programs were vulnerable. Detailed labelling of each sample, including filename, type of vulnerability, function name, error type, and source code, is documented in a .json file, as detailed in Appendix Fig. 1, to facilitate the dataset's use in machine learning applications.

Additionally, the FormAI-v2 dataset proved instrumental for fuzzing various applications, leading to the identification of multiple bugs in ESBMC, CBMC, and the Clang compiler. We have identified 42 distinct CWE identifiers, six of which are featured on MITRE's Top 25 list for 2023. Notably, "CWE-787 Out-of-bounds Write," the top vulnerability on the list, was confirmed in at least 181,122 cases. These findings provide clear answers to our research questions: while the literature reveals significant variations in the ability of these models to solve tasks, this is not mirrored in their susceptibility to producing vulnerabilities in source code. Our findings conclusively show that, despite differences among the examined models in terms of generating code, they all consistently introduce severe vulnerabilities when prompted with simple coding tasks. Our study indicates that despite the impressive capabilities of Large Language Models in code generation, employing their output in production requires meticulous risk assessment. Relying on these models without expert oversight in a production context is inadvisable.

Appendix

Switch                      Description
--timeout <seconds>         Establishes a time limit in seconds for the analysis. If the verification process exceeds this duration, ESBMC will automatically halt the analysis.
--k-induction <number>      Enables k-induction proof, a formal verification technique to prove properties of loops.
--unlimited-k-steps         Allows the k-induction process to proceed without a pre-defined limit on the number of iterations, aiming for more thorough verification.
--unwind <number>           Sets the unwind limit for loops and recursion. This specifies how many times loops or recursive functions should be unwound during the analysis.
--no-unwinding-assertions   Disables assertions that check whether the loop or recursion unwinding was sufficient; useful when the unwind bound is considered adequate.
--falsification             ESBMC prioritizes finding violations of the properties, and the tool terminates as soon as it discovers a counterexample.
--overflow                  Enables checking for arithmetic overflows within the program, helping detect when operations exceed the data type limits.
--memory-leak-check         Activates the detection of memory leaks, ensuring that all allocated memory is properly freed before program termination.
--multi-property            Enables the verification of multiple properties simultaneously, optimizing the analysis by checking all specified properties in a single run.
--show-stacktrace           When an error occurs, this option provides a stack trace, useful for debugging by showing the sequence of function calls leading to the error.
--verbosity 6               Sets the verbosity level of the output to detailed, offering a deeper insight into the tool's operations and the verification process.
--incremental-bmc           Enhances standard BMC by gradually increasing the bound depth, beginning at a low level and continuing until a bug is detected, a specified depth is reached, or resources are exhausted.

Table 1: Description of Investigated ESBMC Switches.
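For orientation, a representative ESBMC invocation combining several of the switches documented above might look as follows (an illustrative command of ours, using the paper's 500-second budget; not necessarily the exact configuration used in this study):

    esbmc sample.c --timeout 500 --overflow --memory-leak-check --multi-property --show-stacktrace --verbosity 6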
[Fig. 1: Example from the JSON file, demonstrating the labelling process. The source code object has been omitted.]

Data Availability Statements

In this study, a total of 265,000 C samples were generated and examined. The findings and all the generated C samples are available for access and download from the project's website at https://github.com/FormAI-Dataset.

Conflicts of interest

The authors have no competing interests to declare that are relevant to the content of this article.

References

[1] Wang, J., Huang, Y., Chen, C., Liu, Z., Wang, S., Wang, Q.: Software testing with large language models: Survey, landscape, and vision. IEEE Transactions on Software Engineering (2024)
[2] Xu, F.F., Alon, U., Neubig, G., Hellendoorn, V.J.: A systematic evaluation of large language models of code. In: Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, pp. 1–10 (2022)
[3] Jain, N., Vaidyanath, S., Iyer, A., Natarajan, N., Parthasarathy, S., Rajamani, S., Sharma, R.: Jigsaw: Large language models meet program synthesis. In: Proceedings of the 44th International Conference on Software Engineering, pp. 1219–1231 (2022)
[4] Bui, N.D.Q., Le, H., Wang, Y., Li, J., Gotmare, A.D., Hoi, S.C.H.: CodeTF: One-stop Transformer Library for State-of-the-art Code LLM. arXiv (2023). http://arxiv.org/abs/2306.00029 Accessed 2023-06-22
[5] Ross, S.I., Martinez, F., Houde, S., Muller, M., Weisz, J.D.: The Programmer's Assistant: Conversational Interaction with a Large Language Model for Software Development. In: Proceedings of the 28th International Conference on Intelligent User Interfaces. IUI '23, pp. 491–514. Association for Computing Machinery, New York, NY, USA (2023). https://doi.org/10.1145/3581641.3584037 Accessed 2023-06-22
[6] Chavez, M.R., Butler, T.S., Rekawek, P., Heo, H., Kinzler, W.L.: Chat Generative Pre-trained Transformer: why we should embrace this technology. American Journal of Obstetrics and Gynecology 228(6), 706–711 (2023) https://doi.org/10.1016/j.ajog.2023.03.010 Accessed 2023-06-22
[7] Charalambous, Y., Tihanyi, N., Jain, R., Sun, Y., Ferrag, M.A., Cordeiro, L.C.: A New Era in Software Security: Towards Self-Healing Software via Large Language Models and Formal Verification. arXiv (2023). https://doi.org/10.48550/arXiv.2305.14752 Accessed 2023-05-31
[8] Perry, N., Srivastava, M., Kumar, D., Boneh, D.: Do users write more insecure code with ai assistants? In: Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. CCS '23, pp. 2785–2799. Association for Computing Machinery, New York, NY, USA (2023). https://doi.org/10.1145/3576915.3623157
[9] Tihanyi, N., Bisztray, T., Jain, R., Ferrag, M.A., Cordeiro, L.C., Mavroeidis, V.: The formai dataset: Generative ai in software security through the lens of formal verification. In: Proceedings of the 19th International Conference on Predictive Models and Data Analytics in Software Engineering. PROMISE 2023, pp. 33–43. Association for Computing Machinery, New York, NY, USA (2023). https://doi.org/10.1145/3617555.3617874
[10] Team, G., Anil, R., Borgeaud, S., Wu, Y., Alayrac, J.-B., Yu, J., Soricut, R., Schalkwyk, J., Dai, A.M., Hauth, A., et al.: Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805 (2023)
[11] Almazrouei, E., Alobeidli, H., Alshamsi, A., Cappelli, A., Cojocaru, R., Debbah, M., Goffinet, É., Hesslow, D., Launay, J., Malartic, Q., et al.: The falcon series of open language models. arXiv preprint arXiv:2311.16867 (2023)
[12] Roziere, B., Gehring, J., Gloeckle, F., Sootla, S., Gat, I., Tan, X.E., Adi, Y., Liu, J., Remez, T., Rapin, J., et al.: Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950 (2023)
[13] Gadelha, M.Y.R., Ismail, H.I., Cordeiro, L.C.: Handling loops in bounded model checking of C programs via k-induction. Int. J. Softw. Tools Technol. Transf. 19(1), 97–114 (2017) https://doi.org/10.1007/s10009-015-0407-9
[14] Gadelha, M.R., Monteiro, F.R., Morse, J., Cordeiro, L.C., Fischer, B., Nicole, D.A.: Esbmc 5.0: an industrial-strength c model checker. In: Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, pp. 888–891. ACM, Montpellier, France (2018)
[15] Gadelha, M.Y.R., Monteiro, F.R., Cordeiro, L.C., Nicole, D.A.: ESBMC v6.0: Verifying C programs using k-induction and invariant inference - (competition contribution). In: Beyer, D., Huisman, M., Kordon, F., Steffen, B. (eds.) Tools and Algorithms for the Construction and Analysis of Systems (TACAS). LNCS, vol. 11429, pp. 209–213 (2019). Springer
[16] Menezes, R.S., Aldughaim, M., Farias, B., Li, X., Manino, E., Shmarov, F., Song, K., Brauße, F., Gadelha, M.R., Tihanyi, N., Korovin, K., Cordeiro, L.C.: ESBMC v7.4: Harnessing the power of intervals - (competition contribution). In: Tools and Algorithms for the Construction and Analysis of Systems (TACAS). LNCS, vol. 14572, pp. 376–380 (2024). Springer
[17] McCabe, T.J.: A complexity measure. IEEE Transactions on Software Engineering SE-2(4), 308–320 (1976) https://doi.org/10.1109/TSE.1976.233837
[18] Chen, M., Tworek, J., Jun, H., Yuan, Q., Oliveira Pinto, H.P., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., Ray, A., Puri, R., Krueger, G., Petrov, M., Khlaaf, H., Sastry, G., Mishkin, P., Chan, B., Gray, S., Ryder, N., Pavlov, M., Power, A., Kaiser, L., Bavarian, M., Winter, C., Tillet, P., Such, F.P., Cummings, D., Plappert, M., Chantzis, F., Barnes, E., Herbert-Voss, A., Guss, W.H., Nichol, A., Paino, A., Tezak, N., Tang, J., Babuschkin, I., Balaji, S., Jain, S., Saunders, W., Hesse, C., Carr, A.N., Leike, J., Achiam, J., Misra, V., Morikawa, E., Radford, A., Knight, M., Brundage, M., Murati, M., Mayer, K., Welinder, P., McGrew, B., Amodei, D., McCandlish, S., Sutskever, I., Zaremba, W.: Evaluating large language models trained on code (2021) arXiv:2107.03374 [cs.LG]
[19] OpenAI: GPT-4 Technical Report. arXiv (2023). http://arxiv.org/abs/2303.08774 Accessed 2023-05-29
[20] Nehorai, N.: Analyzing Common Vulnerabilities Introduced by Code-Generative AI — HackerNoon (2024). https://hackernoon.com/analyzing-common-vulnerabilities-introduced-by-code-generative-ai Accessed 2024-02-28
[21] Cordeiro, L.C., Lima Filho, E.B., Bessa, I.V.: Survey on automated symbolic verification and its application for synthesising cyber-physical systems. IET Cyper-Phys. Syst.: Theory & Appl. 5(1), 1–24 (2020) https://doi.org/10.1049/IET-CPS.2018.5006
[22] Anwar, U., Saparov, A., Rando, J., Paleka, D., Turpin, M., Hase, P., Lubana, E.S., Jenner, E., Casper, S., Sourbut, O., et al.: Foundational challenges in assuring alignment and safety of large language models. arXiv preprint arXiv:2404.09932 (2024)
[23] Kirova, V.D., Ku, C.S., Laracy, J.R., Marlowe, T.J.: Software engineering education must adapt and evolve for an llm environment. In: Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1. SIGCSE 2024, pp. 666–672. Association for Computing Machinery, New York, NY, USA (2024). https://doi.org/10.1145/3626252.3630927
[24] Austin, J., Odena, A., Nye, M., Bosma, M., Michalewski, H., Dohan, D., Jiang, E., Cai, C., Terry, M., Le, Q., et al.: Program synthesis with large language models (2021)
[25] Lu, S., Guo, D., Ren, S., Huang, J., Svyatkovskiy, A., Blanco, A., Clement, C., Drain, D., Jiang, D., Tang, D., et al.: Codexglue: A machine learning benchmark dataset for code understanding and generation. arXiv preprint arXiv:2102.04664 (2021)
[26] White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., Schmidt, D.C.: A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. arXiv (2023). https://doi.org/10.48550/arXiv.2302.11382 Accessed 2023-06-24
[27] Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T.L., Cao, Y., Narasimhan, K.: Tree of Thoughts: Deliberate Problem Solving with Large Language Models. arXiv (2023). http://arxiv.org/abs/2305.10601 Accessed 2023-05-29
[28] Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., Zhou, D.: Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. arXiv (2023). https://doi.org/10.48550/arXiv.2201.11903 Accessed 2023-06-24
[29] Guo, D., Zhu, Q., Yang, D., Xie, Z., Dong, K., Zhang, W., Chen, G., Bi, X., Wu, Y., Li, Y., et al.: Deepseek-coder: When the large language model meets programming–the rise of code intelligence. arXiv preprint arXiv:2401.14196 (2024)
[30] Wang, H., Liu, Z., Wang, S., Cui, G., Ding, N., Liu, Z., Yu, G.: Intervenor: Prompt the coding ability of large language models with the interactive chain of repairing. arXiv preprint arXiv:2311.09868 (2023)
[31] Huang, D., Bu, Q., Zhang, J.M., Luck, M., Cui, H.: Agentcoder: Multi-agent-based code generation with iterative testing and optimisation. arXiv preprint arXiv:2312.13010 (2023)
[32] Muennighoff, N., Liu, Q., Zebaze, A., Zheng, Q., Hui, B., Zhuo, T.Y., Singh, S., Tang, X., Von Werra, L., Longpre, S.: Octopack: Instruction tuning code large language models. arXiv preprint arXiv:2308.07124 (2023)
[33] Lin, F., Kim, D.J., et al.: When llm-based code generation meets the software development process. arXiv preprint arXiv:2403.15852 (2024)
[34] Khoury, R., Avila, A.R., Brunelle, J., Camara, B.M.: How Secure is Code Generated by ChatGPT? arXiv (2023). http://arxiv.org/abs/2304.09655 Accessed 2023-05-30
[35] Pearce, H., Ahmad, B., Tan, B., Dolan-Gavitt, B., Karri, R.: Asleep at the Keyboard? Assessing the Security of GitHub Copilot's Code Contributions. arXiv (2021). https://doi.org/10.48550/arXiv.2108.09293 Accessed 2023-06-10
[36] Ma, W., Liu, S., Wang, W., Hu, Q., Liu, Y., Zhang, C., Nie, L., Liu, Y.: The Scope of ChatGPT in Software Engineering: A Thorough Investigation. arXiv (2023). https://doi.org/10.48550/arXiv.2305.12138 Accessed 2023-06-10
[37] Imani, S., Du, L., Shrivastava, H.: Mathprompter: Mathematical reasoning using large language models (2023). https://doi.org/10.48550/arXiv.2303.05398
[38] Hou, X., Zhao, Y., Liu, Y., Yang, Z., Wang, K., Li, L., Luo, X., Lo, D., Grundy, J., Wang, H.: Large Language Models for Software Engineering: A Systematic Literature Review (2024)
[39] Chan, A., Kharkar, A., Moghaddam, R.Z., Mohylevskyy, Y., Helyar, A., Kamal, E., Elkamhawy, M., Sundaresan, N.: Transformer-based vulnerability detection in code at edittime: Zero-shot, few-shot, or fine-tuning? arXiv preprint arXiv:2306.01754 (2023)
[40] Nguyen, V., Yuan, X., Wu, T., Nepal, S., Grobler, M., Rudolph, C.: Deep learning-based out-of-distribution source code data identification: How far have we gone? arXiv preprint arXiv:2404.05964 (2024)
[41] Gao, Z., Wang, H., Zhou, Y., Zhu, W., Zhang, C.: How far have we gone in vulnerability detection using large language models. arXiv preprint arXiv:2311.12420 (2023)
[42] Gao, S., Mao, W., Gao, C., Li, L., Hu, X., Xia, X., Lyu, M.R.: Learning in the wild: Towards leveraging unlabeled data for effectively tuning pre-trained code models. arXiv preprint arXiv:2401.01060 (2024)
[43] Grishina, A., Hort, M., Moonen, L.: The early bird catches the bug: On exploiting early layers of encoder models for more efficient code classification. In: Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 895–907 (2023)
[44] Khare, A., Dutta, S., Li, Z., Solko-Breslin, A., Alur, R., Naik, M.: Understanding the effectiveness of large language models in detecting security vulnerabilities. arXiv preprint arXiv:2311.16169 (2023)
[45] Noever, D.: Can large language models find and fix vulnerable software? arXiv preprint arXiv:2308.10345 (2023)
[46] Shestov, A., Cheshkov, A., Levichev, R., Mussabayev, R., Zadorozhny, P., Maslov, E., Vadim, C., Bulychev, E.: Finetuning large language models for vulnerability detection. arXiv preprint arXiv:2401.17010 (2024)
[47] Steenhoek, B., Gao, H., Le, W.: Dataflow analysis-inspired deep learning for efficient vulnerability detection. In: Proceedings of the IEEE/ACM 46th International Conference on Software Engineering. ICSE '24. Association for Computing Machinery, New York, NY, USA (2024). https://doi.org/10.1145/3597503.3623345
[48] Sun, Y., Wu, D., Xue, Y., Liu, H., Ma, W., Zhang, L., Shi, M., Liu, Y.: Llm4vuln: A unified evaluation framework for decoupling and enhancing llms' vulnerability reasoning. arXiv preprint arXiv:2401.16185 (2024)
[49] Tang, W., Tang, M., Ban, M., Zhao, Z., Feng, M.: Csgvd: A deep learning approach combining sequence and graph embedding for source code vulnerability detection. J. Syst. Softw. 199(C) (2023) https://doi.org/10.1016/j.jss.2023.111623
[50] Thapa, C., Jang, S.I., Ahmed, M.E., Camtepe, S., Pieprzyk, J., Nepal, S.: Transformer-based language models for software vulnerability detection. In: Proceedings of the 38th Annual Computer Security Applications Conference. ACSAC '22, pp. 481–496. Association for Computing Machinery, New York, NY, USA (2022). https://doi.org/10.1145/3564625.3567985
[51] Zhang, C., Liu, H., Zeng, J., Yang, K., Li, Y., Li, H.: Prompt-enhanced software vulnerability detection using chatgpt. arXiv preprint arXiv:2308.12697 (2023)
[52] Tóth, R., Bisztray, T., Erdodi, L.: LLMs in Web-Development: Evaluating LLM-Generated PHP code unveiling vulnerabilities and limitations (2024)
[53] Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv (2023). http://arxiv.org/abs/2305.17493 Accessed 2023-06-27
[54] Chen, Y., Ding, Z., Chen, X., Wagner, D.: DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection. arXiv (2023). http://arxiv.org/abs/2304.00409 Accessed 2023-06-27
[55] Fan, J., Li, Y., Wang, S., Nguyen, T.N.: A C/C++ Code Vulnerability Dataset with Code Changes and CVE Summaries. In: Proceedings of the 17th International Conference on Mining Software Repositories. MSR '20, pp. 508–512. Association for Computing Machinery, New York, NY, USA (2020). https://doi.org/10.1145/3379597.3387501 Accessed 2023-06-27
[56] Russell, R.L., Kim, L.Y., Hamilton, L.H., Lazovich, T., Harer, J.A., Ozdemir, O., Ellingwood, P.M., McConley, M.W.: Automated Vulnerability Detection in Source Code Using Deep Representation Learning. In: 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 757–762. IEEE, Orlando, FL, USA (2018). https://doi.org/10.1109/ICMLA.2018.00120 https://api.semanticscholar.org/CorpusID:49670513
[57] Kim, L., Russell, R.: Draper VDISC Dataset - Vulnerability Detection in Source Code. Publisher: OSF (2018). https://osf.io/d45bw/ Accessed 2023-06-27
[58] Black, P.E.: A Software Assurance Reference Dataset: Thousands of Programs With Known Bugs. Journal of Research of the National Institute of Standards and Technology 123, 1–3 (2018) https://doi.org/10.6028/jres.123.005 Accessed 2023-06-27
[59] Jr, F.E.B., Black, P.E.: The Juliet 1.1 C/C++ and Java Test Suite. NIST 45(10), 88–90 (2012). Last Modified: 2021-10-12T11:10-04:00 Publisher: Frederick E. Boland Jr., Paul E. Black. Accessed 2023-05-28
[60] Zhou, Y., Liu, S., Siow, J., Du, X., Liu, Y.: Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks, pp. 10197–10207. Curran Associates Inc., Red Hook, NY, USA (2019)
[61] Chakraborty, S., Krishna, R., Ding, Y., Ray, B.: Deep Learning Based Vulnerability Detection: Are We There Yet? IEEE Transactions on Software Engineering 48(9), 3280–3296 (2022) https://doi.org/10.1109/TSE.2021.3087402
[62] Tihanyi, N., Bisztray, T., Jain, R., Amine Ferrag, M., C. Cordeiro, L., Mavroeidis, V.: FormAI Dataset: A Large Collection of AI-Generated C Programs and Their Vulnerability Classifications. IEEE Dataport (2023). https://doi.org/10.21227/vp9n-wv96
[63] Jain, R., Gervasoni, N., Ndhlovu, M., Rawat, S.: A code centric evaluation of c/c++ vulnerability datasets for deep learning based vulnerability detection techniques. In: Proceedings of the 16th Innovations in Software Engineering Conference, pp. 1–10. ACM, Prayagraj, India (2023)
[64] Cordeiro, L., Fischer, B., Marques-Silva, J.: SMT-Based Bounded Model Checking for Embedded ANSI-C Software. IEEE Transactions on Software Engineering 38(4), 957–974 (2012) https://doi.org/10.1109/TSE.2011.59
[65] D'Silva, V., Kroening, D., Weissenbacher, G.: A Survey of Automated Techniques for Formal Software Verification. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 27(7), 1165–1178 (2008) https://doi.org/10.1109/TCAD.2008.923410
[66] Morse, J., Cordeiro, L.C., Nicole, D.A., Fischer, B.: Context-bounded model checking of LTL properties for ANSI-C software. In: Barthe, G., Pardo, A., Schneider, G. (eds.) Software Engineering and Formal Methods - 9th International Conference, SEFM 2011, Montevideo, Uruguay, November 14-18, 2011. Proceedings. Lecture Notes in Computer Science, vol. 7041, pp. 302–317 (2011). Springer
[67] Wallace, D.R., Fujii, R.U.: Software verification and validation: an overview. IEEE Software 6(3), 10–17 (1989) https://doi.org/10.1109/52.28119 Accessed 2023-06-22
[68] Alshmrany, K.M., Aldughaim, M., Bhayat, A., Cordeiro, L.C.: Fusebmc: An energy-efficient test generator for finding security vulnerabilities in C programs. In: Loulergue, F., Wotawa, F. (eds.) Tests and Proofs - 15th International Conference, TAP 2021, Held as Part of STAF 2021, Virtual Event, June 21-22, 2021, Proceedings. Lecture Notes in Computer Science, vol. 12740, pp. 85–105 (2021). Springer
[69] Braberman, V.A., Bonomo-Braberman, F., Charalambous, Y., Colonna, J.G., Cordeiro, L.C., Freitas, R.: Tasks People Prompt: A Taxonomy of LLM Downstream Tasks in Software Verification and Falsification Approaches (2024)
[70] Hao, Y., Chen, W., Zhou, Z., Cui, W.: E&v: Prompting large language models to perform static analysis by pseudo-code execution and verification. arXiv preprint arXiv:2312.08477 (2023)
[71] Yang, A.Z., Le Goues, C., Martins, R., Hellendoorn, V.: Large language models for test-free fault localization. In: Proceedings of the 46th IEEE/ACM International Conference on Software Engineering, pp. 1–12 (2024)
[72] Quan, V.L.A., Phat, C.T., Van Nguyen, K., Duy, P.T., Pham, V.-H.: Xgv-bert: Leveraging contextualized language model and graph neural network for efficient software vulnerability detection. arXiv preprint arXiv:2309.14677 (2023)
[73] Sun, T., Allix, K., Kim, K., Zhou, X., Kim, D., Lo, D., Bissyandé, T.F., Klein, J.: Dexbert: Effective, task-agnostic and fine-grained representation learning of android bytecode. IEEE Transactions on Software Engineering 49(10), 4691–4706 (2023) https://doi.org/10.1109/TSE.2023.3310874
[74] Tian, H., Liu, K., Li, Y., Kaboré, A.K., Koyuncu, A., Habib, A., Li, L., Wen, J., Klein, J., Bissyandé, T.F.: The best of both worlds: Combining learned embeddings with engineered features for accurate prediction of correct patches. ACM Trans. Softw. Eng. Methodol. 32(4) (2023) https://doi.org/10.1145/3576039
[75] Wang, W., Wang, Y., Joty, S., Hoi, S.C.H.: Rap-gen: Retrieval-augmented patch generation with codet5 for automatic program repair. In: Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. ESEC/FSE 2023, pp. 146–158. Association for Computing Machinery, New York, NY, USA (2023). https://doi.org/10.1145/3611643.3616256
[76] Zhang, Y., Jin, Z., Xing, Y., Li, G.: Steam: simulating the interactive behavior of programmers for automatic bug fixing. arXiv preprint arXiv:2308.14460 (2023)
[77] Wu, Y., Li, Z., Zhang, J.M., Papadakis, M., Harman, M., Liu, Y.: Large language models in fault localisation. arXiv preprint arXiv:2308.15276 (2023)
[78] Mohajer, M.M., Aleithan, R., Harzevili, N.S., Wei, M., Belle, A.B., Pham, H.V., Wang, S.: Skipanalyzer: An embodied agent for code analysis with large language models. arXiv preprint arXiv:2310.18532 (2023)
[79] Li, T.-O., Zong, W., Wang, Y., Tian, H., Wang, Y., Cheung, S.-C.: Finding Failure-Inducing Test Cases with ChatGPT (2023)
[80] Pearce, H., Tan, B., Ahmad, B., Karri, R., Dolan-Gavitt, B.: Examining Zero-Shot Vulnerability Repair with Large Language Models. arXiv (2022). http://arxiv.org/abs/2112.02125 Accessed 2023-06-10
[81] Cao, J., Li, M., Wen, M., Cheung, S.-c.: A study on prompt design, advantages and limitations of chatgpt for deep learning program repair. arXiv preprint arXiv:2304.08191 (2023)
[82] Deligiannis, P., Lal, A., Mehrotra, N., Rastogi, A.: Fixing rust compilation errors using llms. arXiv preprint arXiv:2308.05177 (2023)
[83] Fan, Z., Gao, X., Mirchev, M., Roychoudhury, A., Tan, S.H.: Automated repair of programs from large language models. In: 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pp. 1469–1481 (2023). IEEE
[84] Huang, Q., Zhu, J., Xing, Z., Jin, H., Wang, C., Xu, X.: A chain of ai-based solutions for resolving fqns and fixing syntax errors in partial code. arXiv preprint arXiv:2306.11981 (2023)
[85] Islam, N.T., Najafirad, P.: Code security vulnerability repair using reinforcement learning with large language models. arXiv preprint arXiv:2401.07031 (2024)
[86] Jin, M., Shahriar, S., Tufano, M., Shi, X., Lu, S., Sundaresan, N., Svyatkovskiy, A.: InferFix: End-to-End Program Repair with LLMs (2023)
[87] Lajkó, M., Csuvik, V., Vidács, L.: Towards javascript program repair with generative pre-trained transformer (gpt-2). In: 2022 IEEE/ACM International Workshop on Automated Program Repair (APR), pp. 61–68 (2022). https://doi.org/10.1145/3524459.3527350
[88] Paul, R., Mohib Hossain, M., Hasan, M., Iqbal, A.: Automated program repair based on code review: How do pre-trained transformer models perform? arXiv e-prints, 2304 (2023)
[89] Peng, Y., Gao, S., Gao, C., Huo, Y., Lyu, M.: Domain knowledge matters: Improving prompts with fix templates for repairing python type errors. In: Proceedings of the IEEE/ACM 46th International Conference on Software Engineering. ICSE '24. Association for Computing Machinery, New York, NY, USA (2024). https://doi.org/10.1145/3597503.3608132
[90] Tian, H., Liu, K., Kaboré, A.K., Koyuncu, A., Li, L., Klein, J., Bissyandé, T.F.: Evaluating representation learning of code changes for predicting patch correctness in program repair. In: Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering. ASE '20, pp. 981–992. Association for Computing Machinery, New York, NY, USA (2021). https://doi.org/10.1145/3324884.3416532
[91] Wei, Y., Xia, C.S., Zhang, L.: Copiloting the copilots: Fusing large language models with completion engines for automated program repair. In: Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. ESEC/FSE 2023, pp. 172–184. Association for Computing Machinery, New York, NY, USA (2023). https://doi.org/10.1145/3611643.3616271
[92] Widjojo, P., Treude, C.: Addressing compiler errors: Stack overflow or large language models? arXiv preprint arXiv:2307.10793 (2023)
[93] Xia, C.S., Wei, Y., Zhang, L.: Practical program repair in the era of large pre-trained language models. arXiv preprint arXiv:2210.14179 (2022)
[94] Xia, C.S., Zhang, L.: Keep the conversation going: Fixing 162 out of 337 bugs for $0.42 each using chatgpt. arXiv preprint arXiv:2304.00385 (2023)
[95] Zhang, Q., Fang, C., Sun, W., Liu, Y., He, T., Hao, X., Chen, Z.: Appt: Boosting automated patch correctness prediction via fine-tuning pre-trained models. IEEE Transactions on Software Engineering 50(3), 474–494 (2024) https://doi.org/10.1109/TSE.2024.3354969
[96] Zhang, Q., Fang, C., Zhang, T., Yu, B., Sun, W., Chen, Z.: Gamma: Revisiting template-based automated program repair via mask prediction. In: 2023 38th IEEE/ACM International Conference on Automated Software Engineering (ASE), pp. 535–547. IEEE Computer Society, Los Alamitos, CA, USA (2023). https://doi.org/10.1109/ASE56229.2023.00063
[97] Zhang, Y., Li, G., Jin, Z., Xing, Y.: Neural program repair with program dependence analysis and effective filter mechanism. arXiv preprint arXiv:2305.09315 (2023)
[98] Wu, Y., Jiang, N., Pham, H.V., Lutellier, T., Davis, J., Tan, L., Babkin, P., Shah, S.: How effective are neural networks for fixing security vulnerabilities. In: Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 1282–1294 (2023)
[99] Aho, A.V., Lam, M.S., Sethi, R., Ullman, J.D.: Compilers: Principles, Techniques, And Tools, 2nd edn. Addison-Wesley Longman Publishing Co., Inc., Boston, MA (2006)
[100] Gadelha, M.Y.R., Steffinlongo, E., Cordeiro, L.C., Fischer, B., Nicole, D.A.: Smt-based refutation of spurious bug reports in the clang static analyzer. In: Atlee, J.M., Bultan, T., Whittle, J. (eds.) Proceedings of the 41st International Conference on Software Engineering, pp. 11–14. IEEE / ACM, Montreal, QC, Canada (2019). https://doi.org/10.1109/ICSE-Companion.2019.00026
[101] Sadowski, C., Yi, J.: How developers use data race detection tools. In: Proceedings of the 5th Workshop on Evaluation and Usability of Programming Languages and Tools, pp. 43–51. ACM, Portland, USA (2014)
[102] White, M., Tufano, M., Vendome, C., Poshyvanyk, D.: Deep learning code fragments for code clone detection. In: Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering, pp. 87–98. Association for Computing Machinery, New York, USA (2016)
[103] Zhao, G., Huang, J.: Deepsim: deep learning code functional similarity. In: Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pp. 141–151. ACM, Lake Buena Vista, USA (2018)
[104] Cordeiro, L.C., Kroening, D., Schrammel, P.: JBMC: bounded model checking for java bytecode - (competition contribution). In: Tools and Algorithms for the Construction and Analysis of Systems (TACAS). LNCS, vol. 11429, pp. 219–223 (2019). Springer
[105] Menezes, R., Moura, D., Cavalcante, H., Freitas, R., Cordeiro, L.C.: Esbmc-jimple: verifying kotlin programs via jimple intermediate representation. In: Ryu, S., Smaragdakis, Y. (eds.) ISSTA '22: 31st ACM SIGSOFT International Symposium on Software Testing and Analysis, Virtual Event, South Korea, July 18-22, 2022, pp. 777–780 (2022). ACM
[106] Gadelha, M.R., Monteiro, F.R., Morse, J., Cordeiro, L.C., Fischer, B., Nicole, D.A.: Esbmc 5.0: an industrial-strength c model checker. In: Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. ASE '18, pp. 888–891. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3238147.3240481
[107] Beyer, D.: Competition on software verification and witness validation: Sv-comp 2023. In: Sankaranarayanan, S., Sharygina, N. (eds.) Tools and Algorithms for the Construction and Analysis of Systems, pp. 495–522. Springer, Cham (2023)
[108] Sandoval, G., Pearce, H., Nys, T., Karri, R., Garg, S., Dolan-Gavitt, B.: Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants. arXiv (2023)
[109] Wallace, D.R., Watson, A.H., McCabe, T.J.: Structured testing: a testing methodology using the cyclomatic complexity metric. Technical Report NIST SP 500-235, National Institute of Standards and Technology, Gaithersburg, MD (1996). https://doi.org/10.6028/NIST.SP.500-235 https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication500-235.pdf Accessed 2024-04-19
[110] Mikejo5000: Code metrics - Cyclomatic complexity - Visual Studio (Windows) (2024). https://learn.microsoft.com/en-us/visualstudio/code-quality/code-metrics-cyclomatic-complexity?view=vs-2022 Accessed 2024-04-18
[111] Kroening, D., Tautschnig, M.: Cbmc – c bounded model checker: (competition contribution). In: Tools and Algorithms for the Construction and Analysis of Systems: TACAS 2014, pp. 389–391. Springer, Grenoble, France (2014)
2404.18496

AI-powered Code Review with LLMs: Early Results

Zeeshan Rasheed1,*,†, Malik Abdul Sami2,†, Muhammad Waseem3,†, Kai-Kristian Kemell4,†, Xiaofeng Wang5,†, Anh Nguyen6,†, Kari Systä7,† and Pekka Abrahamsson8,†

1 Faculty of Information Technology and Communication Sciences, Tampere University, Finland
2 Faculty of Information Technology, University of Jyväskylä, Finland
3 Faculty of Mathematics and Natural Science, University of Helsinki, Finland
4 Faculty of Engineering, Free University of Bozen-Bolzano, Italy
5 Department of Business and IT, University of South Eastern Norway

* Corresponding author. † These authors contributed equally.
zeeshan.rasheed@tuni.fi (Z. Rasheed); malik.sami@tuni.fi (M.A. Sami); muhammad.m.waseem@jyu.fi (M. Waseem); kai-kristian.kemell@helsinki.fi (K. Kemell); xiaofeng.wang@unibz.it (X. Wang); Anh.Nguyen.duc@usn.no (A. Nguyen); kari.systa@tuni.fi (K. Systä); pekka.abrahamsson@tuni.fi (P. Abrahamsson)

Abstract

In this paper, we present a novel approach to improving software quality and efficiency through a Large Language Model (LLM)-based model designed to review code and identify potential issues. Our proposed LLM-based AI agent model is trained on large code repositories. This training includes code reviews, bug reports, and documentation of best practices. It aims to detect code smells, identify potential bugs, provide suggestions for improvement, and optimize the code. Unlike traditional static code analysis tools, our LLM-based AI agent has the ability to predict future potential risks in the code. This supports a dual goal of improving code quality and enhancing developer education by encouraging a deeper understanding of best practices and efficient coding techniques. Furthermore, we explore the model's effectiveness in suggesting improvements that significantly reduce post-release bugs and enhance code review processes, as evidenced by an analysis of developer sentiment towards LLM feedback. For future work, we aim to assess the accuracy and efficiency of LLM-generated documentation updates in comparison to manual methods. This will involve an empirical study focusing on manually conducted code reviews to identify code smells and bugs, alongside an evaluation of best practice documentation, augmented by insights from developer discussions and code reviews. Our goal is to not only refine the accuracy of our LLM-based tool but also to underscore its potential in streamlining the software development lifecycle through proactive code improvement and education.

Keywords: Generative AI, Large Language Model, Software Engineering, OpenAI, Artificial Intelligence, Code Reviews

1. Introduction

Large Language Models (LLMs) have emerged as a transformative force across various domains, offering unique capabilities in understanding, generating, and analyzing text [1], [2]. These models, built on large datasets and advanced neural network architectures, demonstrate an ability to understand context and provide insights that were previously impossible [3, 4, 5, 6]. The integration of LLMs into software development has led to significant advancements and intriguing possibilities, such as changing how code is written, reviewed, and optimized [7]. By utilizing LLMs, developers can tap into a deep well of coding knowledge and best practices, potentially elevating software quality to new heights [8], [9], [10].

Despite the vast capabilities of LLMs, their application in the domain of code review and optimization remains underexplored [11].
Code review is a critical phase in the software development lifecycle, aimed at identifying bugs, ensuring adherence to coding standards, and fostering knowledge sharing among developers [12]. Traditional code review processes and static analysis tools often lack the depth to provide actionable feedback beyond the detection of syntax errors or known patterns of bugs [13]. This gap highlights a significant challenge: there is currently no LLM-based model specifically designed to enhance code review by identifying issues, suggesting optimizations, and educating developers on best practices.

Addressing this challenge, our paper introduces a novel LLM-based AI agent model specifically tailored for the software development context. This model is trained on a vast corpus of code repositories, including code reviews, bug reports, and best practices documentation. Unlike conventional tools, it identifies code smells, potential bugs, and deviations from coding standards, and, crucially, it provides actionable suggestions for improvement. These suggestions are aimed at optimizing code and introducing alternative approaches, thereby facilitating a dual objective: enhancing code quality and promoting developer education. Our proposed model is a significant departure from traditional static analysis tools, offering a proactive approach to code improvement and a deeper engagement with the principles of efficient and effective coding practices.

Looking forward, we outline a trajectory for future research aimed at evaluating the accuracy and efficiency of documentation updates generated by our LLM-based model compared to
manual methods. This will involve an empirical study that scrutinizes manually conducted code reviews to pinpoint code smells and bug reports, supplemented by an analysis of best practice documentation and developer discussions. Through this work, we aim not only to validate the effectiveness of our model but also to highlight its potential in refining software development processes, ultimately facilitating a more streamlined, knowledgeable, and efficient approach to building high-quality software. Our contributions can be summarized as follows:

• Developed a novel LLM-based AI model for enhanced code review, providing actionable improvements.
• Distinguished our approach from traditional static analysis tools by focusing on proactive code optimization.
• Plan to validate our model's effectiveness in future research, focusing on documentation updates and developer education.

The rest of the paper is organized as follows. We review related work in Section 2 and describe the study methodology in Section 3. The initial results of this study are presented in Section 4. We present our future goals in Section 5, and the study is concluded in Section 6.

2. Related Work

Code review is an important part of the software development lifecycle and demands a significant amount of reviewers' effort and time [14], [15]. The focus among researchers on automating various aspects of the code review process is increasing, covering areas such as suggesting appropriate reviewers [16], [17], predicting locations for comments [18], [19], recommending review comments [20], and enhancing code quality [21].

Thongtanunam et al. [16] discovered that 30% of code reviews face challenges with assigning the correct reviewers. To address this issue, they introduced RevFinder, a tool that recommends suitable code reviewers based on file locations. In response to the same challenge, Zanjani et al. [17] introduced cHRev, a platform that recommends reviewers for new code modifications by utilizing historical data from past code reviews. Their work concentrates on refining the initial phases of the code review process, whereas other scholars are committed to resolving the intricate difficulties that arise during various stages of code review.

Shi et al. [18] introduced the DACE framework, which combines CNN and LSTM technologies, to forecast whether a section of code change will receive approval from reviewers. Hellendoorn et al. [19] applied the Transformer architecture to address this challenge. Additionally, they explored the relationships between various code sections within a pull request by encoding each section and calculating attention scores among them to integrate the data. Li et al. [22] approached automatic code review from a multi-instance learning perspective, treating each code section as an instance with the goal of predicting the acceptance of a pull request. Focusing on aspects linked to review comments, Siow et al. [23] suggest code reviews through a retrieval approach. They introduce CORE, an attentional model based on LSTM that aims to understand the semantic details in both the source code and its reviews by using multi-level embeddings. On a different note, Tufano et al. [21] employ deep learning strategies to automate another aspect of code review. They developed a Transformer-based model designed to modify contributors' code in order to meet the specifications outlined in review comments.

Balachandran [24] and Singh et al. [25] advocate for the deployment of static analysis instruments to automatically identify violations of coding standards and prevalent errors. Regarding the automation of particular tasks in code review, the authors have put forward methods to enhance the allocation of reviewers. Through an analysis of tools and methodologies that facilitate code review, Turzo et al.
[26] determined that widely used code review platforms (such as Gerrit, CodeFlow, and Phabricator) generally provide similar fundamental features, with minimal automation support for tasks.

Concluding the review of related work, it is evident that there is a notable absence of a comprehensive LLM-based tool that can conduct detailed code reviews and document best practices, in addition to detecting code smells, potential bugs, and violations of coding standards. To address this gap, we propose an LLM-based AI agent model designed for autonomous code review that not only identifies bugs but also offers suggestions and recommendations. Additionally, our model facilitates code optimization, further enhancing the quality and efficiency of the software development process.

3. Research Method

This section outlines the methodology adopted in our study to explore the deployment and efficacy of an LLM-based AI agent within software development, particularly focusing on the code review process. Our approach encompasses the design, development, and evaluation of an LLM-based AI agent model aimed at identifying potential issues in code and providing actionable recommendations. By dividing the methodology into specific components aligned with our research question, we ensure a structured and comprehensive examination of how LLM technology can be leveraged to enhance software quality and development practices.

RQ1. How can an LLM-based AI agent effectively assist in code reviews by identifying potential issues and offering actionable recommendations?

This question emerged from the recognition of the limitations inherent in traditional code review processes and tools, which often fail to provide deep insights or actionable feedback for optimizing code and following best practices. Our research seeks to address this gap by exploring the potential of LLM-based agents to significantly improve the code review process, thus contributing to the development of higher-quality software. By utilizing the advanced capabilities of LLMs to understand context and provide suggestions, we aim to transform the code review process. This approach will enhance the efficiency and effectiveness of code reviews.
Ultimately, our work aspires to establish a new standard in software engineering, where AI-driven reviews become essential in developing strong, innovative, and user-focused solutions.

3.1. LLM-assisted Code Review (RQ1)

The methodology for our proposed LLM-based AI agent model, designed to assist in code reviews, revolves around four specialized agents: the Code Review Agent, Bug Report Agent, Code Smell Agent, and Code Optimization Agent. Each agent is tasked with a distinct aspect of the code review process, utilizing LLM technology to analyze code repositories, identify issues, and suggest improvements.

To train these agents, we utilize a comprehensive dataset of code repositories, including historical code reviews, bug reports, and documentation of best practices. This training equips each agent with a complete understanding of both common and complex issues encountered in software development, enabling them to provide detailed, actionable feedback to developers. We utilized the GitHub REST API to access public repository data, including code, commits, issues, and pull requests. Through this LLM-assisted code review process, our model aims to bridge the gap between traditional static analysis tools and the dynamic, evolving needs of modern software development, offering a pathway to significantly improved software quality and developer proficiency. Below we describe each agent.

Code Review Agent: The Code Review Agent's primary function is to review code and extract potential issues within it. We trained this agent by utilizing GitHub code repositories, which provided a vast and diverse dataset encompassing a wide array of programming languages and coding styles. Access to these repositories was secured through GitHub's APIs, enabling us to systematically download and process the code. This process allowed us to train the Code Review Agent to understand various coding patterns and practices and also to identify deviations and potential errors effectively. The large dataset derived from GitHub ensured that our model was exposed to a broad spectrum of real-world coding scenarios, significantly enhancing its ability to review and analyze code with high precision.

Bug Report Agent: Specializing in identifying potential bugs within the code, this agent analyzes patterns and anomalies that have historically been associated with software bugs. Utilizing the wealth of GitHub code repositories as a foundational dataset, this agent was trained to analyze code across various programming paradigms and environments. It utilizes advanced LLM techniques to detect anomalies and potential bugs in the code and also utilizes natural language processing techniques to describe these issues clearly and concisely for developers. Access to GitHub's large codebases through its APIs facilitated the extraction of a wide-ranging dataset, ensuring the agent's exposure to both common and obscure coding bugs. This comprehensive training approach empowers the Bug Report Agent to accurately identify bugs and communicate the findings effectively, thereby aiding developers in swiftly addressing and rectifying coding issues.

Code Smell Agent: Dedicated to detecting code smells, i.e., symptoms of deeper problems in code design and implementation, this agent uses its training on vast code repositories to recognize anti-patterns and suggest refactorings that improve code maintainability and performance. The Code Smell Agent is specifically trained to detect and articulate code smells within a software codebase in terms understandable to developers. The goal was to equip the Code Smell Agent with the ability to discern subtle, non-obvious patterns indicative of code smells: practices that may not be outright errors but could lead to maintenance challenges or scalability issues. To achieve this, we mainly focused on segments of repositories known for exemplifying both exemplary and problematic coding practices.
This targeted approach in training allows the Code Smell Agent not only to identify these nuanced code smells but also to generate detailed, actionable feedback aimed at guiding developers towards enhancing code quality and design. Through this specialized training, the agent was imbued with a nuanced understanding of code quality, directly addressing the unique challenges presented by code smells in software development.

Code Optimization Agent: In the continuation of our exploration into LLM-based AI agents for software development, we introduced the Code Optimization Agent. This agent mainly provides recommendations for improving code and also actively optimizes the given code. The core training of this agent was centered around a curated selection of GitHub code repositories, chosen for their rich examples of both efficient and inefficient coding practices. By analyzing a broad spectrum of code, from highly optimized snippets to those needing refinement, the Code Optimization Agent was trained to recognize patterns that contribute to or detract from code efficiency and performance.

Through the use of GitHub's APIs, we accessed an extensive range of codebases, focusing specifically on parts of the repositories that demonstrated a wide variety of coding optimizations and common inefficiencies. This allowed the Code Optimization Agent to learn not just the theory behind code optimization, but also the practical application of these principles across different contexts and programming languages. As a result, the agent is equipped to assess code comprehensively, suggest enhancements, and automatically apply optimizations that improve code execution speed, reduce memory usage, and enhance overall code maintainability.

The development of the Code Optimization Agent signifies a significant advancement in automating the code refinement process. It embodies our commitment to utilizing AI to streamline software development workflows, empowering developers to achieve higher code quality with less manual effort. This agent stands as a testament to the potential of AI in elevating SE practices by providing deep, actionable insights and automating complex optimization tasks.
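All four agents draw their training corpora from public GitHub repositories accessed through the REST API, as described above. The sketch below illustrates one plausible way such data could be collected with Python's requests library; the repository name, pagination depth, and token handling are illustrative assumptions rather than the exact pipeline used in our study.

```python
import os
import requests

API = "https://api.github.com"
# A personal access token raises the unauthenticated rate limit (assumed to be set).
HEADERS = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {os.environ.get('GITHUB_TOKEN', '')}",
}

def fetch(endpoint: str, params: dict | None = None) -> list:
    """Fetch one page of results from a GitHub REST endpoint."""
    resp = requests.get(f"{API}{endpoint}", headers=HEADERS, params=params or {})
    resp.raise_for_status()
    return resp.json()

def collect_repo_data(owner: str, repo: str) -> dict:
    """Gather commits, issues, and pull requests for one repository."""
    base = f"/repos/{owner}/{repo}"
    return {
        "commits": fetch(f"{base}/commits", {"per_page": 100}),
        "issues": fetch(f"{base}/issues", {"state": "all", "per_page": 100}),
        "pulls": fetch(f"{base}/pulls", {"state": "all", "per_page": 100}),
    }

if __name__ == "__main__":
    # 'octocat/Hello-World' is a placeholder; the projects we studied differ.
    data = collect_repo_data("octocat", "Hello-World")
    print({k: len(v) for k, v in data.items()})
```

In practice a full crawl would follow pagination links and respect rate-limit headers; the endpoints shown here are standard GitHub REST API routes.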
4. Preliminary Results

In this section, we present the study results of the proposed LLM-based multi-agent model for autonomous code review, which detects bugs and code smells and provides suggestions to optimize code. Our findings indicate that the proposed model demonstrated a strong capability in identifying a range of issues, from minor bugs to significant code smells and inefficiencies, across different programming languages and AI application domains. Below, we present the results of our LLM-based proposed model in Section 4.1, specifically reporting the outcomes of RQ1.

4.1. LLM-Based AI Agent Results (RQ1)

The evaluation of our LLM-based AI agent model, designed to enhance the code review process, yields compelling results that address our primary research question. The effectiveness of our proposed LLM-based AI agent model was evaluated by selecting 10 AI-based projects from GitHub. The projects selected represent a broad spectrum of AI applications, including machine learning frameworks, natural language processing tools, computer vision libraries, and AI-based web applications. Each project was analyzed thoroughly by our AI agent, and the outcomes were meticulously recorded. The results presented below offer insights into the performance of our tool across these varied projects.

For instance, in the project DeepDive, a text mining tool designed to extract value from massive text datasets, our model identified a critical bug where certain Unicode characters caused the parsing process to fail. The recommendation was to incorporate a robust Unicode handling mechanism to ensure smooth processing of diverse datasets. Additionally, the model suggested refactoring the monolithic parsing function into smaller, more manageable components to enhance maintainability.

In the project NeuroStartUp, a neural network framework aimed at simplifying the deployment of machine learning models, our model flagged several instances of code smell, primarily in the form of hard-coded parameters within the model training functions. This not only made the code less flexible but also more difficult to optimize for different datasets. Our agent recommended abstracting these parameters into configurable options, allowing users to easily adapt the framework to their needs.

Another project, VisionQuest, which focuses on applying computer vision techniques to drone-captured imagery for environmental monitoring, had inefficiencies in its image processing pipeline. The agent pointed out that the use of outdated image segmentation algorithms led to suboptimal performance. By suggesting the adoption of more modern, efficient algorithms, the model provided a pathway to significantly improve processing speed and accuracy.

In the realm of natural language processing, the project LinguaKit, aimed at providing comprehensive linguistic analysis tools, was found to have a sub-optimal approach to handling large text corpora. Our model identified bottlenecks in data processing and recommended leveraging parallel processing techniques and more efficient data structures to enhance throughput and reduce memory usage.

The project AIFriendly, which offers an interface for non-technical users to leverage AI models, suffered from a lack of error handling that could leave users puzzled by uninformative failure messages. The agent suggested implementing a detailed error reporting system that could guide users in resolving issues or adjusting their input to better fit the model requirements.

Furthermore, QuantumLeap, a project exploring quantum computing algorithms for optimization problems, displayed significant code smells in its use of global variables and unclear function names, hindering the project's scalability and readability.
Our agent recommended a thorough refactoring to encapsulate state more effectively and adopt a clear naming convention for functions and variables.

BioNexus, a project developing machine learning models for predicting protein structures, was using an inefficient model training loop that led to unnecessary computation time. Our agent advised optimizing the training process by incorporating more efficient batch processing and utilizing GPU acceleration where possible.

For EcoSim, a simulation tool for ecosystem management, our agent unearthed inefficiencies in its simulation algorithms that could be streamlined. The suggested optimizations included the use of vectorized operations over loops for calculations (a minimal illustration is sketched at the end of this subsection), significantly speeding up simulation times.

In RoboTutor, an educational AI that adapts to students' learning styles, the model detected overly rigid decision trees that could not effectively handle edge cases in student responses. It recommended the integration of machine learning techniques to dynamically adjust the learning path based on student interactions.

Lastly, SafeRoute, an AI-driven application for generating safe walking routes based on historical crime data, had a problem with its data caching mechanism that led to outdated information being served. The agent's suggestion was to implement a more dynamic caching strategy that updates more frequently and reliably, ensuring users have access to the most current information.

These detailed findings and recommendations underscore the depth of analysis possible with our LLM-based AI agent. By pinpointing specific issues and offering tailored advice, our model demonstrates its potential as an invaluable tool for developers seeking to push the boundaries of AI innovation.
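To make the EcoSim recommendation concrete, the following minimal sketch contrasts a per-cell Python loop with an equivalent vectorized NumPy computation; the logistic growth update is a hypothetical stand-in for the tool's actual simulation step, not code from the project.

```python
import numpy as np

def step_loop(pop: np.ndarray, rate: float) -> np.ndarray:
    """Logistic growth update computed cell by cell (slow)."""
    out = np.empty_like(pop)
    for i in range(pop.shape[0]):
        for j in range(pop.shape[1]):
            out[i, j] = pop[i, j] + rate * pop[i, j] * (1.0 - pop[i, j])
    return out

def step_vectorized(pop: np.ndarray, rate: float) -> np.ndarray:
    """The same update expressed as one array operation (fast)."""
    return pop + rate * pop * (1.0 - pop)

grid = np.random.default_rng(0).random((512, 512))
assert np.allclose(step_loop(grid, 0.1), step_vectorized(grid, 0.1))
```

The vectorized form delegates the inner loops to NumPy's compiled routines, which is the kind of change the agent proposed.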
5. Future Work

As we advance the integration of LLMs into the software development lifecycle, our research has laid a foundational step towards enhancing the efficiency and quality of software through AI-assisted code reviews. Looking ahead, our trajectory for future research encompasses several key areas aimed at further validating and expanding the capabilities of our LLM-based model.

A primary focus will be on evaluating the accuracy and efficiency of the outcomes generated by our model in comparison to traditional manual methods. An empirical study will be conducted to meticulously analyze the developer discussions on code reviews for the above-mentioned projects, specifically targeting the identification of code smells and the documentation of bug reports. This study will extend to include a comprehensive examination of best practice documentation and insights gleaned from developer discussions.

The objective of this empirical research is twofold: first, to quantitatively and qualitatively assess the effectiveness of our LLM-based model in generating documentation that is accurate and more efficiently produced than manual efforts; second, to explore the model's potential in contributing to a more streamlined software development process. By automating aspects of documentation and code review, we anticipate a reduction in the time developers spend on these tasks, allowing for a greater focus on core development activities.

Additionally, we aim to explore the educational impact of our model on software developers. By providing actionable feedback and suggestions for code improvement, there is potential for significant advancement in developer knowledge and adherence to best practices. Understanding the extent to which our model can contribute to developer education will be a key aspect of our future investigations.

Ultimately, our goal is to validate the effectiveness of our LLM-based tool and to illuminate its potential in refining software development processes. By utilizing the capabilities of LLMs, we envision a future where software development is more efficient, knowledge-driven, and focused on producing high-quality software. Our continued research will seek to bring this vision closer to reality, paving the way for innovations that could redefine the standards of software development.

6. Conclusions

This paper introduced a novel approach to software development and quality assurance through the deployment of an LLM-based AI agent framework, specifically designed to enhance the code review process. By integrating four specialized agents into software development, we demonstrated a substantial improvement in identifying potential issues in code and providing actionable recommendations for optimization.

Our findings indicate that LLM-based AI agents can significantly augment traditional code review processes, offering a dual benefit: enhancing code quality and facilitating developer education. The agents' ability to detect a wide array of code anomalies, suggest meaningful optimizations, and promote best coding practices presents a notable advancement in the automation of software quality assurance. Furthermore, the positive feedback from developers regarding the actionable recommendations provided by our model underscores the potential of LLM technology to serve as a tool for error detection and also as a valuable resource for learning and development.

The implications of our research extend beyond immediate enhancements in code review efficiency and effectiveness. By showcasing the capability of LLM-based AI agents to improve software quality and developer knowledge, we open the door to future innovations in software development processes. Our planned future work, focusing on the comparative analysis of LLM-generated documentation against manual methods, aims to further explore the potential of our model to streamline software development practices.

In conclusion, the successful application of LLM technology in this context signals a promising new direction for software development.
It highlights the transformative potential of AI and machine learning in enhancing the technical aspects of development, such as code quality and efficiency, and also in elevating the collective knowledge and skill set of the developer community. As we continue to explore and refine these technologies, we anticipate their integration into various aspects of software development, ultimately leading to a more efficient, knowledgeable, and quality-driven approach to creating software.

7. Acknowledgment

We express our sincere gratitude to Business Finland for their generous support and funding of our project. Their commitment to fostering innovation and supporting research initiatives has been instrumental in the success of our work.

References

[1] C. Treude, Navigating complexity in software engineering: A prototype for comparing GPT-n solutions, arXiv preprint arXiv:2301.12169 (2023).
[2] Z. Rasheed, M. Waseem, K. Systä, P. Abrahamsson, Large language model evaluation via multi AI agents: Preliminary results, in: ICLR 2024 Workshop on Large Language Model (LLM) Agents, 2024.
[3] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever, et al., Improving language understanding by generative pre-training (2018).
[4] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever, et al., Language models are unsupervised multitask learners, OpenAI blog 1 (2019) 9.
[5] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al., Training language models to follow instructions with human feedback, Advances in Neural Information Processing Systems 35 (2022) 27730–27744.
[6] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., Language models are few-shot learners, Advances in Neural Information Processing Systems 33 (2020) 1877–1901.
[7] J. Lu, L. Yu, X. Li, L. Yang, C. Zuo, LLaMA-Reviewer: Advancing code review automation with large language models through parameter-efficient fine-tuning, in: 2023 IEEE 34th
International Symposium on Software Reliability Engineering (ISSRE), IEEE, 2023, pp. 647–658.
[8] A. Fan, B. Gokkaya, M. Harman, M. Lyubarskiy, S. Sengupta, S. Yoo, J. M. Zhang, Large language models for software engineering: Survey and open problems, arXiv preprint arXiv:2310.03533 (2023).
[9] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al., Evaluating large language models trained on code, arXiv preprint arXiv:2107.03374 (2021).
[10] Z. Rasheed, M. Waseem, M. Saari, K. Systä, P. Abrahamsson, Codepori: Large scale model for autonomous software development by using multi-agents, arXiv preprint arXiv:2402.01411 (2024).
[11] F. F. Xu, U. Alon, G. Neubig, V. J. Hellendoorn, A systematic evaluation of large language models of code, in: Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, 2022, pp. 1–10.
[12] Z. Li, S. Lu, D. Guo, N. Duan, S. Jannu, G. Jenks, D. Majumder, J. Green, A. Svyatkovskiy, S. Fu, et al., CodeReviewer: Pre-training for automating code review activities, arXiv preprint arXiv:2203.09095 (2022).
[13] R. Tufano, S. Masiero, A. Mastropaolo, L. Pascarella, D. Poshyvanyk, G. Bavota, Using pre-trained models to boost code review automation, in: Proceedings of the 44th International Conference on Software Engineering, 2022, pp. 2291–2302.
[14] A. Bosu, J. C. Carver, Impact of peer code review on peer impression formation: A survey, in: 2013 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, IEEE, 2013, pp. 133–142.
[15] C. Sadowski, E. Söderberg, L. Church, M. Sipko, A. Bacchelli, Modern code review: a case study at Google, in: Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice, 2018, pp. 181–190.
[16] P. Thongtanunam, C. Tantithamthavorn, R. G. Kula, N. Yoshida, H. Iida, K.-i. Matsumoto, Who should review my code? A file location-based code-reviewer recommendation approach for modern code review, in: 2015 IEEE 22nd International Conference on Software Analysis, Evolution, and Reengineering (SANER), IEEE, 2015, pp. 141–150.
[17] M. B. Zanjani, H. Kagdi, C. Bird, Automatically recommending peer reviewers in modern code review, IEEE Transactions on Software Engineering 42 (2015) 530–543.
[18] S.-T. Shi, M. Li, D. Lo, F. Thung, X. Huo, Automatic code review by learning the revision of source code, in: Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 2019, pp. 4910–4917.
[19] V. J. Hellendoorn, J. Tsay, M. Mukherjee, M. Hirzel, Towards automating code review at scale, in: Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2021, pp. 1479–1482.
[20] A. Gupta, N. Sundaresan, Intelligent code reviews using deep learning, in: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'18) Deep Learning Day, 2018.
[21] R. Tufano, L. Pascarella, M. Tufano, D. Poshyvanyk, G. Bavota, Towards automating code review activities, in: 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), IEEE, 2021, pp. 163–174.
[22] H.-Y. Li, S.-T. Shi, F. Thung, X. Huo, B. Xu, M. Li, D. Lo, DeepReview: automatic code review using deep multi-instance learning, in: Advances in Knowledge Discovery and Data Mining: 23rd Pacific-Asia Conference, PAKDD 2019, Macau, China, April 14-17, 2019, Proceedings, Part II 23, Springer, 2019, pp. 318–330.
[23] J. K. Siow, C. Gao, L. Fan, S. Chen, Y. Liu, CORE: Automating review recommendation for code changes, in: 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER), IEEE, 2020, pp. 284–295.
[24] V.
Balachandran, Reducing human effort and improving quality in peer code reviews using automatic static analysis and reviewer recommendation, in: 2013 35th International Conference on Software Engineering (ICSE), IEEE, 2013, pp. 931–940.
[25] D. Singh, V. R. Sekar, K. T. Stolee, B. Johnson, Evaluating how static analysis tools can reduce code review effort, in: 2017 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), IEEE, 2017, pp. 101–105.
[26] A. K. Turzo, A. Bosu, What makes a code review useful to OpenDev developers? An empirical investigation, Empirical Software Engineering 29 (2024) 6.
2405.01202 DLAP: A Deep Learning Augmented Large Language Model Prompting Framework for Software Vulnerability Detection

Yanjing Yang (a), Xin Zhou (a), Runfeng Mao (a), Jinwei Xu (a), Lanxin Yang (a), Yu Zhang (a), Haifeng Shen (b), and He Zhang (a)
(a) Software Institute, Nanjing University, China
(b) Faculty of Science and Engineering, Southern Cross University, Australia
Contact: zhouxin@nju.edu.cn (X. Zhou); ORCID: 0000-0002-3263-1275 (X. Zhou)

Keywords: Vulnerability Detection, Large Language Model, Prompt Engineering, Framework

ABSTRACT. Software vulnerability detection is generally supported by automated static analysis tools, which have recently been reinforced by deep learning (DL) models. However, despite the superior performance of DL-based approaches over rule-based ones in research, applying DL approaches to software vulnerability detection in practice remains a challenge due to the complex structure of source code, the black-box nature of DL, and the domain knowledge required to understand and validate the black-box results for addressing tasks after detection. Conventional DL models are trained by specific projects and, hence, excel in identifying vulnerabilities in these projects but not in others. These models with poor performance in vulnerability detection would impact the downstream tasks such as location and repair. More importantly, these models do not provide explanations for developers to comprehend detection results. In contrast, Large Language Models (LLMs) have made lots of progress in addressing these issues by leveraging prompting techniques. Unfortunately, their performance in identifying vulnerabilities is unsatisfactory. This paper contributes DLAP, a Deep Learning Augmented LLM Prompting framework that combines the best of both DL models and LLMs to achieve exceptional vulnerability detection performance. Experimental evaluation results confirm that DLAP outperforms state-of-the-art prompting frameworks, including role-based prompts, auxiliary information prompts, chain-of-thought prompts, and in-context learning prompts, as well as fine-tuning, on multiple metrics.

1. Introduction

Software vulnerability detection is paramount for safeguarding system security and individual privacy. As the cyber environment grows increasingly complex and attack techniques evolve quickly, these various threats to software systems have long puzzled software organizations [26, 36]. In particular, vulnerability is one of the critical threats that may result in information leakage, data tampering, and system breaks [37]. Vulnerability detection aims to identify vulnerabilities, mitigate their impact, and prevent malicious attacks [14]. Moreover, vulnerability detection helps to enhance software quality, usability, and trustworthiness. Nowadays, vulnerability detection has become a must-have in modern software development.

Many automated static analysis tools (ASATs) have been applied for vulnerability detection [24, 26]. However, on the one hand, the outputs of ASATs are difficult to validate as they require developers to master more expertise and experience in vulnerability detection [31]; on the other hand, the performance of ASATs is poor (e.g., high false positive rates) because they are based on string pattern matching [9, 19]. In recent years, the advancement of deep learning (DL) in natural language processing has inspired researchers to integrate DL models (in this paper, 'DL model' refers to conventional deep learning models other than large language models such as GPT, Copilot, and Llama) into ASATs. These modern ASATs generally outperform their conventional counterparts in vulnerability detection [24, 36]. However, DL models that perform well on experimental datasets may suffer from severe performance degradation in real-world projects. This is mainly because of the complexity of source code structure and the concealment of vulnerability characteristics [4]. Using ASATs with DL models to detect vulnerabilities makes an impact on a collection of downstream tasks, including but not limited to vulnerability validation, localization, and repair. Moreover, it is challenging for developers who are responsible for checking vulnerabilities indicated by DL models [39].

In recent years, Large Language Models (LLMs) such as ChatGPT [3] and Copilot [5] have shown prominent performance in various tasks [46, 22, 18]. However, LLMs have not achieved satisfactory results in vulnerability detection [34]. Dai et al. [11] indicated that one of the main reasons is the inappropriate use of LLMs. LLMs are pre-trained by a vast amount of data, but not all of it has positive effects on the downstream tasks such as vulnerability detection [16, 29]. There are two techniques to address this problem: fine-tuning and prompt engineering. Fine-tuning is a commonly used technique but requires significant computational resources and time. LLMs with prompts allow users to interact with LLMs iteratively to produce bespoke results [42, 11]. However, as detection performance is highly susceptible to prompts, a generic prompting framework would not be able to achieve satisfactory performance [1]. Prompt engineering makes LLMs adapt to a specific downstream task and generate customized outputs [42, 11]. Moreover, prompt engineering can jointly work with fine-tuning, making it a cost-effective and promising technique for vulnerability detection [35].

Previous work has utilized LLMs for vulnerability detection using various prompting frameworks [45, 11, 34, 30]. However, existing prompts input limited information to LLMs, making them provide little help in improving the performance of LLMs in real-world projects. To address this problem, we propose a bespoke prompting framework, DLAP (data and materials: https://github.com/Yang-Yanjing/DLAP.git). Although DL models cannot achieve satisfactory performance across multiple projects, they have superior performance within a single project. The core idea of DLAP is using pre-trained DL models for the target project to stimulate adaptive implicit fine-tuning of LLMs. We select the most suitable DL model among three categories as a plugin to augment DLAP. This process is implemented through two state-of-the-art prompts: the In-Context Learning (ICL) prompt and the Chain-of-Thought (COT) prompt. On the one hand, the ICL prompt utilizes locality-sensitive hashing (LSH) to sample candidate code fragments that are similar to the input code. Then, a pre-trained DL model is employed to obtain the prediction probabilities of the candidate fragments. The combination of the candidate code fragments and their corresponding probabilities forms the ICL prompt for the input code. On the other hand, the COT prompt synthesizes the results from static scanning tools and pre-trained DL models as queries. Following these queries, DLAP locates the corresponding COT templates within the detection step template library we constructed based on the Common Weakness Enumeration (CWE, https://cwe.mitre.org). Then DLAP uses these COT templates to generate the customized completed detection COT prompts for input codes. This stimulates LLMs to conduct implicit fine-tuning to achieve better performance in vulnerability detection and provides supplementary information to facilitate the inspection and comprehension of detection results.

We conduct experiments using four large-scale projects with more than 40,000 examples to evaluate DLAP. We first conduct experiments to select the most suitable DL model to form DLAP. We assess performance by integrating various DL models into DLAP and comparing their results to determine the optimal deep learning model. The results show that combining Linevul with an LLM outperformed other DL models by 15% across all evaluation metrics. Then we select Linevul to generate prompts within DLAP and compare DLAP against the state-of-the-art prompting frameworks, including role-based prompts, auxiliary information prompts, chain-of-thought prompts, and in-context learning prompts. The results show that DLAP surpasses baselines across all metrics for each project, achieving a 10% higher F1 score and a 20% higher Matthews Correlation Coefficient (MCC). This indicates that DLAP is more effective for vulnerability detection. Finally, we compare our approach with the most prevalent fine-tuning techniques to explore the effectiveness of DLAP versus fine-tuning. The results reveal that DLAP can achieve 90% of the performance of the extensive fine-tuning process at a lower cost and even outperform fine-tuning on some metrics. Moreover, the DLAP-driven LLM generates more explanatory text than fine-tuning, which is important to aid developers in using ASATs for vulnerability detection tasks.

The main contributions of this paper are as follows.
• We propose DLAP, a bespoke LLM prompting framework for vulnerability detection. DLAP combines the advantages of DL models and LLMs while overcoming their respective shortcomings. Additionally, DLAP has the potential to be adapted for other ASAT tasks.
• We conduct rigorous experiments to demonstrate the effectiveness of selecting appropriate DL models for DLAP and showcase its exceptional vulnerability detection performance over state-of-the-art prompting frameworks.
• We empirically demonstrate the advantages of prompting over fine-tuning for vulnerability detection in terms of detection accuracy, cost-effectiveness, and explanations.

The rest of the paper is organized as follows. Section 2 reviews the background and related work. Section 3 delineates the design of DLAP. Section 4 presents the experimental design and parameter settings of DLAP, followed by results and analysis in Section 5. Section 6 discusses DLAP's generalization capability and DL model selection. Finally, we present threats to validity in Section 7 and conclude this paper in Section 8.

2. Background and Related Work

This section describes the related work on vulnerability detection and the background of prompt engineering for LLMs.

2.1. Vulnerability Detection

While there is a plethora of work on this topic, we focus on vulnerability detection enhanced by DL and LLMs.

2.1.1. Deep Learning for Vulnerability Detection

Vulnerability detection has received a lot of attention in recent years. Lin et al. [27] proposed a framework that incorporates one project and 12 DL models for slice-level vulnerability detection. Zhou et al. [49] proposed Devign, which uses graph representations of the input and performed better than approaches using tokens of code directly. Li et al. developed a series of DL-based approaches including VulDeePecker, μVulDeePecker, and SySeVR [24, 25, 50] to complete the construction of a DL framework applied to vulnerability detection. Despite achieving advanced results in experimental setups, there still exist generalization issues in practical applications. With the advent of networks based on the transformer architecture and language models, researchers have started applying these advanced NLP techniques to vulnerability detection. Fu and Tantithamthavorn [13] applied RoBERTa as a pre-training model, fine-tuned on subsequent vulnerability detection tasks, achieving the best experimental performance in both function-level and line-level vulnerability prediction tasks.

Chakraborty et al. [4] found that the performance of several DL-based approaches dropped by an average of 73% on datasets built from multiple real-world projects, highlighting the need for further research into cross-project vulnerability detection. Steenhoek et al. [36] conducted an empirical study to demonstrate the variability between runs of a model and the low agreement among DL models' outputs, and studied interpretation models to guide future studies.

2.1.2. Large Language Models for Vulnerability Detection

The outstanding performance of LLMs in dialogue, code generation, and machine translation has sparked the interest of researchers and practitioners in applying LLMs to software security. Katsadouros et al. [20] highlighted the potential of LLMs to predict software vulnerabilities, emphasizing their advantage over traditional static methods. Thapa et al. [38] discovered that transformer-based LLMs outperformed conventional DL-based models. Zhang et al. [45] enhanced the effectiveness of ChatGPT in software vulnerability detection through innovative prompt designs and leveraging the model's ability to memorize multi-round dialogue. However, according to Cheshkov et al. [7], ChatGPT and GPT-3 do not outperform the current tools in Java code vulnerability detection. In the meantime, Liu et al. [29] emphasized that ChatGPT cannot replace professional security engineers in vulnerability analysis, indicating that closed-source LLMs are not the end of the story. These findings suggest that the performance of LLMs in the realm of vulnerability detection leaves much to be desired. The potential false positives and illusions generated by LLMs in specific applications [47] are attributable to the extensive unconstrained training data and the multitude of training parameters. Consequently, it is essential to fine-tune an LLM before deploying it for specific tasks. Lu et al. [30] proposed a method called GRACE that processes code structure using CodeT5, combines semantic with syntactic features to conduct similarity searches, and utilizes in-context learning prompts to drive the LLM beyond all baseline DL models on complex real-world datasets.

As indicated above, the performance of LLMs remains unsatisfactory, accompanied by a high false positive rate. In this study, GRACE [30] along with other evaluated prompts [45] will serve as the baselines, facilitating the comparison and evaluation. To achieve better performance in vulnerability detection, we select Sysevr [24], Devign [49], and Linevul [13] as the augmented components for our framework.

2.2. Prompt Engineering for Large Language Models

The increase in the number of parameters in LLMs leads to a rise in the cost of fine-tuning LLMs. Low-cost approaches such as Low-Rank Adaptation of Large Language Models (LoRA) [17] and P-tuning [28] have significantly reduced the cost of fine-tuning. However, the cost for some applications is still considerable. For example, fine-tuning an LLM with 33B parameters requires two high-precision 40G GPUs [17]. The parameter counts of the LLMs that have been reported to achieve excellent performance all exceed one hundred billion, which may cost a lot of computing resources. Research [3, 8, 2] reveals that LLMs are transformer-based models. Different inputs can cause changes in the attention layers within their architecture, and as such, the construction of high-quality prompts can assist the LLMs in providing satisfactory answers for the specific target tasks. In contrast, inappropriate prompts will impinge on their attention, which may mislead LLMs to produce hallucinations [47]. COT [42] and ICL [11] prompting are currently the most effective approaches to prompt engineering. COT prompting is an approach of decomposing a target problem into steps to prompt LLMs to provide the answers. ICL prompts LLMs to deliver correct answers by referring to similar questions. In this paper, DLAP integrates both approaches to provide appropriate prompts to drive LLMs for vulnerability detection.

3. The Proposed Framework: DLAP

This section describes the design of DLAP, which consists of two prompt techniques.

3.1. Motivation

Vulnerability detection can be formulated as a binary classification problem. Given a vulnerability dataset represented as $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$, $y_i \in \{0, 1\}$, where $x_i$ is the source code of a function and $y_i$ is the ground truth (0 – NO, 1 – YES), detection models are expected to establish their mappings automatically. One of the main processes of detection models is input (source code) representation. Specifically, the source code can be represented as semantic tokens, abstract parsing trees (ASTs), data/control flow graphs (DFGs/CFGs), or other formats. Conventional DL models use only a single format as input, which may miss useful information. LLMs have the capability of combining multiple representations, making them a more promising technique for vulnerability detection. When leveraging LLMs for vulnerability detection, it is fundamental to make LLMs understand domain and task knowledge, as LLMs are trained on general corpora. It can be expected that using LLMs for vulnerability detection directly would achieve unsatisfactory results. Therefore, LLMs use fine-tuning or prompt engineering to address this task.

Fine-tuning is an intuitive technique to make the parameters $\mathcal{W}$ of LLMs adapt to downstream tasks (i.e., vulnerability detection) on $E_o$ to achieve better results, which can be described as Equation (1):

$$\mathcal{W} = \arg\min_{\mathcal{W}} \mathcal{L}\big(Y - \mathcal{M}_{\mathcal{W}}(X)\big) \qquad (1)$$

where $Y$ is the ground truth (label) for function-level code and $\mathcal{L}$ is the loss function. By fine-tuning the weight parameters of LLMs, the predictive probabilities are expected to be closer to the ground truth. However, fine-tuning is cost-intensive [17, 44]. For example, LoRA, which is one of the most efficient fine-tuning approaches, requires approximately 80G of graphics card memory and a lot of time to fine-tune an LLM with only 13B parameters.

Prompt engineering is a new technique to augment LLMs. LLMs can incorporate various inputs and generate their answers; therefore, they can be prompted [3, 8, 2]. Technically, we use $T_s$ and $E_o$ to describe pretraining sets and testing sets, respectively. When using prompt engineering $\mathcal{P}(\cdot)$ for vulnerability detection, LLMs accept a set of $\mathcal{P}(X)$ as inputs and output the estimated probabilities of them, where $X$ means a collection of examples from $E_o$. One cost-effective prompting template $\mathcal{P}$ for vulnerability detection can be described as Equation (2):

$$\mathcal{P} = \arg\min_{\mathcal{P}} \mathcal{L}\big(Y - \mathcal{M}(\mathcal{P}(X))\big) \qquad (2)$$

According to Liu et al. [28], Equation (2) can achieve the same effects as Equation (1). That is, both prompt engineering and fine-tuning can make predictive probabilities close to ground truths. In the following subsections, we elaborate on DLAP, which leverages prompt engineering for vulnerability detection.

3.2. Framework Overview

[Figure 1: An overview of the proposed DLAP]

DLAP leverages attention mechanisms within LLMs, incorporating selectively trained DL models as enhancements. This approach, known as In-Context Learning (ICL), acts to subtly refine LLMs, making them more adept at specific projects. Moreover, DLAP's use of chain-of-thought (COT) prompting enables LLMs to discard incorrect generative paths effectively.
Consequently, DLAP enhances LLMs' capabilities in detection tasks, ensuring robust performance without incurring significant costs. ICL can stimulate the attention layer of an LLM to adapt to the downstream detection task, which is defined by [11] as implicit fine-tuning. As with general fine-tuning, implicit fine-tuning can also drive LLMs to adapt to downstream tasks and achieve better performance. Well-designed prompts stimulate LLMs to perform better in downstream detection tasks. The idea behind the proposed DLAP framework is that it uses DL models to augment LLMs by constructing appropriate prompts to stimulate implicit fine-tuning of the LLMs for them to adapt to vulnerability detection tasks. In this way, it can reduce performance degradation caused by hallucination and data distribution differences.

As shown in Figure 1, DLAP is composed of two main parts: (1) Part I, the construction of in-context learning prompts augmented by DL models, and (2) Part II, the generation of the bespoke COT prompts to augment LLMs. In Part I, we employ DL models to generate detection probabilities for input codes and select candidate codes based on similarity. The combination of candidate codes and their corresponding similarities forms the ICL prompt for detection. In Part II, we combine the results of DL models and static tools to query pre-defined templates in a preset COT template library as key-value pairs. Based on the characteristics of each input sample, we complete the chain of thought, generating COT prompts for detection. These two parts are introduced in Section 3.3 and Section 3.4, respectively. In Section 3.5, we show an example of synergizing the two prompts to generate the final DLAP prompts.

3.3. In-Context Learning Prompts Construction

According to the earlier point, LLMs encapsulate vast knowledge through their expansive weight structures. Firstly, we select the pre-trained DL model through training sets. For new projects, we can also build DL models from newly collected samples based on existing research [13, 24, 49]. Then, to create the appropriate in-context, the most similar code candidates are found in the training set using locality-sensitive hashing (LSH), an efficient similarity search algorithm used in the Retrieval Augmented Generation (RAG) technique. Although the LSH similarity calculation algorithm can only capture the textual similarity of code segments, a prompting framework cannot spend too much time generating prompt information, so it is necessary to sample multiple codes as a similar code candidate set efficiently.
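This candidate sampling step can be approximated with MinHash-based LSH over code token sets. The sketch below, using the datasketch library, is a minimal illustration of the retrieval idea; the tokenization, similarity threshold, and permutation count are assumptions, not the exact configuration used by DLAP.

```python
from datasketch import MinHash, MinHashLSH

def minhash_of(code: str, num_perm: int = 128) -> MinHash:
    """Build a MinHash signature from the token set of a code fragment."""
    m = MinHash(num_perm=num_perm)
    for token in set(code.split()):
        m.update(token.encode("utf-8"))
    return m

# Index the training-set functions (toy examples).
train_set = {
    "f1": "int add(int a, int b) { return a + b; }",
    "f2": "char *copy(char *dst, char *src) { return strcpy(dst, src); }",
}
lsh = MinHashLSH(threshold=0.5, num_perm=128)
for key, code in train_set.items():
    lsh.insert(key, minhash_of(code))

# Query with an input function to obtain similar code candidates.
query = "char *dup(char *dst, char *src) { return strcpy(dst, src); }"
candidates = lsh.query(minhash_of(query))
print(candidates)  # e.g., ['f2']; candidates are then scored by the DL model
```

The returned candidates, together with the DL model's probabilities for them, form the question-answer pairs of the ICL prompt described next.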
Following Dai et al. [11], we reversely use the dual form of the attention of the transformer derived by them. Therefore, the adaptive implicit fine-tuning on the attention layer $\tilde{\mathcal{F}}$ of LLMs stimulated by DLAP for specific projects can be written as Equation (3). Please refer to Section 8 in the appendix for details.

$$\tilde{\mathcal{F}}(\mathbf{q}) = \big(W_{\text{init}} + \Delta W_{ICL}(x)\big)\,\mathbf{q} \qquad (3)$$

We train the DL model $\mathcal{M}$ with training data information $\mathcal{T}_{\text{info}}$, which contains the relationship between the project data and the labels. Then the DL model generates a detection probability $\text{Probs}$ for a detection object $x$, as shown in Equation (4):

$$\text{Probs}_{ICL}(Obj_{\text{info}}) = \mathcal{M}(\mathcal{T}_{\text{info}})(x) \qquad (4)$$

The probabilities output by the DL model represent characteristics of input codes. DLAP uses the probabilities to construct ICL prompts to augment LLMs. Then, we obtain the relaxed attention representation $\tilde{\mathcal{F}}$ of the LLM in Equation (5):

$$\tilde{\mathcal{F}}(\mathbf{q}) = \Big(W_{\text{init}} + \Delta W\big(\mathrm{func}(\text{Probs}(Obj_{\text{info}}))\big)\Big)\,\mathbf{q} \qquad (5)$$

Equation (5) indicates that the relaxed attention of the LLM is related to the probabilities output by the selected DL model, and the probabilities are related to the training project data. This results in an implicit fine-tuning for the LLM to adapt to specific project information [11]. We further explain this adaptive process in more detail in the result analysis section through a comparative analysis experiment.

Compared to conventional fine-tuning methods, DLAP does not require excessive resource consumption to update the parameters of LLMs. The ICL prompts update the output of the attention layer in the LLMs. As the example shown in Figure 2 illustrates, the ICL prompts of DLAP stimulate implicit fine-tuning of the LLMs toward adapting to the characteristics of the projects to be detected.

[Figure 2: An example of ICL prompts of DLAP]

By adding the results generated by the DL model to the prompts of the LLM, DLAP can include more in-context information. The DL model's trained network weights encode the complex relationship between the predicted probability and the input code text, so the DL model's output carries the characteristics of the training sets. ICL built in this way contains more prompt information than normal ICL, which stimulates LLMs to perform better in downstream tasks.

As shown in the upper part of Figure 1, after computing the detection probabilities of the candidate codes through the DL model, we treat the code candidate set and the corresponding probabilities as question and answer combinations. These combinations (an example is shown in Figure 2) are constructed into the in-context learning (ICL) prompt, which, together with Part II in Section 3.4, forms the final DLAP-augmented prompts.

3.4. Chain-of-Thought Prompts Generation

The second part of DLAP is to generate specific prompts for each tested sample. It is divided into the following stages. Firstly, because the characteristics and detection steps of vulnerabilities vary, we need to pre-set different detection templates in a COT library. According to existing peer-reviewed vulnerability taxonomies (i.e., [43, 23]) and reliable grey literature (i.e., [41]), we construct a hierarchical detection COT library that has six major categories, as follows.

• SFE (Security Features Errors): errors induced by imperfect security features
• LOG (Logistics Errors): errors induced by program execution
• MEM (Memory Errors): errors related to memory resources
• NUM (Numeric Errors): errors induced by numerical computations
• IDN (Improper Data Neutralization): errors induced by non-standardization (verification, restriction) of exchanged data
• UNT (Unknown Taxonomy Errors): unknown errors

Subsequently, according to the parent-child relationship described in the CWE research concept (https://cwe.mitre.org/data/definitions/1000.html), some categories above are refined (a total of 45 subcategories). In addition, by referring to the relevant research [29, 45, 32] on step-by-step solutions to vulnerability detection through LLMs driven by COT, we establish a general paradigm for the generation of the COT, as follows.

• Semantics: comprehending the function of the code.
• Logic: analyzing the structure of the code.
• Internal risks: identifying components that may introduce vulnerabilities.
• GeneratingtheCOT:Integratingtheinformationac- quired above and generating a COT to inquire about whethertherearepotentialvulnerabilitiesstepbystep. The specific COT refines the generation paradigms for correspondingCOTguidancefordifferentcategories. Each subcategoryisassociatedwithaspecificdetectionCOTtem- plate guidance. DLAP selects two open-sourcefunctional- levelvulnerabilitydetectionstatictools,i.e.,Flawfinder5and Cppcheck6,togeneratestaticscanningresults. Itparsesthe result text of the static tools and maps them to the corre- spondingcategoriesinthetaxonomytree. Thentheresults arescoredandrecordedforeachtool. Thehighest-scoringK categoriesaretakenoutandaddedtothequerykey. DLAP selects the same DL model in Section 3.3 because of their betterperformanceaccordingtothestudies[49,13,24]. DLAP combinesthedetectionresultsoftheDLmodelandthescan- ningresultsofthestatictooltobecomethekeyofaquery. Using this key, DLAP obtains customized COT generation guidancetemplatesfromtheCOTlibraryforthetestcodes. Thekeyisformedasadictionarythatcontainsthestatictool Figure 4: An example of the refined COT prompts of DLAP outputclassandtheresultoftheDLmodeljudgment,such asthefollowingFigure3 Algorithm1Bespokepromptsfordetectionsamples Require: ; ={(𝑥,𝑦)}𝑁 ;;. 𝑖 𝑖 𝑖=1 1: 𝑇 ← Sample() # the training set for constructing the 𝑜 DL-basedmodel 𝑌 ←Label(𝑇) 𝑜 𝐸 =−𝑇 𝑜 𝑜 2: =argmin(𝑇,𝑌) 𝑜 Figure 3: A query key example of DLAP 3: for𝑥 𝑖,𝑖=1to𝑁 in 𝑜𝐸 do 4: [𝐶𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒𝑠]=𝐿𝑆𝐻(𝑥) 𝑖 For instance, if the key is the null pointer dependency 5: [𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑖𝑒𝑠]=([𝐶𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒𝑠]) |
that falls under the IDN category, then DLAP will get the 6: ICLprompt(𝑥)={[𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑖𝑒𝑠],[𝐶𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒𝑠]} 𝑖 refinedCOTguidancefromthetaxonomytreeasshownin 7: 𝑅𝑒𝑠𝑢𝑙𝑡 (𝑥)←Ranking((𝐸)) 𝑖 𝑜 Figure4. ThelibraryofCOTguidancetemplatesispublicly 8: (𝑥)=𝑃𝑟𝑒𝑑𝑖𝑐𝑡𝑖𝑜𝑛 (𝑥)+𝑅𝑒𝑠𝑢𝑙𝑡 (𝑥) 𝑖 𝑖 𝑖 availableonGitHub7. 9: Utilizing(𝑥)asthekeytoretrieveCOTpromptsgen- 𝑖 Throughthekeygenerationprocessdescribedearlier,the erationguidance. 𝐺(𝑥)=((𝑥)) 𝑖 𝑖 COTguidanceandtheresultsoftheDLmodelarecombined 10: UsingGPTtocomplete𝐺(𝑥):𝐶𝑂𝑇(𝑥)=𝐺𝑃𝑇(𝐺(𝑥)) 𝑖 𝑖 𝑖 togeneratefinalCOTpromptsforthetarget-specificdetec- tionsamples. 11: DLAPprompts(𝑥)=ICLprompt(𝑥)+COTprompt(𝑥) 𝑖 𝑖 𝑖 12: endfor 3.5. PromptsSynergy Ensure: SpecificCOTpromptsofalldetectionsamples. Figure5showsanexampleofouralgorithmwherethefi- nalpromptsaremadeinsuchaformat. TheprocessofDLAP generatingpromptsisdescribedinAlgorithm1,whichtakes eachinputcodeinthetestset𝐸,theLSHalgorithmisused the selected DL models, the training/history data of target tofindthemostsimilarcode[𝐶𝑜 𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒𝑠]inthisset. Then, detectionproject = {(𝑥,𝑦)}𝑁 ,theselectedstatictools the DL model is utilized to get detection probabilities 𝑖 𝑖 𝑖=1 ,andthepresetCOTlibrary astheinput. [𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑖𝑒𝑠]. Using [𝐶𝑎𝑛𝑑𝑖𝑑𝑎𝑡𝑒𝑠] and [𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑖𝑒𝑠], First, the target detection project 𝑋 is sampled to con- DLAPconstructsquestion-answeringcombinationstoform struct training sets 𝑇. 𝑇 (if open-source information is theDL-augmentedICLprompts. 𝑜 𝑜 directlycollected)islabeledas𝑌. Theremainingpartofde- DLAP carries out a bespoke process described in Fig- tectingproject𝑋istestsets𝐸. 𝑇 andlabel𝑌 areutilized ure5togetthespecificCOTprompt. Staticstoolsareuti- 𝑜 𝑜 to minimize the loss function for training . Next, for lized to get analysis results (𝐸) = { ∶ scores, ∶ 𝑜 1 2 5 https://dwheeler.com/flawfinder scores, 3 ∶scores,...}and𝑅𝑒𝑠𝑢𝑙𝑡 (𝑥 𝑖)isobtainedbyrank- 6 ing(𝐸). ThenDLAPgeneratestheDLmodelprediction http://cppcheck.net 𝑜 7 https://github.com/Yang-Yanjing/DLAP.git(COTTree) result. After that, the results 𝑃𝑟𝑒𝑑𝑖𝑐𝑡𝑖𝑜𝑛𝑠 are combined Yang et al.: PreprintsubmittedtoElsevier Page 6 of 15A Deep Learning Augmented Large Language Model Prompting Framework Figure 5: An example of the refined COT prompts of DLAP toformaquery,whichisutilizedasthekeytoretrieveCOT Motivation&Setup: Previousresearchhasshownthatfine- prompts generation guidance 𝐺(𝑥) from the COT library. tuning is helpful for augmenting LLMs. In this paper, we 𝑖 GPT is used to complete 𝐺(𝑥) for generating 𝑐𝑜𝑡(𝑥) for use prompt engineering rather than fine-tuning to develop 𝑖 𝑖 eachdetectionsample. Finally, thefinalpromptsofDLAP DLAPbecauseofitslowercost. InRQ3,wecompareDLAP arecomposedoftheCOTpromptsandICLprompts. Each againstafine-tuningLLM(Llama-13B)[40]toseeifithas specificpromptisutilizedtodriveLLMsfordetection. The thesameperformanceasfine-tuning. Specifically,weselect final step is to use the generated COT prompts for LLMs LoRA[17],astate-of-the-artLLMfine-tuningtechniquefor toproduceunderstandablevulnerabilitydetectionresultsby comparison. followingaparticularanswerformat. 4.2. Datasets According to Croft et al. [10], the common vulnerabil- 4. ExperimentalDesign itydetectiondatasetshavelabelingbias. Todevelopanap- Thissectiondetailsresearchquestions,datasets,DLmod- propriate experimental dataset, we customize three criteria els,baselineLLMprompts,andevaluationmetrics. forselectingprojects: (1)ithasbeenresearchedbyrelated work[4,49,12](toensureexternalvalidity); (2)ithasac- 4.1. 
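As a concrete, simplified illustration of Algorithm 1, the sketch below assembles a DLAP-style prompt from pre-computed pieces; the helper names (lsh_candidates, dl_model, static_scan, cot_library) are hypothetical stand-ins for the framework's components rather than its actual API.

```python
def build_dlap_prompt(code, lsh_candidates, dl_model, static_scan, cot_library):
    """Assemble the ICL part and the COT part of a DLAP prompt for one sample."""
    # Part I: ICL prompt from similar codes and their detection probabilities.
    candidates = lsh_candidates(code)                  # list[str]
    probabilities = [dl_model(c) for c in candidates]  # list[float]
    icl = "\n".join(
        f"Q: Is this code vulnerable?\n{c}\nA: vulnerability probability {p:.2f}"
        for c, p in zip(candidates, probabilities)
    )

    # Part II: COT prompt retrieved via the (static scan class, DL judgment) key.
    key = (static_scan(code), dl_model(code) >= 0.5)   # e.g., ("MEM", True)
    cot_guidance = cot_library[key]                    # detection-step template

    return f"{icl}\n\n{cot_guidance}\n\nTarget code:\n{code}\nIs it vulnerable?"

# Toy usage with stub components standing in for the trained model and tools:
prompt = build_dlap_prompt(
    "char buf[8]; strcpy(buf, user_input);",
    lsh_candidates=lambda c: ["strcpy(dst, src);"],
    dl_model=lambda c: 0.91,
    static_scan=lambda c: "MEM",
    cot_library={("MEM", True): "Step 1: describe the code. Step 2: check buffer bounds."},
)
print(prompt)
```

The point of the sketch is the synergy: the ICL half conditions the LLM on project-specific examples, while the COT half supplies the category-specific detection steps.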
ResearchQuestions cumulatedmorethan3,000functions(toexcludeno-active TheexperimentalevaluationofDLAPisstructuredwith projects); and(3)itistraceable(toexcludeprojectswhose threeresearchquestions(RQs). vulnerabilityinformationisincorrectorevenunknown). As RQ1: WhichcategoryofDLmodelsisthemosteffective aresult,ourexperimentaldatasetconsistsoffouropen-source toDLAP? projects,includingChrome,Linux,Android,andQemu. These Motivation&Setup: AnimportantdriverofDLAPistheDL projects we selected are of good open-source quality and model,whichaddstheinformationstoredbytheDLmodel havehigh-qualityvulnerabilityfixrecordsfortraceability. trainingtothepromptingprocessofLLMsthroughtheICL Thebasicinformationoftheselectedprojectsisshown approach. As little is known about which category of DL inTable1,fromwhichwecanobservethenumberoftrues models is suitable for augmenting LLMs in the context of (vulnerabilities)andfalsesineachprojectisimbalanced. To vulnerability detection, we propose RQ1 to compare three mitigate the impact of data imbalance on training the DL representativeDLmodels: Sysevr,Devign,andLinevul(cf. augmentmodel, wefirstperformedrandomundersampling Section4.3forrationales). onthenon-vulnerablesamplesofthefourprojects. Thenwe RQ2: HoweffectiveisDLAPcomparedtoexistingprompt- dividedthedatasetintotrainingandtestingsetswiththe8:2 ingframeworks? proportion. The training set was used to build DL models, Motivation&Setup: Previous research has shown that the whilethetestingsetwasusedtoevaluatetheperformanceof performanceofLLMsissusceptibletopromptsandinappro- DLAP. priate prompts lead to unsatisfactory performance. In this |
4.3. DLAP Refinement

To address RQ1, we select three DL models for vulnerability detection to refine DLAP. Each of the three represents one type of DL model. Their rationales and hyperparameter settings are as follows.

• Sysevr [24] represents the category that uses code features including syntactic, semantic, and vector representations. It filters the code into slice input by static analysis of semantics and syntax.
• Devign [49] represents the category that introduces more structured graph representations and graph neural networks into the vulnerability detection model.
• Linevul [13] represents the category that utilizes a pre-trained deep learning model. This novel detection model is based on the Transformer architecture.

We selected these three DL models to evaluate a range of similar models for the categories they each represent. As part of our model selection process, we referenced the parameters reported in the respective research papers of these DL models that achieved the best performance. These parameters were selected in Table 2 as the pre-set hyperparameters in our framework. By doing so, we aim to replicate the optimal performance achieved by these models and ensure consistency in our evaluation and comparison.

Table 2
The settings of the DL models

Sysevr: Java version: Java 8; static tools: Joern 0.3.1; graph database: Neo4j; data preprocessing: slice; embedding algorithm: Word2vec (sampling algorithm: CBOW, sampling window: 5, min_count: 5); network architecture: BiLSTM (epoch: 100, batch_size: 32, optimizer: SGD, loss function: binary cross-entropy).

Devign: Java version: Java 8; static tools: Joern 2.0.157; data preprocessing: graph; embedding algorithm: Word2vec (vector_size: 100, epoch: 10, min_count: 1); network architecture: CNN (epoch: 200, batch_size: 128, input_channels: 115, hidden_channels: 200, num_of_layers: 6, optimizer: Adam, loss function: binary cross-entropy).

Linevul: data preprocessing: slice; embedding algorithm: BPE + Transformer; pretrained model: CodeBERT (batch_size: 256, num_attention_head: 12, optimizer: Adam, loss function: binary cross-entropy).

4.4. Baselines

We compare DLAP against four prompting frameworks [45, 34, 30, 44] that leverage LLMs to detect vulnerabilities.

∙ PRol (Role-based prompts): According to White et al. [44], providing GPT with a clear role would greatly alleviate its illusion problem. Our first baseline, proposed by Zhang et al. [45], makes GPT act as a vulnerability detection system.

Role-based prompts: "I want you to act as a Vulnerability Detection System. My first request is 'Is the following program buggy?' Please answer Yes or No. [CODE]"

∙ PAux (Auxiliary information prompts): Based on the view of Zhang et al. [45], providing the LLMs with more semantic information about the code improves their performance in vulnerability detection. Therefore, in baseline 2, we provide data flow as auxiliary information in the prompts.

Auxiliary information prompts: "I want you to act as a vulnerability detection system. I will provide you with the original program and the data flow information, and you will act upon them. Is the following program buggy? [CODE], [DF description]."

∙ PCot (Chain-of-thought prompts): According to Wei et al. [42], due to the potential capabilities of LLMs for multi-turn dialogue, constructing a COT better assists LLMs in reasoning [42]. Therefore, in baseline 3, we constructed a two-step thinking chain to drive the LLM in the process of vulnerability detection. Step 1: make the LLM correctly determine the purpose of the code; we therefore designed first-step prompts for detecting the intent of the code. Step 2: based on the first step, we continue to prompt the LLM to detect vulnerabilities in the input.

Chain-of-thought prompts: "Step 1: Please describe the intent of the given code. [CODE]. Step 2: I want you to act as a vulnerability detection system. Is the above program buggy? Please answer Yes or No."

∙ GRACE: GRACE is a vulnerability detection prompting framework that enhances the capabilities of an LLM for software vulnerability detection. It achieves this by incorporating graph structural information from the code. GRACE employs CodeT5 and ICL techniques to use graph information.
4.5. Evaluation Metrics

As vulnerability detection is formulated as a binary classification problem in this paper, we use precision ($P_{vul}$), recall ($R_{vul}$), and F1-score ($F_1$) to measure the performance of each framework. Considering that vulnerability is a minority class but is of great severity, we also use FPR as a metric. FPR pays attention to false positives, since mistakes on them would cause more serious outcomes than mistakes on false negatives. In this paper, the minority class (positive) is vulnerability, and it occupies a very small portion. The definition of FPR is shown in Equation (6). Moreover, the Matthews correlation coefficient (MCC) is also used as an evaluation metric. MCC, a.k.a. the phi coefficient, measures the performance of binary classifiers on imbalanced datasets and is a more comprehensive metric than FPR. The definition of MCC is shown in Equation (7).

\[ \mathrm{FPR} = \frac{FP}{FP + TN} \tag{6} \]

\[ \mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}} \tag{7} \]

where TP represents correctly detected vulnerabilities, TN represents correctly detected non-vulnerabilities, FP represents incorrectly detected vulnerabilities, and FN represents incorrectly detected non-vulnerabilities.

The Coefficient of Variation (CV) is a statistical measure used to determine the dispersion of data points in a dataset relative to its mean. It is particularly valuable when comparing the variability of datasets with different means. The CV is calculated using Equation (8):

\[ \mathrm{CV} = \frac{\sigma}{\mu} \tag{8} \]

where $\sigma$ represents the standard deviation and $\mu$ denotes the mean of the dataset. A higher CV indicates a greater level of dispersion within the data distribution, reflecting more variability relative to the mean.

$P_{vul}$, $R_{vul}$, $F_1$, and FPR range from 0 to 1, and MCC ranges from -1 to +1; in all cases, higher values indicate better performance of a classifier. We use percentage values (%) to highlight the differences between results.
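Equations (6) and (7) translate directly into code. The minimal sketch below assumes raw confusion-matrix counts; returning 0 when the MCC denominator vanishes is our convention, not one the paper states.

```python
import math


def fpr(fp: int, tn: int) -> float:
    """False positive rate, Equation (6)."""
    return fp / (fp + tn)


def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient, Equation (7)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Zero-denominator convention (empty marginal) is our own choice.
    return (tp * tn - fp * fn) / denom if denom else 0.0
```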
5. Results and Analysis

This section analyzes the experimental results to address the research questions.

5.1. RQ1: Selection of DL Models

We conducted experiments on four large-scale projects to investigate which category of DL model is suitable for DLAP. The results are provided in Table 3 and reveal that using Linevul outperforms using the others on most datasets and metrics. For instance, on the Chrome dataset, DLAP with Linevul achieves the highest MCC of 37.6%, surpassing Devign's 26.1% and Sysevr's 14.6%. This finding is consistent on the Linux dataset, where it secures an MCC of 56.4%, compared to 44.4% and 8.8% for Devign and Sysevr, respectively. Furthermore, the precision and F1 scores of Linevul are notably higher across the datasets, underscoring its robustness in identifying vulnerabilities with greater accuracy and fewer false positives, as evidenced by its lower FPR. Overall, using Linevul surpasses using Devign by an average of 7.2% and 10.5% on the comprehensive evaluation metrics F1 and MCC, respectively. It also outperforms integrating Sysevr by an average of 28.4% and 34.0% on the same metrics. This demonstrates that Linevul has superior adaptability and generalizability when integrated into LLMs compared to the other DL models. These results indicate the effectiveness of integrating Linevul into DLAP to detect vulnerabilities, especially its superior $F_1$, which implies a higher likelihood of detecting actual vulnerabilities. Its MCC, a critical indicator of the quality of binary classifications, shows the ability of DLAP with Linevul to handle extremely imbalanced datasets.

Table 3
Results of DL model comparison (per model: $P_{vul}$, $R_{vul}$, $F_1$, FPR, MCC)

Project | Linevul                  | Devign                   | Sysevr
Chrome  | 40.4 73.3 52.1 28.4 37.6 | 29.3 85.5 43.7 54.0 26.1 | 27.7 56.8 37.2 39.0 14.6
Android | 34.6 86.2 49.3 41.4 36.1 | 31.7 85.5 46.2 46.7 31.3 | 29.4 80.3 43.1 48.7 25.5
Linux   | 57.1 76.4 65.4 13.9 56.4 | 48.8 66.3 56.3 16.9 44.4 | 27.7 22.6 24.9 14.4 08.8
Qemu    | 84.2 55.1 66.7 01.9 63.9 | 52.8 65.5 58.5 10.7 50.3 | 28.6 10.0 14.8 04.4 09.0

To further distinguish which DL model is more suitable as a plug-in for DLAP, we also analyze the intermediate output (detection probability) of the DL models. Table 4 presents the variability in the performance of the different DL models across several software projects: the Linevul model displays the highest CV. By comparing the probability density distribution plots in Figure 6 and the CV values in Table 4 on the largest project dataset, Google, we notice that Linevul displays a more discrete data distribution than the other models. This discrete detection distribution property facilitates LLM generation with implicit fine-tuning for downstream detection tasks more effectively.

Figure 6: Distribution of probability density (Sysevr, Devign, and Linevul) on the Google project.

Table 4
CV of DL model comparison

DL model | Chrome | Android | Linux | Qemu | Average
Sysevr   | 0.1    | 1.2     | 0.4   | 0.02 | 0.43
Devign   | 0.5    | 1.2     | 2.0   | 2.0  | 1.4
Linevul  | 2.4    | 2.6     | 2.5   | 3.3  | 2.7
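The selection rule implied by Table 4, keeping the plug-in whose detection probabilities are most dispersed, is Equation (8) applied to each model's intermediate output. A minimal sketch, assuming a mapping from model names to per-function probability arrays:

```python
import numpy as np


def coefficient_of_variation(probs) -> float:
    """Equation (8): standard deviation over mean."""
    probs = np.asarray(probs, dtype=float)
    return float(probs.std() / probs.mean())


def select_plugin(probs_by_model: dict) -> str:
    """Return the DL model with the most discrete detection output."""
    return max(probs_by_model,
               key=lambda name: coefficient_of_variation(probs_by_model[name]))
```

Applied per project, this computation underlies the CV values reported in Table 4.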
Summary for RQ1: Linevul achieves the best performance among the three types of DL models. Moreover, Linevul produces the most discrete detection probabilities, indicating that its predictions have the highest confidence so that it can stimulate LLMs best. Therefore, we select Linevul as the driver of DLAP to conduct the follow-up experiments.

5.2. RQ2: Comparison with Other Prompting Frameworks

Due to cost constraints associated with OpenAI API calls, we employed the GPT-3.5-turbo-0125 model for vulnerability detection. Table 5 presents the performance comparison between the GPT model using the baseline prompting frameworks and the DLAP approach. The performance of each framework is evaluated on five metrics: precision ($P_{vul}$), recall ($R_{vul}$), F1 score ($F_1$), false positive rate (FPR), and Matthews correlation coefficient (MCC). DLAP consistently outperforms the other frameworks across nearly all metrics and datasets. Specifically, DLAP achieves the highest precision, recall, F1, and MCC values, showcasing its superior ability to accurately identify vulnerabilities with minimal false positives. For instance, on the Chrome dataset, DLAP's precision of 40.4% and recall of 73.3% significantly surpass those of the next best framework, GRACE. Furthermore, DLAP's F1 score reaches 52.1% on Chrome, 49.3% on Android, 65.4% on Linux, and an impressive 66.7% on Qemu, all higher than the baseline frameworks. In terms of FPR, DLAP demonstrates a moderate rate across the datasets: although its FPR is not the lowest on the Chrome, Android, and Linux datasets when compared to the baselines PRol and PCot, DLAP is far superior to them on $F_1$ and MCC. Therefore, DLAP's overall effect exceeds the baseline frameworks.

In particular, DLAP's MCC values, which indicate the quality of binary classifications, significantly exceed those of the other methods, such as 37.6% on Chrome and 63.9% on Qemu, further establishing its superior performance in the task of vulnerability detection using LLMs. Our framework consistently surpasses the top baseline in terms of the MCC indicator, which, given the nature of the MCC correlation coefficient, suggests that our predictions more accurately reflect the actual distribution and indicates that DLAP is superior to the baselines in generalization performance on large datasets.

Overall, the analysis reveals that DLAP not only excels in identifying vulnerabilities with high precision and recall but also maintains a low false positive rate and achieves outstanding overall performance, as evidenced by its F1 scores and MCC values. This demonstrates DLAP's exceptional effectiveness in harnessing the power of LLMs for the critical task of vulnerability detection, outperforming the capabilities of other prompting frameworks.

Table 5
Results of prompting framework comparison (per project: $P_{vul}$, $R_{vul}$, $F_1$, FPR, MCC)

Framework | Chrome                   | Android                  | Linux                    | Qemu
PRol      | 24.4 07.2 11.1 05.8 02.3 | 22.5 06.4 10.0 05.6 01.3 | 22.4 06.6 10.2 05.6 01.7 | 22.2 06.9 10.5 04.4 04.2
PAux      | 22.7 54.6 32.1 48.6 04.8 | 21.8 63.4 32.5 58.3 04.2 | 24.6 70.2 36.5 52.6 14.1 | 19.3 55.2 28.6 42.1 09.5
PCot      | 16.8 05.4 08.1 07.0 02.6 | 31.6 03.1 05.7 01.7 04.0 | 30.7 08.0 12.7 04.4 06.5 | 64.7 38.0 47.8 03.8 43.0
GRACE     | 32.6 37.5 32.6 80.2 11.2 | 25.0 82.6 38.4 74.0 08.5 | 25.0 76.0 37.6 76.0 02.0 | 17.1 93.1 28.9 82.4 10.6
DLAP      | 40.4 73.3 52.1 28.4 37.6 | 34.6 86.2 49.3 41.4 36.1 | 57.1 76.4 65.4 13.9 56.4 | 84.2 55.1 66.7 01.9 63.9

Summary for RQ2: DLAP's overall performance is superior to other prompting frameworks, as evidenced by its exceptional MCC scores and higher values in $P_{vul}$, $R_{vul}$, and $F_1$.
5.3. RQ3: Prompting vs. Fine-tuning

Table 6 shows that fine-tuning an LLM on a large project yields a higher $F_1$ than DLAP. However, on a small project with imbalanced data, DLAP performs better. In particular, LLMs cannot be fine-tuned on Qemu because the project has only a small amount of data; in contrast, DLAP captures the distribution characteristics of small samples and hence can achieve better performance. In addition, fine-tuning an LLM requires stopping the model and retraining it before using it, whereas DLAP does not need to be taken offline for retraining during its use: it works as a plug-in that accesses an LLM in real time to augment the vulnerability detection capability of LLMs.

Table 6
Results of performance comparison with fine-tuning (per system: $P_{vul}$, $R_{vul}$, $F_1$, FPR, MCC)

Dataset | Fine-Tuning Vicuna-13B   | DLAP
Chrome  | 91.4 74.4 82.0 01.8 78.6 | 40.4 73.3 52.1 28.4 37.6
Android | 67.0 35.8 46.7 04.5 40.4 | 34.6 86.2 49.3 41.4 36.0
Linux   | 96.4 55.4 70.3 00.5 68.9 | 57.1 76.4 65.4 14.0 56.4
Qemu    | 99.9 06.7 12.1 00.1 23.4 | 84.2 55.2 66.7 01.9 63.9
Total   | 88.7 43.0 52.8 01.2 52.8 | 54.1 72.8 58.4 21.4 48.5

Besides, the comparison of computational cost between DLAP and LoRA fine-tuning is shown in Table 7. It is clear that fine-tuning a 13B LLM requires close to 40 GB of graphics memory and a considerable amount of time. In contrast, DLAP can select a small DL model and train it to fit the target data in less than one hour.

Table 7
Results of computational cost comparison

        | Fine-Tuning                  | DLAP
Dataset | M (MB) | T (h) | GPU (GB)    | M (MB) | T (h) | GPU (GB)
Chrome  | 5.1    | 11.1  | 31.2        | 3.6    | 0.8   | 6.3
Android | 4.9    | 4.2   | 30.3        | 4.3    | 0.5   | 5.5
Linux   | 4.9    | 5.5   | 30.3        | 3.8    | 0.4   | 5.5
Qemu    | 4.8    | 1.3   | 28.7        | 0.9    | 0.3   | 2.8
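For context, the fine-tuning side of Table 7 corresponds to a standard LoRA setup. A minimal sketch with the Hugging Face peft API follows; the checkpoint id, rank, and target modules are illustrative assumptions rather than the exact configuration used in our experiments.

```python
# Minimal LoRA fine-tuning setup of the kind compared in Table 7.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

model = AutoModelForSequenceClassification.from_pretrained(
    "lmsys/vicuna-13b-v1.5", num_labels=2)  # assumed checkpoint id
lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # typical for LLaMA-family models
    task_type="SEQ_CLS")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # adapters only, cf. the M (MB) column
```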
Equation (5) (cf. Section 3.3) indicates that DL model training information changes the relaxed attention of the LLM. This results in an implicit fine-tuning for the LLM to adapt to a specific detection task [11]. Whether through fine-tuning or In-Context Learning (ICL), the extent to which a model is fine-tuned reflects its ability to adapt to the target task, serving as a crucial factor in stimulating LLMs to perform well. According to the detection results shown in Table 6, DLAP approximates fine-tuning well on the performance evaluation metrics.

To further explain what mechanism induces the LLM to produce implicit fine-tuning and achieve good performance on the target task, we extract the attention layer from the fine-tuned local LLM to calculate the probability for each detection category. Subsequently, we gather the ICL outputs of the LLM with DLAP to calculate the probability for each detection category. The probability distribution over the different classes indicates the degree of fine-tuning of the model. The probability distributions of fine-tuning and DLAP are similar (Figure 7), and this same distribution explains that DLAP enables implicit fine-tuning at a reduced cost.

Figure 7: Predictive distributions resulting from DLAP and fine-tuning.

In comparison with fine-tuning, Figure 8 shows a real example of using DLAP to detect vulnerabilities in Linux. The outcomes, which are easily understandable to developers, closely match the records from the actual issue fix commit. In contrast, the output from a fine-tuned LLM is limited to simple 'yes' or 'no' responses. DLAP's results are thus more comprehensible to developers than those from fine-tuning alone.

Figure 8: An example of applying DLAP to repair Linux code issues (commit id: f6d8bd0).

Summary for RQ3: Although the overall performance of DLAP is slightly lower than that of fine-tuning, their predictive distributions are similar. This result shows that DLAP prompts LLMs to produce effective implicit fine-tuning with performance comparable to that of explicit fine-tuning but at a significantly lower cost.

6. Discussion

In this section, we discuss the DL model selection for DLAP and DLAP's potential generalization capability.

6.1. DL Model Selection for DLAP

Based on the insights gained from RQ1, DL models with discrete predictive probability density distributions for the data are more suitable as an integrated plug-in for DLAP. Additionally, we have observed the effectiveness of a DL model as an LLM prompt model: its utility significantly improves when it exhibits discrete output with the highest value of CV. Moreover, our experiments have highlighted the exceptional performance of Transformer-based models in driving the LLM. This advantage could be attributed to the architectural resemblance between Transformer models and the design architecture of the LLM. The similarity in their structures allows for seamless integration, enabling the attention-layer parameters derived from Transformer models to play a pivotal role in facilitating implicit fine-tuning within the LLM.

By leveraging these attention-layer parameters, the LLM dynamically adjusts and refines its internal mechanisms, implicitly adapting itself to the nuances and intricacies of different downstream tasks. This implicit fine-tuning process empowers the LLM to generate more accurate and contextually relevant responses, thereby enhancing its overall performance in various application scenarios.

In summary, our experiments have revealed the crucial roles played by both the varied conformity of the DL model and the resemblance between Transformer models and the architecture of LLMs. These factors, combined with the implicit fine-tuning facilitated by attention-layer parameters, enable the LLM to excel in adapting to and fulfilling the requirements of diverse downstream tasks.
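The degree-of-fine-tuning argument above reduces to comparing two predictive distributions. One minimal way to quantify their similarity is sketched below; the histogram binning and the Jensen-Shannon measure are our illustrative choices, not a procedure prescribed by DLAP.

```python
import numpy as np


def js_divergence(p, q, eps: float = 1e-12) -> float:
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        return float(np.sum(a * np.log((a + eps) / (b + eps))))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


def distribution_gap(fine_tuned_probs, dlap_probs, bins: int = 20) -> float:
    """Histogram the per-sample positive-class probabilities of both
    systems; a small value supports the 'similar predictive
    distributions' observation above."""
    hp, _ = np.histogram(fine_tuned_probs, bins=bins, range=(0.0, 1.0))
    hq, _ = np.histogram(dlap_probs, bins=bins, range=(0.0, 1.0))
    return js_divergence(hp.astype(float), hq.astype(float))
```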
6.2. Generalization Capability of DLAP

The DLAP framework effectively stimulates LLMs to implicitly fine-tune themselves for other software development tasks. By integrating existing static tools and deep learning models, DLAP can be applied to a variety of ASAT tasks, and this adaptation simplifies the process of adopting DLAP to deal with new challenges. We introduce two scenarios that may extend the applicability of DLAP.

Automated identification of affected libraries from vulnerability data is an ASAT task that requires figuring out which libraries in software are related to each of the reported vulnerabilities in open vulnerability report sets (e.g., NVD, CVE). The task is formulated as extreme multi-label learning [15, 6]. First, DLAP constructs a sufficient vulnerability description database and combines it with the libraries known to be affected by the reported vulnerabilities as a CoT template library for affected-library identification. The known affected libraries are collected to train a DL model. Then, existing static tools (fastXML, https://github.com/fastXML/fastXML) and the DL model are used to generate preliminary results for the XML library list in the project. Finally, by combining the results as a key to query the CoT template library, CoT prompts can be built to augment the LLMs, and the identification of libraries from vulnerability data may be more accurate.

Code smell detection is an ASAT task that prevents software from accumulating technical debt. Code smell detection based on DL models is a multi-class detection task comprised of several binary classification models, each designed to detect a specific category of code smell [21, 33]. Utilizing DLAP requires the creation of a comprehensive reference library of code smells and a high-quality coding standards library. DLAP then uses the static tools (checkstyle, https://checkstyle.org) and DL models for specific detection projects. Like the process described in this paper, employing the DL model augments the LLMs with prompts for project-specific code smell detection. DLAP can likewise be utilized in other ASAT tasks that need to combine DL models with LLMs to improve the performance of LLMs on the target tasks.
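Both scenarios share the same plumbing: combine the static-tool output and the DL model output into a key, query the CoT template library, and instantiate a prompt. A minimal sketch, with every name and the template interface assumed purely for illustration:

```python
from typing import Dict, List


def build_cot_prompt(report: str,
                     static_candidates: List[str],  # e.g., fastXML output
                     dl_candidates: List[str],      # DL model output
                     templates: Dict[str, str]) -> str:
    """Key the template library by the combined candidate set and
    instantiate the matching CoT template for the LLM.  Stored templates
    are assumed to use the same {libs}/{report} placeholders."""
    candidates = sorted(set(static_candidates) | set(dl_candidates))
    key = ",".join(candidates)
    default = ("Step 1: Which of the libraries {libs} does this report "
               "describe? {report}\n"
               "Step 2: Based on Step 1, identify the affected libraries.")
    return templates.get(key, default).format(
        libs=", ".join(candidates), report=report)
```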
7. Threats to Validity

This section analyzes possible threats to validity [48] and our efforts to mitigate their impacts.

Internal Validity. The efficacy of DLAP relies on its core component, the DL models. While it is permissible for these DL models to exhibit judgment biases for input code, a completely erroneous DL model that is either irrelevant or detrimental to the task can severely undermine the performance of DLAP. Therefore, when employing DLAP, it is crucial to select DL models that are capable of addressing the specific objectives of the task to effectively augment the performance of LLMs. Besides, due to the closed-source nature of certain LLMs (GPT-3.5-turbo), their internal structures and the specific fine-tuning methods they employ remain unknown. Therefore, for our experiments, we use an open-source LLM (Llama-13b) for the comparative fine-tuning studies.

Construct Validity. The relaxed attention of the LLMs changes under the stimulation of DLAP according to Equation (5). We define this stimulation as the implicit fine-tuning of LLMs caused by DLAP to adapt to the features of the target project. Because of the limitations of observing the internal output of LLMs, we cannot strictly demonstrate that the stimulation produces gradient-descent optimization of the loss on the target classification task. Instead of a mathematical demonstration, we present our advantageous results together with intermediate outputs of the contrasted systems that exhibit fine-tuning, validating the existence of the implicit fine-tuning mechanism through experimental data. These visualized experimental results mitigate the construct validity threat of this paper to some extent.
External Validity. For the verification of DLAP, the final effect of our template needs to drive the LLM to complete the vulnerability detection, and the performance of this task is used as an evaluation to measure the effectiveness of our method. Thus, when the LLM differs from the LLM selected in this experiment, the results of using DLAP will differ as well. We identify the choice of LLM as an external validity threat to this work. Considering both cost and model performance, we chose the least expensive of the current
state-of-the-art LLMs, GPT-3.5-turbo-0125. By using the best model, we make the best use of DLAP. We provide the specific model selection, which ensures that other work can reproduce the same level of improvement when using DLAP.

8. Conclusion

In this paper, we propose DLAP, a bespoke prompting framework for ASAT tasks that achieves superior and stable performance in software vulnerability detection tasks, with results easily understandable to developers. Experiments show the effectiveness of augmenting LLMs with DL models to stimulate adaptive implicit fine-tuning. This enables LLMs to exceed both state-of-the-art DL solutions and LLMs with alternative prompting frameworks in vulnerability detection. Through the experiments, we also find that the pre-trained knowledge of LLMs combines the outputs of all parts of DLAP to achieve good performance. In the future, we will utilize DLAP in more ASAT tasks to explore how DLAP generalizes to other tasks.
References

[1] Arakelyan, S., Das, R., Mao, Y., Ren, X., 2023. Exploring distributional shifts in large language models for code analysis, in: Proceedings of the 22nd Conference on Empirical Methods in Natural Language Processing (EMNLP), ACL. pp. 16298–16314.
[2] Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A., Goldie, A., Mirhoseini, A., McKinnon, C., et al., 2022. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073.
[3] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al., 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33, 1877–1901.
[4] Chakraborty, S., Krishna, R., Ding, Y., Ray, B., 2022. Deep learning based vulnerability detection: Are we there yet. IEEE Transactions on Software Engineering 48, 3280–3296.
[5] Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H.P.d.O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., et al., 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
[6] Chen, Y., Santosa, A.E., Sharma, A., Lo, D., 2020. Automated identification of libraries from vulnerability data, in: Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering: Software Engineering in Practice (SEIP), ACM. pp. 90–99.
[7] Cheshkov, A., Zadorozhny, P., Levichev, R., 2023. Evaluation of ChatGPT model for vulnerability detection. arXiv preprint arXiv:2304.07232.
[8] Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H.W., Sutton, C., Gehrmann, S., et al., 2023. PaLM: Scaling language modeling with pathways. Journal of Machine Learning Research 24, 1–113.
[9] Christakis, M., Bird, C., 2016. What developers want and need from program analysis: An empirical study, in: Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering (ASE), ACM. pp. 332–343.
[10] Croft, R., Babar, M.A., Kholoosi, M.M., 2023. Data quality for software vulnerability datasets, in: Proceedings of the 45th IEEE/ACM International Conference on Software Engineering (ICSE), IEEE. pp. 121–133.
[11] Dai, D., Sun, Y., Dong, L., Hao, Y., Ma, S., Sui, Z., Wei, F., 2023. Why can GPT learn in-context? Language models implicitly perform gradient descent as meta-optimizers, in: Proceedings of the 2023 ICLR Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo), Association for Computational Linguistics. pp. 4005–4019.
[12] Fan, J., Li, Y., Wang, S., Nguyen, T.N., 2020. A C/C++ code vulnerability dataset with code changes and CVE summaries, in: Proceedings of the 17th IEEE/ACM International Conference on Mining Software Repositories (MSR), ACM. pp. 508–512.
[13] Fu, M., Tantithamthavorn, C., 2022. LineVul: A transformer-based line-level vulnerability prediction, in: Proceedings of the 19th IEEE/ACM International Conference on Mining Software Repositories (MSR), ACM. pp. 608–620.
[14] Gonzalez, D., Zimmermann, T., Godefroid, P., Schäfer, M., 2021. Anomalicious: Automated detection of anomalous and potentially malicious commits on GitHub, in: Proceedings of the 43rd IEEE/ACM International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), IEEE. pp. 258–267.
[15] Haryono, S.A., Kang, H.J., Sharma, A., Sharma, A., Santosa, A., Yi, A.M., Lo, D., 2022. Automated identification of libraries from vulnerability data: Can we do better?, in: Proceedings of the 30th IEEE/ACM International Conference on Program Comprehension (ICPC), ACM. pp. 178–189.
[16] Hsieh, Y.G., Niu, G., Sugiyama, M., 2019. Classification from positive, unlabeled and biased negative data, in: Proceedings of the 36th ACM International Conference on Machine Learning (ICML), PMLR. pp. 2820–2829.
[17] Hu, E.J., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., Chen, W., et al., 2022. LoRA: Low-rank adaptation of large language models, in: International Conference on Learning Representations (ICLR).
[18] Jin, M., Shahriar, S., Tufano, M., Shi, X., Lu, S., Sundaresan, N., Svyatkovskiy, A., 2023. InferFix: End-to-end program repair with LLMs. arXiv preprint arXiv:2303.07263.
[19] Kang, H.J., Aw, K.L., Lo, D., 2022. Detecting false alarms from automatic static analysis tools: How far are we?, in: Proceedings of the 44th IEEE/ACM International Conference on Software Engineering (ICSE), ACM. pp. 698–709.
[20] Katsadouros, E., Patrikakis, C.Z., Hurlburt, G., 2023. Can large language models better predict software vulnerability? IT Professional 25, 4–8.
[21] Lewowski, T., Madeyski, L., 2022. How far are we from reproducible research on code smell detection? A systematic literature review. Information and Software Technology 144, 106783.
[22] Li, J., Li, G., Li, Y., Jin, Z., 2023. Enabling programming thinking in large language models toward code generation. arXiv preprint arXiv:2305.06599.
[23] Li, X., Chang, X., Board, J.A., Trivedi, K.S., 2017. A novel ap-
[34] … Engineering Workshops (ISSREW), IEEE. pp. 112–119.
[35] Shi, F., Chen, X., Misra, K., Scales, N., Dohan, D., Chi, E.H., Schärli, N., Zhou, D., 2023. Large language models can be easily distracted by irrelevant context, in: Proceedings of the 40th ACM International Conference on Machine Learning (ICML), PMLR. pp. 31210–31227.
[36] Steenhoek, B., Rahman, M.M., Jiles, R., Le, W., 2023. An empirical study of deep learning models for vulnerability detection, in: Proceedings of the 45th IEEE/ACM International Conference on Software Engineering (ICSE), pp. 2237–2248.
[37] Telang, R., Wattal, S., 2007. An empirical analysis of the impact of software vulnerability announcements on firm stock price. IEEE Transactions on Software Engineering 33, 544–557.
[38] Thapa, C., Jang, S.I., Ahmed, M.E., Camtepe, S., Pieprzyk, J., Nepal, S., 2022. Transformer-based language models for software vulnerability detection, in: Proceedings of the 38th ACM Annual Computer Security Applications Conference (ACSAC), ACM. pp. 481–496.
[39] Tomas, N., Li, J., Huang, H., 2019. An empirical study on culture, automation, measurement, and sharing of DevSecOps, in: 2019 International Conference on Cyber Security and Protection of Digital Ser-