instances for testing. Table 1 shows the statistics for the fine-tuning and testing sets. We use this dataset for the subsequent experiments.

    Statistics        Fine-tuning   Testing
    #Instances        8250          750
    Average #tokens   271.80        268.12
    #Vulnerable       4110          390
    #Non-vulnerable   4140          360
    #C                2750          250
    #PHP              2750          250
    #Python           2750          250

Table 1: Dataset statistics used in our study. "#" represents "number of."

4 Model Selection

Before conducting the experiments to answer the RQs, we perform a preliminary experiment to select the model with a strong base performance for the vulnerability detection task. We explain the steps as follows. We evaluate each candidate model using accuracy, i.e., the number of correct predictions divided by the total number of predictions made. This metric is chosen because it reflects the model's ability to accurately predict both vulnerable and non-vulnerable instances. Ideally, the base model should demonstrate high performance in both classes. Note that the testing set has been balanced to contain an equal proportion of vulnerable and non-vulnerable instances, ensuring that accuracy reflects an unbiased assessment of the model's predictive capabilities.

4.3 Preliminary Experiment Result

    Model       Accuracy
    CodeLlama   0.535
    CodeT5+     0.520
    CodeT5      0.480
    Llama 2     0.511
    Mistral     0.531
    Yi          0.480

Table 2: The preliminary experiment results. CodeLlama performs the best among the compared models.

3 https://github.com/01-ai/Yi

Table 2 reveals the results of the preliminary experiments. CodeLlama performs the best among the compared models, with an accuracy of 0.535. Therefore, we select CodeLlama as the base model to answer our Research Questions (RQs) in the subsequent experiments.

5 Study Experimental Setting

This section discusses the pipeline in the fine-tuning and inference stages. It then elaborates the implementation details, including the model implementation, the training setting, and how we generate the natural language instructions.

5.1 Fine-tuning

Let c represent the input code snippet and v its corresponding label in the fine-tuning process. A transformation is applied using a template function Q, which takes c and its label v as inputs and produces an output s = Q(c, v). The function Q is responsible for formatting c and v into a specific template. The choice of template depends on whether the experiment incorporates natural language instructions. The details of the templates are depicted in Figure 1.

with instruction:
    <s>[INST] {insert instruction} [/INST]
    ### Code: {insert code}
    ### Answer: {insert label}</s>

without instruction:
    ### Code: {insert code}
    ### Answer: {insert label}</s>

Figure 1: The distinct formats for the template function Q(c, v), showing the variations with and without the inclusion of natural language instructions.

Given a sequence s, a function F generates a set of input-label pairs, F(s) = {((s_1, ..., s_j), s_{j+1}) | 1 <= j < n}, where each pair consists of a subsequence of s and the following token as the label. These pairs are then processed in batches, where a tokenizer function τ converts each item in the batch into a sequence of token ids t = τ(w). An embedding function ε maps each token id t_i ∈ t to a vector, yielding t' = ε(t). The transformer architecture (either encoder or decoder, depending on the model), represented as a function φ, processes t' to produce a probability distribution over the vocabulary for each token, denoted as P = φ(t'). The prediction of the next token is determined by t̂_{i+1} = argmax(P). A loss function L, i.e., cross entropy (Cox, 1958), takes as input the predicted token t̂_{i+1} and the ground-truth next token t_{i+1}, yielding the loss value L(t̂_{i+1}, t_{i+1}) used to optimize the model.

5.2 Inference

Given a code snippet c, the transformation function Q formats c into the same template, resulting in w = Q(c). This process differs from the training stage in that the label v is not included in the input to Q. Next, the tokenizer function processes w to produce a sequence of token ids t = τ(w). The embedding layer converts these token ids into vectors t' = ε(t). The transformer block takes these vectors and computes a probability distribution P = φ(t') over the entire vocabulary. Notably, the token with the highest probability, argmax(P), may not match the target labels 'True' or 'False'. To address this, we extract only the probabilities of the 'True' and 'False' tokens, i.e., P_True and P_False. The final output is the token with the greater probability, determined by max(P_True, P_False).

5.3 Evaluation Metrics

We choose the F1 score as the evaluation metric because it has been widely used to evaluate vulnerability detection tools (Fu and Tantithamthavorn, 2022; Steenhoek et al., 2023, 2022). The F1 score considers both the precision and the recall of the test to compute the score.
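The template function Q and the constrained label extraction of Sections 5.1 and 5.2 can be sketched as follows. This is a minimal illustration rather than the authors' code: the function names are ours, and `token_probs` stands in for the next-token distribution P that a real model would return.

```python
def build_prompt(code, label=None, instruction=None):
    """Template function Q from Figure 1 (hypothetical helper).

    With `label` set, it produces a fine-tuning sample; without it,
    an inference prompt that the model completes with the label."""
    parts = []
    if instruction is not None:
        parts.append(f"<s>[INST] {instruction} [/INST]")
    parts.append(f"### Code: {code}")
    if label is None:
        parts.append("### Answer:")  # inference: leave the label open
    else:
        parts.append(f"### Answer: {label}</s>")
    return "\n".join(parts)


def extract_label(token_probs):
    """Constrained decoding from Section 5.2: ignore every token except
    'True' and 'False' and return the more probable of the two."""
    p_true = token_probs.get("True", 0.0)
    p_false = token_probs.get("False", 0.0)
    return "True" if p_true >= p_false else "False"
```

Even if an unrelated token such as "the" has the highest overall probability, `extract_label({"True": 0.02, "False": 0.05, "the": 0.40})` still returns "False", mirroring the max(P_True, P_False) rule.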
Precision is the number of correct positive results divided by the number of all positive results, and recall is the number of correct positive results divided by the number of positives that should have been returned. Here, positive results refer to vulnerable code. The F1 score is the harmonic mean of precision and recall, giving both measures equal weight.

5.4 Implementation Details

5.4.1 Models and Training Setting

For all models, we use the official implementations from HuggingFace. The links to the models can be found in the Appendix.

We fine-tune the models using the LoRA technique (Hu et al., 2022) with r = 16 and α = 32, where r is chosen as the largest value that does not trigger an out-of-memory error and α is double the value of r, as recommended. To accommodate the models within our hardware limitations, we implement several optimizations: loading the models with 4-bit precision, limiting the maximum context length to 1024 tokens, utilizing FlashAttention-2, and applying gradient checkpointing. The learning rate is set to 3 × 10^-4, and the batch size is chosen as the largest multiple of 2 that fits our machine, i.e., 8. We employ a cosine learning rate scheduler with 100 warmup steps to mitigate the unstable learning that may result from arbitrary initialization of optimizer states. We conduct the fine-tuning process on a machine with Linux 22.04.2 and an RTX 3090 GPU with 24GB of memory. We perform all experiments three times with three different random seeds to avoid a lucky trial.

5.4.2 Generating and Choosing Instructions

Initially, we create a set of instructions manually. These initial instructions are then refined using GPT-4, yielding 15 refined versions. From these, we select 5 instructions for our final instruction pool, based on evaluating their quality and diversity through human judgment. The rationale for limiting the pool to 5 instructions is to prevent an excessive number that might impede the model's learning ability by effectively treating the instructions as noise. During the experiments that use natural language instructions (RQ3), the template function Q chooses one instruction from the instruction pool using random sampling.

6 Experimental Results

Figure 2 illustrates the model's performance in the intra-lingual and cross-lingual settings. In the intra-lingual context, the C→C model (i.e., the model fine-tuned and tested on C) exhibits the highest performance, closely followed by the PHP→PHP and Python→Python models. When comparing intra-lingual to cross-lingual performance for each programming language, the models generally show better results in the intra-lingual setting. For example, the C→C model outperforms the C→PHP and C→Python models. This pattern is consistent when the models are tested on the PHP and Python languages. A higher performance in the intra-lingual setting is expected, given the syntactical similarities between the fine-tuning and testing datasets.

Figure 2: The F1 scores for models fine-tuned on C, PHP, and Python, across different test languages. Two sets of bars are displayed for each test language, representing model performance with (grey) and without (black) natural language instructions.

However, an interesting observation is the relatively small performance gap between the intra-lingual and cross-lingual settings. For example, the PHP→PHP model exhibits performance similar to the PHP→C and PHP→Python models. A similar trend is also observed for the C→C model against the C→PHP and C→Python models, and for the Python→Python model against the Python→C and Python→PHP models.

Answer to RQ1: The models perform better in the intra-lingual than in the cross-lingual setting, as shown by Figure 2. However, while cross-lingual performance is generally lower, the gap between intra-lingual and cross-lingual results is notably small.

Figure 2 also presents the evaluation results for the intra-lingual and cross-lingual settings, comparing scenarios with and without natural language instructions. A significant decline in the model's performance is evident when the inputs include natural language instructions, as indicated by the lower F1 scores of the grey bars compared to the black bars in Figure 2. For instance, the model fine-tuned and tested on C without instructions (C→C) yields better performance than the model fine-tuned and tested on C with instructions (C_instruct→C_instruct). The same result is also observed for the Python→Python model against the Python_instruct→Python_instruct model and for the PHP→PHP model against the PHP_instruct→PHP_instruct model. The cross-lingual setting exhibits the same trend. This decrease in performance might be due to the natural language instructions inadvertently diverting the model's attention from important syntactical and semantic features of the code, which are essential to predict the existence of vulnerabilities.

Answer to RQ2: The models without natural language instructions outperform those with instructions across the intra-lingual and cross-lingual settings. This is exemplified by higher F1 scores in Figure 2 in scenarios such as C→C compared to C_instruct→C_instruct, and similarly for the Python and PHP models.

Figure 3 demonstrates the impact of language diversity in the fine-tuning set on the model's performance. For the model tested on the C language (the leftmost chart of Figure 3), incorporating additional languages when fine-tuning without natural language instructions leads to a decrease in the model's performance. This finding aligns with our expectations. Moreover, the results show that adding relevant languages when fine-tuning without natural language instructions not only fails to enhance performance but yields a significant decline, as indicated by the higher black bars in Figure 3 when the model is fine-tuned on a single language rather than multiple languages. For example, in the middle chart of Figure 3, the C-PHP→PHP model exhibits poorer performance than the C→PHP model. The same trend is also observed for the C-Python→Python model against the C→Python model. Similarly, including all languages (C, PHP, and Python) when fine-tuning without natural language instructions also does not help the model perform better than fine-tuning on a single language.

Figure 3: Comparative performance of models on different testing sets. These grouped bar charts illustrate the F1 scores for models fine-tuned on various language combinations (C, C-PHP, C-Python, C-Python-PHP) and tested on C, PHP, and Python. Each chart contrasts the model's performance with (grey bars) and without (black bars) natural language instructions.

Conversely, a different pattern emerges when fine-tuning with natural language instructions on more programming languages. Adding more programming languages can improve the model's performance across all test sets. This improvement is illustrated by the higher grey bars in Figure 3 when the model is fine-tuned on multiple languages than when it only uses a single language. For instance, the C-Python-PHP→C, C-Python→C, and C-PHP→C models demonstrate better performance than the C→C model, as indicated by the leftmost chart of Figure 3. The PHP and Python test languages exhibit the same trend.

Answer to RQ3: Adding multiple programming languages to the fine-tuning set without natural language instructions reduces model performance, as shown by the higher black bars in Figure 3 for models fine-tuned on a single language compared to those fine-tuned on multiple languages. Conversely, incorporating natural language instructions together with multiple languages in the fine-tuning process improves the model's performance, as indicated by the higher grey bars in the charts.
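The F1 score defined in Section 5.3 and the intra-/cross-lingual grid underlying Figure 2 can be sketched as below. This is our own illustration, not the study's evaluation code; `models` and `test_sets` are hypothetical stand-ins for the fine-tuned predictors and the per-language test data.

```python
from itertools import product


def f1_score(y_true, y_pred, positive="True"):
    """F1 as in Section 5.3: harmonic mean of precision and recall,
    where the positive class is vulnerable code ('True')."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


def evaluate_grid(models, test_sets):
    """Score every fine-tuned model on every test language. Cells with
    train language == test language are the intra-lingual results; the
    rest are cross-lingual (the comparison plotted in Figure 2).

    `models` maps a train language to a predict(code) -> label function;
    `test_sets` maps a test language to (snippets, labels)."""
    grid = {}
    for train_lang, test_lang in product(models, test_sets):
        snippets, labels = test_sets[test_lang]
        preds = [models[train_lang](c) for c in snippets]
        grid[(train_lang, test_lang)] = f1_score(labels, preds)
    return grid
```

A model such as C→PHP is then simply the cell `grid[("C", "PHP")]`, which makes the intra- versus cross-lingual gap a comparison along each row of the grid.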
7 Discussion

This section discusses the manual case study, lessons learned, and threats to the validity of this study.

7.1 Can Language-specific Instructions Improve the Model's Performance?

One hypothesis for the model's poor performance when fine-tuned using natural language instructions is that the instruction pool lacks language-specific cues, i.e., the same instructions are used across all languages. To verify this, we conduct a check by modifying the experimental setup from RQ3. We redo the experiments by explicitly stating the programming language of the code snippet in the instruction, as illustrated in the updated template shown in Figure 4. We conduct the new experiments three times using three different random seeds.

    The following code is written in {insert language}
    <s>[INST] {insert instruction} [/INST]
    ### Code: {insert code}
    ### Answer: {insert label}</s>

Figure 4: The updated template that incorporates language information in the instruction.

Table 3 presents a comparison between the new results obtained using the updated template and the previous results from Figure 3. These results demonstrate a performance decline when employing the updated template. Specifically, the F1 scores for models tested on the C language with the instructions from Figure 4 are lower than those achieved with the original instruction format in Figure 1. A similar trend is observed in the tests involving the PHP and Python languages. This pattern suggests that adding more instructions might detrimentally affect model performance. Furthermore, the significant performance drop highlighted in Table 3 indicates a notable sensitivity of the model to the instructions.

    Fine-tuning set   F1_new   F1_ori
    C                 0.054    0.156
    C-PHP             0.081    0.194
    C-Python          0.057    0.283
    C-Python-PHP      0.045    0.411

Table 3: The comparison results between the updated and old templates when tested on the C language. F1_ori is the F1 score using the instruction in Figure 1, while F1_new is the F1 score using the instruction in Figure 4.

7.2 Is the Model Fine-tuned on Language X Able to Predict a Similar Vulnerability in Language Y?

We investigate whether models fine-tuned on one programming language can predict vulnerabilities in another by comparing the intra-lingual and cross-lingual settings within the same testing language. Specifically, we choose the C→C model without instructions for the intra-lingual comparison due to its superior performance, as shown in Figure 2. For the cross-lingual comparison, we analyze models trained on a single language but tested on C, namely the PHP→C and Python→C models, both also fine-tuned without instructions. We compare the top-3 correct predictions across these models, which are consistent across all of them: CWE-125 (out-of-bounds read), CWE-119 (buffer overflow), and CWE-20 (improper input validation), as detailed in Table 4.

    CWE-ID    C→C   PHP→C   Python→C
    CWE-125   36    36      19
    CWE-119   23    23      7
    CWE-20    20    20      6

Table 4: Number of correct instances for the top-3 predicted vulnerabilities for the C→C, PHP→C, and Python→C models.

Further analysis is conducted by examining the fine-tuning data for occurrences of the CWEs listed in Table 4, as summarized in Table 5.

    CWE-ID    C     PHP   Python
    CWE-125   353   1     68
    CWE-119   274   0     26
    CWE-20    230   136   153

Table 5: Number of instances for each CWE-ID in the fine-tuning dataset.

The data reveals that the models can recognize similar vulnerabilities even when fine-tuned on different languages. For example, the Python→C model, fine-tuned with 68 instances of CWE-125 in Python, successfully detects 19 instances in C, as shown in Table 4. Notably, the PHP→C model, despite being fine-tuned with only one instance of CWE-125, matches the C→C model's performance in correctly detecting 36 vulnerability instances in the C language. Moreover, even with no instances of CWE-119 in its fine-tuning data, the PHP→C model correctly predicts 23 instances when evaluated on the C language.
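Per-CWE tallies like those in Tables 4 and 5 amount to a few lines of bookkeeping. The sketch below is ours: the record layout (`cwe`, `label`, and `prediction` fields) is an assumption for illustration, not the paper's actual data format.

```python
from collections import Counter


def correct_by_cwe(records):
    """Count correct predictions of vulnerable instances per CWE-ID,
    in the style of Table 4. Only vulnerable instances ('True') whose
    label was predicted correctly are tallied."""
    tally = Counter()
    for r in records:
        if r["label"] == "True" and r["prediction"] == "True":
            tally[r["cwe"]] += 1
    return tally


# Hypothetical toy records, not data from the study.
records = [
    {"cwe": "CWE-125", "label": "True", "prediction": "True"},
    {"cwe": "CWE-125", "label": "True", "prediction": "True"},
    {"cwe": "CWE-125", "label": "True", "prediction": "False"},
    {"cwe": "CWE-119", "label": "True", "prediction": "True"},
]
print(correct_by_cwe(records).most_common())  # [('CWE-125', 2), ('CWE-119', 1)]
```

Dropping the `prediction` check and counting over the fine-tuning set instead yields the occurrence counts of Table 5.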
We conduct a manual check and find that CWE-119 and CWE-125 are closely related, i.e., a buffer overflow can potentially cause an out-of-bounds access. This could suggest a correlation between CWE-119 and CWE-125, where learning to detect one may improve the detection of the other.

7.3 Lessons Learned

This study yields several insightful lessons, which are outlined as follows:

Instruction-based fine-tuning may not always enhance performance. Our findings suggest that fine-tuning language models using natural language instructions does not invariably lead to improved performance. This observation contrasts with prior studies (Wei et al., 2022a; Honovich et al., 2023), which highlighted the benefits of instruction-based fine-tuning. Consequently, we encourage the research community to further investigate this area to gain a deeper understanding of instruction-based fine-tuning.

Single-language fine-tuning outperforms multi-language fine-tuning. Our findings reveal that training language models with data from multiple languages can significantly diminish their performance. Therefore, it appears more effective to fine-tune using data from a single language. This can be attributed to several factors, such as syntax and grammatical complexity, limited model capacity, and cross-lingual interference.

Predicting similar vulnerabilities across languages. Our results indicate the feasibility of accurately predicting similar vulnerabilities in different programming languages. This could be particularly advantageous when dealing with low-resource languages. An alternative strategy might involve identifying languages with a diverse range of vulnerabilities and using these as the basis for fine-tuning, rather than relying solely on data from low-resource languages.

7.4 Threats to Validity

Internal Validity: The choice of the models can significantly influence our study. To minimize such threats, we systematically select the models through initial filtering and preliminary experiments. Additionally, we employ the official implementations available on HuggingFace to ensure consistency. We also conduct the experiments three times, which helps to avoid bias that might result from a particular trial favoring specific settings.

External Validity: The selection of our dataset can impact the study. To minimize this threat, we choose the dataset systematically. Furthermore, the class imbalance between vulnerable and non-vulnerable code is a potential issue. We address this by carefully preprocessing the dataset to balance the number of vulnerable and non-vulnerable code snippets across the different programming languages. We also ensure equal representation of each programming language in the dataset. However, our study is limited to three languages, C, PHP, and Python, considering the time and resource limitations. We nevertheless consider their syntax diversity and programming use cases to make sure that the chosen languages are representative. Moreover, the threat of randomness is mitigated by setting a fixed random seed for each experiment trial.

Construct Validity: The choice of evaluation metrics can potentially threaten the validity of our study, as inappropriate metrics may not accurately capture the intended outcomes. To minimize this threat, we conduct a manual case study alongside the automated metrics in Section 7.

8 Conclusion and Future Work

This study scrutinizes model generalization beyond the languages in the fine-tuning data and the role of natural language instructions in improving generalization performance in the vulnerability detection task. We have conducted experiments using recent models to detect real-world vulnerabilities. Our study yields three insights. First, the models perform more effectively in scenarios where the test language is the same as in the training data (intra-lingual) compared to different languages (cross-lingual), although the difference in performance is not substantial. Second, models that do not use natural language instructions outperform those that do, in both the intra-lingual and cross-lingual settings. Third, when multiple programming languages are added to the training set without natural language instructions, there is a decline in model performance; conversely, when natural language instructions are combined with multiple programming languages in the fine-tuning process, the models show better performance.

Future Work. Future research can further investigate the generalization capabilities of models in identifying different types of vulnerabilities. Our initial case study indicates that the model can generalize from one vulnerability type, such as a buffer overflow, to a closely related one, like an out-of-bounds read. However, comprehensive studies are required to confirm its applicability to a broader range of vulnerabilities. Additionally, this work has primarily focused on straightforward fine-tuning instructions. Exploring advanced techniques like Chain-of-Thought (Wei et al., 2022b), instruction with reasoning (Mitra et al., 2023), or instruction with demonstration (Shao et al., 2023), which have shown promise in other tasks, could offer valuable insights in the context of vulnerability detection. Finally, expanding the experiments to include a diverse array of models and programming languages could provide a more comprehensive understanding of the models' effectiveness across different scenarios.

References

Guru Prasad Bhandari, Amara Naseer, and Leon Moonen. 2021. CVEfixes: automated collection of vulnerabilities and their fixes from open-source software. In PROMISE, pages 30–39. ACM.

Saikat Chakraborty, Rahul Krishna, Yangruibo Ding, and Baishakhi Ray. 2022. Deep learning based vulnerability detection: Are we there yet? IEEE Trans. Software Eng., 48(9):3280–3296.

Yizheng Chen, Zhoujie Ding, Lamya Alowain, Xinyun Chen, and David A. Wagner. 2023. DiverseVul: A new vulnerable source code dataset for deep learning based vulnerability detection. In RAID, pages 654–668. ACM.

David R. Cox. 1958. The regression analysis of binary sequences. Journal of the Royal Statistical Society Series B: Statistical Methodology, 20(2):215–232.

Tri Dao. 2023. FlashAttention-2: Faster attention with better parallelism and work partitioning. CoRR, abs/2307.08691.

Jiahao Fan, Yi Li, Shaohua Wang, and Tien N. Nguyen. 2020. A C/C++ code vulnerability dataset with code changes and CVE summaries. In MSR, pages 508–512. ACM.

Michael Fu and Chakkrit Tantithamthavorn. 2022. LineVul: A transformer-based line-level vulnerability prediction. In MSR, pages 608–620. ACM.

Michael Fu, Chakkrit Tantithamthavorn, Trung Le, Van Nguyen, and Dinh Q. Phung. 2022. VulRepair: a T5-based automated software vulnerability repair. In ESEC/SIGSOFT FSE, pages 935–947. ACM.

Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. 2023. Unnatural instructions: Tuning language models with (almost) no human labor. In ACL (1), pages 14409–14428. Association for Computational Linguistics.

Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In ICLR. OpenReview.net.

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B. CoRR, abs/2310.06825.

Justin M. Johnson and Taghi M. Khoshgoftaar. 2019. Survey on deep learning with class imbalance. J. Big Data, 6:27.

Zhen Li, Deqing Zou, Shouhuai Xu, Xinyu Ou, Hai Jin, Sujuan Wang, Zhijun Deng, and Yuyi Zhong. 2018. VulDeePecker: A deep learning-based system for vulnerability detection. In NDSS. The Internet Society.

Bin Liu, Bin Liu, Hongxia Jin, and Ramesh Govindan. 2015. Efficient privilege de-escalation for ad libraries in mobile apps. In MobiSys, pages 89–103. ACM.

Ziang Ma, Haoyu Wang, Yao Guo, and Xiangqun Chen. 2016. LibRadar: fast and accurate detection of third-party libraries in Android apps. In ICSE (Companion Volume), pages 653–656. ACM.

Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2022. Cross-task generalization via natural language crowdsourcing instructions. In ACL (1), pages 3470–3487. Association for Computational Linguistics.

Arindam Mitra, Luciano Del Corro, Shweti Mahajan, Andrés Codas, Clarisse Simões, Sahaj Agrawal, Xuxi Chen, Anastasia Razdaibiedina, Erik Jones, Kriti Aggarwal, Hamid Palangi, Guoqing Zheng, Corby Rosset, Hamed Khanpour, and Ahmed Awadallah. 2023. Orca 2: Teaching small language models how to reason. CoRR, abs/2311.11045.

Annamalai Narayanan, Lihui Chen, and Chee Keong Chan. 2014. AdDetect: Automated detection of Android ad libraries using semantic analysis. In ISSNIP, pages 1–6. IEEE.

Georgios Nikitopoulos, Konstantina Dritsa, Panos Louridas, and Dimitris Mitropoulos. 2021. CrossVul: a cross-language vulnerability dataset with commit data. In ESEC/SIGSOFT FSE, pages 1565–1569. ACM.

Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton-Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. 2023. Code Llama: Open foundation models for code. CoRR, abs/2308.12950.

Rebecca L. Russell, Louis Y. Kim, Lei H. Hamilton, Tomo Lazovich, Jacob Harer, Onur Ozdemir, Paul M. Ellingwood, and Marc W. McConley. 2018. Automated vulnerability detection in source code using deep representation learning. In ICMLA, pages 757–762. IEEE.

Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. 2023. Synthetic prompting: Generating chain-of-thought demonstrations for large language models. In ICML, volume 202 of Proceedings of Machine Learning Research, pages 30706–30775. PMLR.

Benjamin Steenhoek, Wei Le, and Hongyang Gao. 2022. DeepDFA: Dataflow analysis-guided efficient graph learning for vulnerability detection. CoRR, abs/2212.08108.

Benjamin Steenhoek, Md Mahbubur Rahman, Richard Jiles, and Wei Le. 2023. An empirical study of deep learning models for vulnerability detection. In ICSE, pages 2237–2248. IEEE.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288.

Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023a. Self-Instruct: Aligning language models with self-generated instructions. In ACL (1), pages 13484–13508. Association for Computational Linguistics.

Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi D. Q. Bui, Junnan Li, and Steven C. H. Hoi. 2023b. CodeT5+: Open code large language models for code understanding and generation. CoRR, abs/2305.07922.

Yue Wang, Weishi Wang, Shafiq R. Joty, and Steven C. H. Hoi. 2021. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. In EMNLP (1), pages 8696–8708. Association for Computational Linguistics.

Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022a. Finetuned language models are zero-shot learners. In ICLR. OpenReview.net.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022b. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS.

Xian Zhan, Lingling Fan, Sen Chen, Feng Wu, Tianming Liu, Xiapu Luo, and Yang Liu. 2021. ATVHunter: reliable version detection of third-party libraries for vulnerability identification in Android applications. In ICSE, pages 1695–1707. IEEE.

Jiexin Zhang, Alastair R. Beresford, and Stephan A. Kollmann. 2019. LibID: reliable identification of obfuscated third-party Android libraries. In ISSTA, pages 55–65. ACM.

Yuan Zhang, Jiarun Dai, Xiaohan Zhang, Sirong Huang, Zhemin Yang, Min Yang, and Hao Chen. 2018. Detecting third-party libraries in Android applications with high precision and recall. In SANER, pages 141–152. IEEE Computer Society.

Yunhui Zheng, Saurabh Pujar, Burn L. Lewis, Luca Buratti, Edward A. Epstein, Bo Yang, Jim Laredo, Alessandro Morari, and Zhong Su. 2021. D2A: A dataset built for AI-based vulnerability detection methods using differential analysis. In ICSE (SEIP), pages 111–120. IEEE.

Yaqin Zhou, Shangqing Liu, Jing Kai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. In NeurIPS, pages 10197–10207.
arXiv:2401.08131

Game Rewards Vulnerabilities: Software Vulnerability Detection with Zero-Sum Game and Prototype Learning

Xin-Cheng Wen†, Cuiyun Gao*†, Xinchen Wang†, Ruiqi Wang†, Tao Zhang§, and Qing Liao†
†Harbin Institute of Technology, Shenzhen, China
§Macau University of Science and Technology, Macau, China
xiamenwxc@foxmail.com, gaocuiyun@hit.edu.cn, {200111115, 200111606}@stu.hit.edu.cn, tazhang@must.edu.mo, liaoqing@hit.edu.cn

Abstract—Recent years have witnessed a growing focus on automated software vulnerability detection. Notably, deep learning (DL)-based methods, which employ source code for the implicit acquisition of vulnerability patterns, have demonstrated superior performance compared to other approaches. However, DL-based approaches still struggle to capture the vulnerability-related information in a whole code snippet, since the vulnerable parts usually account for only a small proportion. As evidenced by our experiments, these approaches tend to excessively emphasize semantic information, potentially leading to limited vulnerability detection performance in practical scenarios. First, they cannot well distinguish between the code snippets before (i.e., vulnerable code) and after (i.e., non-vulnerable code) developers' fixes, due to the minimal code changes. Besides, substituting user-defined identifiers with placeholders (e.g., "VAR1" and "FUN1") results in obvious performance degradation of up to 14.53% with respect to the F1 score. To mitigate these issues, we propose to leverage the vulnerable and corresponding fixed code snippets, in which the minimal changes can provide hints about semantic-agnostic features for vulnerability detection.

In this paper, we propose a software vulneRability dEteCtion framework with zerO-sum game and prototype learNing, named RECON.
In RECON, we propose a zero-sum game construction module. Distinguishing the vulnerable code from the corresponding fixed code is regarded as one player (i.e., the Calibrator), while conventional vulnerability detection is another player (i.e., the Detector) in the zero-sum game. The goal is to capture the semantic-agnostic features learned by the first player to enhance the second player's performance in vulnerability detection. To maintain a relative equilibrium between the different players while learning the vulnerability patterns, we also propose a class-level prototype learning module for capturing representative vulnerability patterns. The prototype is shared between the Detector and the Calibrator, which capture vulnerability patterns simultaneously. In addition, to ensure the stability of the training process, we design a balance-gap-based training strategy for the zero-sum game. Experiments on the public benchmark dataset show that RECON outperforms the state-of-the-art baseline by 6.29% in F1 score. It also improves on the best baseline by 3.63% with respect to accuracy in distinguishing between the vulnerable and corresponding fixed code, and by 5.75% in F1 score in the identifier substitution setting.

Index Terms—Software Vulnerability; Deep Learning; Prototype Learning

* Corresponding author.

1 INTRODUCTION

Software vulnerabilities are security issues that pose substantial security threats and can potentially cause significant disruptions within software systems [3]. These vulnerabilities represent exploitable weaknesses that, if leveraged by attackers, can lead to breaches or violations of systems' security protocols, such as cross-site scripting (XSS) [4] or SQL injection [5], commonly encountered in web applications. The presence of these vulnerabilities can lead to severe economic consequences. For instance, in 2023, the Clop gang amassed more than $75 million through MOVEit extortion attacks [6]. Considering the severity of software vulnerabilities, it is important to develop precise and automated techniques for vulnerability detection.

In recent years, with the increase in the number of software vulnerabilities [7], more than 20,000 vulnerabilities have been published each year [8] in the National Vulnerability Database (NVD) [9]. As a result, more and more researchers have focused on automated methods for software vulnerability detection. We can broadly categorize the existing vulnerability detection approaches into two primary domains: Program Analysis (PA)-based [10], [11], [12], [13] and learning-based methodologies [14], [15], [16], [17], [18]. PA-based approaches predominantly focus on predefined patterns and often rely on human experts to conduct code analysis [19]. These approaches primarily encompass static analysis [20], [21], dynamic program analysis [22], [23], and symbolic execution [24], [25]. However, these methods struggle to detect diverse vulnerabilities [18].

Conversely, learning-based approaches exhibit greater versatility in detecting various types of vulnerabilities. These approaches typically employ source code, or program structures derived from the source code, as their input, enabling them to implicitly learn the patterns of vulnerabilities within code snippets. For example, ICVH [26] leverages bidirectional Recurrent Neural Networks (RNNs) [27] with an information-theoretic approach for vulnerability detection. Zhou et al. [28] propose Devign, which combines the Abstract Syntax Tree (AST) [29], Control Flow Graph (CFG) [30], Data Flow Graph (DFG) [31], and Natural Code Sequence (NCS) into a joint graph and uses Gated Graph Neural Networks (GGNNs) [32] to learn the graph representations. CodeBERT [33] is a Transformer-based pre-trained model and is applied to vulnerability detection via fine-tuning [34]. Although these learning-based methods have made progress in software vulnerability detection, they excessively emphasize semantic information and show limited performance in learning the vulnerability patterns, as evidenced by our experiments.

    1   static void add_bytes_c(uint8_t *dst, uint8_t *src, int w){
    2       long i;
    3 -     for(i=0; i<=w-sizeof(long); i+=sizeof(long)){
    3 +     for(i=0; i<=w-(int)sizeof(long); i+=sizeof(long)){
    4           long a = *(long*)(src+i);
    5           long b = *(long*)(dst+i);
    6           *(long*)(dst+i) = ((a&pb_7f) + (b&pb_7f)) ^ ((a^b)&pb_80); }
    7       for(; i<w; i++)
    8           dst[i+0] += src[i+0]; }

Fig. 1: (A) A motivating example from CVE-2013-7010. The line marked "-" (red in the original figure) is the vulnerable code; the line marked "+" (green in the original figure) is the corresponding fixed code. (B) Bar chart (omitted) illustrating that the identifier-change setting leads to a performance drop for vulnerability detection: in the dataset with identifiers substituted, we map the identifiers to symbolic names (e.g., VAR1 and FUN1), and the F1 scores of Reveal [1] and CodeT5 [2] drop by 7.49% and 7.95%, respectively.

(1) They cannot well distinguish between the code snippets before (i.e., vulnerable code) and after (i.e., non-vulnerable code) developers' fixes. This misclassification occurs because of the minimal code changes between the vulnerable code before and after the fix process. Fig. 1 (A) depicts a motivating example from CVE-2013-7010, which is of the vulnerability type CWE-189 (Numeric Errors) [35]. The vulnerable statement in line 3 will continue iterating as long as "i" is less than or equal to "w - sizeof(long)". This may result in an out-of-bounds access because it does not consider the case where "w" is not a multiple of "sizeof(long)". The fixed statement (the vulnerability-fixed version) only adds one token, "int", to ensure that the loop will not iterate beyond the bounds. This adjustment embodies a subtle yet critical fix, which is hard to capture for vulnerability detection methods.

(2) The identifier information can potentially confound vulnerability detection algorithms, leading to erroneous predictions [38]. We also analyze the impact of identifiers on the performance of recent vulnerability detection models such as Reveal [1] and CodeT5 [2]. Specifically, we create a new dataset with identifiers substituted, in which the identifiers are mapped to symbolic names (e.g., VAR1 and FUN1). As shown in Fig. 1 (B), Reveal and CodeT5 show obvious degradation after the identifier substitution, dropping 7.49% and 7.95% in terms of the F1 score, respectively. The results also indicate that the existing methods tend to fail in capturing vulnerability-related information.

To address the above limitations, we propose a software vulneRability dEteCtion framework with zerO-sum game and prototype learNing, named RECON. RECON mainly contains two modules: (1) A zero-sum game construction module, which aims at capturing the semantic-agnostic features for improving the vulnerability detection performance. Specifically, in the zero-sum game, one player (i.e., the Calibrator) is trained to distinguish the vulnerable code from the corresponding fixed code, while the other player (i.e., the Detector) performs conventional vulnerability detection, i.e., it determines whether the code snippet before the fix contains a vulnerability. (2) A class-level prototype learning module for capturing the representative vulnerability patterns of each class, which aims to maintain a relative equilibrium in the learning of vulnerability patterns across the diverse players.
For example, CodeBERT [33] predicts 92.82% of prototype is shared between the detector and calibrator, the code snippets before and after fixes as the same labels, which captures vulnerability patterns simultaneously. In indicatingthatthemethodmayfailtowelllearnvulnerabil- addition, to ensure the stability of the training process, we itypatterns. also design a balance gap-based training strategy for the (2) Substituting user-defined identifiers leads to an zero-sumgame. obvious performance degradation. User-defined identifier We evaluate the effectiveness of RECON for software names provide rich semantic information of the source vulnerability detection in the popular benchmark dataset code[36].Nonetheless,inmostcases,identifiernames,such - Fan et al. [39]. We compare RECON with three com- as variable and function names, are irrelevant to the code monly used GNN-based methods and three pretrained- vulnerability patterns [37]. This vulnerability-irrelevant in- based methods. The results demonstrate that RECON out-3 performs all the baseline methods. Specifically, RECON staticanalysis[20],[21],dynamicprogramanalysis[22],[23], outperforms0.94%,5.05%,7.37%,and6.29%intermsofthe and symbolic execution [24], [25]. However, these methods |
accuracy,precision,recallandF1scoremetrics,respectively. struggletodetectdiversevulnerabilities[18]. It can also improve by 3.63% with respect to accuracy in Conversely, learning-based approaches exhibit greater distinguishing between the vulnerable and corresponding versatility in detecting various types of vulnerabilities. fixedcode,and5.75%intheidentifiersubstitutionsettingin These approaches typically employ source code or derive theF1score. program structures from the source code as their input, In summary, the major contributions of this paper are enabling them to implicitly learn the patterns of vulnera- summarizedasfollows: bilitieswithincodesnippets.Forexample,ICVH[26]lever- 1) We are the first to focus on the zero-sum game to ages bidirectional Recurrent Neural Networks (RNNs) [27] improvesoftwarevulnerabilitydetectionperformance. withinformation-theoreticforvulnerabilitydetection.Zhou 2) We propose RECON, a novel vulnerability detection et al. [28] propose Devign, which combines the Abstract framework, involves a zero-sum game construction Syntax Tree (AST) [29], Control Flow Graph (CFG) [30], module for capturing the semantic-agnostic features Data Flow Graph (DFG) [31] and Natural Code Sequence and a class-level prototype learning module for cap- (NCS) into a joint graph and uses the Gated Graph Neural turing representative vulnerability patterns. We also Networks (GGNNs) [32] to learn the graph representa- design a balance gap-based training strategy to ensure tions. CodeBERT [33] is a Transformer-based pre-trained thestabilityofthetrainingprocess. model and is applied to vulnerability detection via Fine- 3) We perform an evaluation of RECON in the popular tuning [34]. Although these learning-based methods have benchmark dataset, and the results demonstrate the made progress in software vulnerability detection, they effectiveness of RECON in software vulnerability de- excessivelyemphasizesemanticinformationandshowlim- tection. 
ited performance in learning the vulnerability patterns, as evidencedbyourexperiments. The remaining sections of this paper are organized as (1) They cannot well distinguish between the code follows. Section 3 introduces the background of the zero- snippets before (i.e., vulnerable code) and after (i.e., non- sum game and prototype learning. Section 4 presents the vulnerable code) developers’ fixes. This misclassification architecture of RECON, which includes two modules: a occurs because of the minimal code changes between the zero-sum game construction module and a class-level pro- vulnerable code before and after the fix process. Fig. 2 totypelearningmodule.Section5describestheexperimen- (A) depicts a motivating example from CVE-2013-7010, tal setup, including dataset, baselines, and experimental which is the vulnerability type of CWE-189 (Numeric Er- settings. Section 6 presents the experimental results and rors) [35]. The red-colored statement in line 3 means it analysis. Section 7 discusses why RECON can effectively will continue iterating as long as “i” is less than or equal detect vulnerability and the threats to validity. Section 9 to “w − sizeof(long)”. This may result in out-of-bounds concludesthepaper. access because it does not consider the case where “w” is notmultiplesof“sizeof(long)”.Actually,thegreen-colored 2 INTRODUCTION statement(vulnerability-fixedversion)onlyaddsonetoken Software vulnerabilities are security issues, which pose “int” to ensure that the loop will not iterate beyond the substantialsecuritythreatsandpotentiallycausesignificant bounds. This adjustment embodies a subtle yet critical fix, disruptions within software systems [3]. These vulnerabili- which is hard to be captured by vulnerability detection ties represent exploitable weaknesses that, if leveraged by methods. 
For example, CodeBERT [33] predicts 92.82% of attackers, can lead to breaches or violations of systems’ the code snippets before and after fixes as the same labels, security protocols, such as cross-site scripting (XSS) [4] or indicatingthatthemethodmayfailtowelllearnvulnerabil- SQLinjection[5],iscommonlyencounteredinwebapplica- itypatterns. tions.Thepresenceofthesevulnerabilitiescanleadtosevere (2) Substituting user-defined identifiers leads to an economicconsequences.Forinstance,in2023,theClopgang obvious performance degradation. User-defined identifier amassed more than $75 million through MOVEit extortion names provide rich semantic information of the source attacks [6]. Considering the severity of software vulnera- code[36].Nonetheless,inmostcases,identifiernames,such bilities, it is important to develop precise and automated as variable and function names, are irrelevant to the code techniquesforvulnerabilitydetection. vulnerability patterns [37]. This vulnerability-irrelevant in- In recent years, with the increase in the number of formation can potentially confound vulnerability detection software vulnerabilities [7], more than 20,000 vulnerabil- algorithms, leading to erroneous predictions [38]. We also ities have been published each year [8] in the National analyze the impact of identifiers on the performance of Vulnerability Database (NVD) [9]. As a result, more and recentvulnerabilitydetectionmodelssuchasReveal[1]and more researchers have focused on automated methods for CodeT5[2].Specifically,wecreateanewdatasetwithiden- softwarevulnerabilitydetection.Wecanbroadlycategorize tifiers substituted, in which the identifiers are mapped to |
the existing vulnerability detection approaches into two symbolicnames(e.g.,VAR1andFUN1).AsshowninFig.2 primary domains: Program Analysis (PA)-based [10], [11], (B),RevealandCodeT5showobviousdegradationafterthe [12], [13] and learning-based methodologies [14], [15], [16], identifier substitution, dropping 7.49% and 7.95% in terms [17],[18].PA-basedapproachespredominantlyfocusonpre- of the F1 score, respectively. The results also indicate that definedpatternsandoftenrelyonhumanexpertstoconduct theexistingmethodstendtofailincapturingvulnerability- code analysis [19]. These approaches primarily encompass relatedinformation.4 50 1 static void add_bytes_c(uint8_t *dst, uint8_t *src, int w){ Reveal 2 long i; CodeT5 3 - for(i=0; i<=w-sizeof(long); i+=sizeof(long)){ 40 + for (i=0; i<=w-(int)sizeof(long); i+=sizeof(long)) { 30 4 long a = *(long*)(src+i); 5 long b = *(long*)(dst+i); 20 6 *(long*)(dst+i) = ((a&pb_7f) + (b&pb_7f)) ^ ((a^b)&pb_80); } 10 7 for(; i<w; i++) 8 dst[i+0] += src[i+0]; 0 } Original Identifier change (A)A motivating example from CVE-2013-7010. )%(1F ↓7.95% ↓7.49% (B)Results illustrating that Indentifier change leads to performance drop for vulnerability detection. Fig.2:(A)AmotivatingexamplefromCVE-2013-7010.Thered-coloredcodesarevulnerablecode.Thegreen-coloredcodes arethecorrespondingfixedcode.(B)TherelationshipofReveal[1]andCodeT5[2]betweentheoriginalandtheidentifier changesetting.Inthedatasetwithidentifierssubstituted,wemaptheidentifierstosymbolicnames(e.g.,VAR1andFUN1). Toaddresstheabovelimitations,weproposeasoftware framework, involves a zero-sum game construction vulneRability dEteCtion framework with zerO-sum game module for capturing the semantic-agnostic features and prototype learNing, named RECON. RECON mainly and a class-level prototype learning module for cap- contains two modules: (1) A zero-sum game construction turing representative vulnerability patterns. 
We also module, which aims at capturing the semantic-agnostic design a balance gap-based training strategy to ensure features for improving the vulnerability detection perfor- thestabilityofthetrainingprocess. mance. Specifically, in the zero-sum game, one player (i.e. 3) We perform an evaluation of RECON in the popular Calibrator) is trained to distinguish the vulnerable code benchmark dataset, and the results demonstrate the from the corresponding fixed code, while another player effectiveness of RECON in software vulnerability de- (i.e. Detector) is defined for the conventional vulnerability tection. detection. The conventional vulnerability detection means The remaining sections of this paper are organized as that distinguishes whether the code snippet before the fix follows. Section 3 introduces the background of the zero- containsavulnerability.(2)Aclass-levelprototypelearning sum game and prototype learning. Section 4 presents the module for capturing representative vulnerability patterns architecture of RECON, which includes two modules: a ineachclass,whichaimstomaintainarelativeequilibrium zero-sum game construction module and a class-level pro- inthelearningvulnerabilitypatternsofdiverseplayers.The totypelearningmodule.Section5describestheexperimen- prototype is shared between the detector and calibrator, tal setup, including dataset, baselines, and experimental which captures vulnerability patterns simultaneously. In settings. Section 6 presents the experimental results and addition, to ensure the stability of the training process, we analysis. Section 7 discusses why RECON can effectively also design a balance gap-based training strategy for the detect vulnerability and the threats to validity. Section 9 zero-sumgame. concludesthepaper. We evaluate the effectiveness of RECON for software vulnerability detection in the popular benchmark dataset 3 BACKGROUND - Fan et al. [39]. 
3.1 Zero-Sum Game

Game theory [40], [41], [42], [43], [44] is the analysis of cooperative and non-cooperative actions within the context of multiple participants, and it primarily focuses on the decision-making process [45]. In this paper, we restrict our discourse to the simplest case of game theory involving merely two agents, commonly referred to as a zero-sum game [46]. In particular, the non-cooperative setting is more popular due to its practicality, particularly when considering the presence of competitive dynamics among the participants. The game between participants can be regarded as a decision-making process, and it can further be regarded as an optimization problem [47]. Specifically, Lemma 1 introduces the common definition of the zero-sum game:

Lemma 1 (Zero-sum game [48]). Consider that two players are involved in a zero-sum game. The optimal solution x of player A and y of player B can be formulated as:

    Max_x {Min{F(x_i)}},    (1)
    Min_y {Max{G(y_j)}},    (2)

where x = [x_1, x_2, ..., x_n] and y = [y_1, y_2, ..., y_n] denote the action sets for the two players, respectively, and each action is greater than or equal to zero. F(x_i) and G(y_j) denote the payoff functions of the actions x_i and y_j, respectively.

Following the definition, we design the different players and corresponding payoff methods for the vulnerability detection task. In fact, finding the optimal payoff for both players A and B is a difficult task. As shown in Fig. 3 (A), the improvement of one player's payoff function will lead to a corresponding decrease in the other player's payoff function. It is difficult to maintain a relative equilibrium between different players in the zero-sum game. In this paper, we construct players A (focusing on the code before and after the fix) and B (focusing on conventional vulnerability detection) and construct different payoff methods F and G. Since there is a common goal between the two, i.e., to capture vulnerability-related patterns, we hope to improve vulnerability detection performance through the construction of a zero-sum game.

3.2 Prototype Learning

Prototype learning has gained much attention within pattern recognition in recent years [49], [50], [51], [52], [53]. One of its most classical and representative methods is the k-nearest-neighbor (KNN) algorithm [54]. It retrieves the k nearest neighbors of a sample, effectively forming a local neighborhood. Subsequently, the neighborhood information is leveraged to make a classification decision regarding the sample.

The key aspect of prototype learning revolves around the strategies for updating the prototypes. As shown in Fig. 3 (B), we have a representative prototype (i.e., the orange-shaded square) for each class. We need to minimize the distance between samples (i.e., blue-shaded squares) and prototypes within the same class while maximizing the distance between samples (i.e., red-shaded squares) and prototypes across different classes simultaneously. In this paper, we propose a class-level prototype learning module for capturing representative vulnerability patterns in each class.

[Fig. 3: (A) An illustration of the zero-sum game. (B) An example of prototype learning.]

4 PROPOSED FRAMEWORK

In this section, we introduce the overall architecture of RECON. As shown in Fig. 4, RECON mainly consists of two modules: a zero-sum game construction module and a class-level prototype learning module. We also design a balance gap-based training strategy for facilitating zero-sum game-based vulnerability detection. We first illustrate our problem formulation and then elaborate on the details of each step in RECON.

[Fig. 4: The architecture of RECON. RECON mainly consists of two modules: a zero-sum game construction module and a class-level prototype learning module. The code file and the corresponding fixed code are the input of RECON, including the unchanged code set X, vulnerable code set Y, and vulnerable-fixed code set Z.]

4.1 Problem Formulation

Similar to the previous studies [1], [28], the goal of a zero-sum game for software vulnerability detection is to train a binary classifier to determine whether a code snippet contains a vulnerability. As shown in Fig. 4, a sample of data in this paper is formulated as follows:

    (x_i, y_j, z_k) | x_i ∈ X, y_j ∈ Y, z_k ∈ Z,    (3)

where X, Y, Z denote the unchanged code set, vulnerable code set, and vulnerable-fixed code set, respectively; i ∈ {1, 2, ..., n_x}, j ∈ {1, 2, ..., n_y}, and k ∈ {1, 2, ..., n_z} index the samples in the different sets, where n_x, n_y, n_z denote the numbers of samples in the sets. Each sample t is a pair of source code and the corresponding label (i.e., vulnerable or not), denoted as <s(t), l(t)>. Following the previous methods [1], we annotate the vulnerable code y_j ∈ Y as vulnerable and the vulnerable-fixed code z_k ∈ Z as non-vulnerable. We annotate all the unchanged code x_i ∈ X as non-vulnerable samples.

Based on the input data, RECON aims at learning a mapping f from source code s(·) to its label l(·), i.e., f : s_i ↦ l_i, to predict whether a given code snippet is vulnerable or not. The model is trained with a loss function computed as below:

    min Σ_{i=1}^{n} L(f(s_i, l_i | {s_i})).    (4)
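As a concrete illustration of this data formulation, the following Python sketch builds the labeled training samples from the three code sets; the function and variable names here are our own illustrative choices, not identifiers from the RECON implementation.

```python
# Illustrative sketch of the data formulation in Eq. (3):
# X = unchanged code, Y = vulnerable code, Z = vulnerable-fixed code.
# Labels follow the paper's annotation: y_j -> vulnerable (1),
# while x_i and z_k -> non-vulnerable (0).

def build_labeled_samples(unchanged, vulnerable, fixed):
    """Return <source_code, label> pairs for all three code sets."""
    samples = []
    samples += [(x, 0) for x in unchanged]   # x_i in X: non-vulnerable
    samples += [(y, 1) for y in vulnerable]  # y_j in Y: vulnerable
    samples += [(z, 0) for z in fixed]       # z_k in Z: non-vulnerable
    return samples

# Toy example mirroring the CVE-2013-7010 pair from Fig. 1.
X = ["long i;"]
Y = ["for(i=0; i<=w-sizeof(long); i+=sizeof(long))"]
Z = ["for(i=0; i<=w-(int)sizeof(long); i+=sizeof(long))"]
samples = build_labeled_samples(X, Y, Z)
print([label for _, label in samples])  # -> [0, 1, 0]
```

A classifier f : s_i ↦ l_i is then trained over such pairs with the loss in Eq. (4).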
[Fig. 5 (A) — example from CVE-2015-8539:]
    1   int user_update(struct key *key, struct key_preparsed_payload *prep){
        ...
    10      upayload = kmalloc(sizeof(*upayload) + datalen, GFP_KERNEL);
    11      if (!upayload)
    12          goto error;
        ...
    16      ret = key_payload_reserve(key, datalen);
    17      if (ret == 0) {
    18  -       zap = key->payload.data[0];
    19  +       if (!test_bit(KEY_FLAG_NEGATIVE, &key->flags))
    20  +           zap = key->payload.data[0];
    21  +       else
    22  +           zap = NULL;
    23          rcu_assign_keypointer(key, upayload);
    24          key->expiry = 0;}
    25      if (zap)
    26          kfree_rcu(zap, rcu);
    27  error:
    28      return ret;}
Fig. 5: (A) An example from CVE-2015-8539. The red-colored code is vulnerable code; the green-colored code is the corresponding fixed code. (B) The execution paths derived from the Control Flow Graph (CFG) of the vulnerable code. (C) The execution paths derived from the CFG of the corresponding fixed code.

4.2 Zero-sum Game Construction

The zero-sum game construction module aims at leveraging the vulnerable and corresponding fixed code snippets, in which the minimal code changes can provide hints about semantic-agnostic features for vulnerability detection. As mentioned in Section 3.1, we design two different players: Vulnerability Detector D and Vulnerability Calibrator C. The module's goal is then transformed into capturing the semantic-agnostic features from the Calibrator to enhance the Detector's performance.

(1) Vulnerability Detector D, which uses the unchanged code set X and the vulnerable code set Y as the training set. Instead of directly using the source code as input, we generate the execution paths derived from the Control Flow Graph (CFG) [30] for each code snippet. We use tree-sitter [55] to parse the code snippet into a CFG. As shown in Fig. 5 (A) and (B), we mark the line number of each statement and encode three distinctive execution paths, shaded in orange, blue, and purple. Each node represents a line, and the edges between nodes represent the execution relationships in the source code.

(2) Vulnerability Calibrator C, which uses the vulnerable code set Y and the vulnerable-fixed code set Z as the training set. In contrast to the Detector D, the primary focus of the Calibrator C lies in discerning the alterations made to the vulnerable code Y, which correspond to the vulnerability fixes. This strategy is devised to meticulously discern the subtle disparities between code specimens before and after fixes, specifically in the choice of path differences. As shown in Fig. 5 (C), the green circles (including Lines 19, 21, and 22) connected by the purple path correspond to the changes in the fixed code. We can observe that the differences between the execution paths of the vulnerable and fixed code reflect the vulnerability pattern, i.e., adding "if" statements to restrict the addition of keys that already exist but are negatively instantiated in Linux [56]. By leveraging the execution paths, RECON equips the Calibrator C with an effective means of detecting vulnerabilities and distinguishing between the code snippets before and after developers' fixes.

Then, we use CodeBERT [33] to initialize the representation of each code sample (I_i, l_i). Specifically, we obtain the CLS token [57] as the sequence vector I_{i,j} for each execution path in the Detector and Calibrator.
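The execution-path extraction described above can be illustrated with a small sketch: given a CFG as an adjacency list over line numbers, each root-to-exit traversal is one execution path. This is a simplified illustration only; the actual pipeline derives the CFG with tree-sitter, and the toy graph below merely mimics the branching structure of the Fig. 5 example.

```python
# Simplified sketch of enumerating execution paths from a CFG.
# Nodes are source line numbers; edges follow the control flow.
# This is our own toy illustration, not the RECON implementation.

def enumerate_paths(cfg, node, path=None):
    """Depth-first enumeration of all paths from `node` to exit nodes."""
    path = (path or []) + [node]
    successors = cfg.get(node, [])
    if not successors:            # exit node: one complete path found
        return [path]
    paths = []
    for nxt in successors:
        paths.extend(enumerate_paths(cfg, nxt, path))
    return paths

# Toy CFG loosely modeled on the vulnerable code of Fig. 5:
# line 11 branches to the error exit (12), line 17 branches around 18.
cfg = {1: [10], 10: [11], 11: [12, 16], 12: [], 16: [17],
       17: [18, 23], 18: [23], 23: [24], 24: []}
paths = enumerate_paths(cfg, 1)
print(len(paths))  # -> 3 distinct execution paths
```

Each enumerated path is then encoded into a sequence vector (here, via the CodeBERT CLS embedding) before being aggregated into the sample's feature.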
We then use improvingthegeneralization: tT he ext lC ocN aN lin[5 fo8 r] mto atl ie oa nr :n the sample’s feature s i by involving L reg =λ||s i−m nearest||2 2, (9) wherem nearestindicatesthenearestprototypevectortothe s =MaxPool(cid:16)(cid:16) ConcatN (I )(cid:17) ∗Wk(cid:17) ||(cid:16) ConcatN (I )(cid:17)fe ,atures iintheclassl,andλisahyper-parametertocontrol i j=0 i,j j=0 i,j theweightofL reg. (5) where Concat(·) denotes the combination of vectors of different paths and ∗ denotes the convolution operator. 4.4 BalanceGap-basedTrainingStrategy W ∈ RCin×Cout×k denotes the convolution kernel, where C in, C out and k are the input channel, output channel and We propose a Balance Gap (BG)-based training strategy filteroftheconvolution,respectively.Thesymbol||denotes to ensure the stability of the training process. Specifically, RECON can be regarded as a dynamic minimization-vs- the fusion operator between the convolution results and the sequence vector. N is the number of execution paths minimization game process between Detector D and Cali- extractedfromcodesample(I i,l i). bratorC .Thisprocessisformallydefinedas: (cid:26) (cid:27) Min Min{R(θ D,θ C} (10) 4.3 Class-levelPrototypeLearningModule θD θC The class-level prototype learning module aims to cap- whereDetectorD andCalibratorC areparametrizedbyθ D turerepresentativevulnerabilitypatterns.Duringthemodel andθ C,respectively.Theoptimizationofthezero-sumgame |
trainingprocess,anoteworthydisparityarisesincapturing R is optimized via gradient descent during the training vulnerability features between the Detector D and Calibra- epoch.TheBGinthezero-sumgameisdefinedasbelow: tor C, as introduced in Section 3.1. Therefore, we need to Definition 1 (Balance Gap).Consideringthetrainingtarget maintainarelativeequilibriuminlearningthevulnerability R(·,·) for the software vulnerability detection, the pre- patternsbetweenDetectorD andCalibratorC,forvulnera- viousmodelparameters(θ D,θ C)areupdatedto(θˆ D,θˆ C) bilitydetection. viathegradientdescentaftereachepochtraining.Then Specifically, we propose a class-level prototype learning thebalancegapcanbeformulatedas: loss,inwhichthelossfunctionLisdesignedasbelow: (cid:16) (cid:17) BG=L R(θˆ D,θˆ C) −L(R(θ D,θ C)), (11) L=L CE +L Proto+L reg, (6) whereL CEisthetypicalcross-entropylossforclassification, whereLdenotesthelossfunctionofRECON. L Proto is mainly used to mine the relation between the TheBGfundamentallyestablishestherelationsbetween sample’s feature s i and prototypes, and L reg is used to the Detector D and Calibrator C in the training process, improve the generalization performance. We leverage the whichconsistsoftwoupdatingsteps:(1)UpdatingDetector Multi-Layer Perceptron (MLP) [59] classifier and introduce D. In the first step, we freeze the Calibrator’s parameters thetraditionalL CE: θ C and optimize the Detector D problem for minimizing the R(θ D,θ C). (2) Updating Calibrator C. Then, in the second step, we freeze the optimal Detector’s parameters L CE =−log(p CE(s i)), p CE(s i)=Softmax(MLP (tanh(sθ iˆ) D))a.nd optimize the Calibrator C problem for minimizing (7) theR(θˆ D,θ C). The prototypes are trainable vector representations, de- To avoid the large deviation in the training process, we noted as m l and l ∈ C = {0,1} representing the cor- alsousethefollowingboundarystrategy: responding label. 
The prototypes have the similar vector (cid:16) (cid:17) sizes as the feature s i. The L Proto aims at decreasing the ∥BG∥≤L∥L R(θˆ D,θˆ C) −L(R(θ D,θ C))∥ (12) distance between the features of all vulnerable samples and prototypes, so that the prototypes can be regarded as whereListheLipschitzconstant[60].Followingtheprevi- “vulnerability patterns”. Specifically, we use the distance ous proof [61], we can find that the boundary strategy can ||s i − m l||2 2 to measure the similarities between sample onlybesatisfiedif∥BG∥=0.Constrainedbytheinstability featuresandprototypes.TheprototypelossfunctionL Proto of the training process, we propose to train RECON with canbemeasuredas: max-patience[62].8 5 EXPERIMENTAL SETUP 5.4 ImplementationDetails 5.1 ReasearchQuestions Inallresearchquestions,toensurethefairnessoftheexperi- We evaluate the proposed RECON and aim to answer the ments,weusethesamedatasplittingforalltheapproaches. followingresearchquestions(RQs): We try our best to reproduce all baseline methods from RQ1: How does RECON perform compared with the publicly available source code, and use the same hyper- state-of-the-artvulnerabilitydetectionapproaches? parameters as their original use. In this paper, we experi- RQ2: How effective is RECON in the identifier- ment under four settings in Fan et al. [39] as follows, with substitutionsetting? thedetaileddatastatisticsshowninTable1: RQ3: How does RECON perform in distinguishing the (1) Original Setting (in RQ1): Following the previous vulnerableandthecorrespondingfixedcode? work [1], [28], we randomly split the datasets into disjoint RQ4: How effective is RECON on the training data split training,validation,andtestsetsinaratioof8:1:1. bytime? (2) Identifier-substitution Setting(inRQ2):InRQ2,we RQ5: What is the influence of different modules on the map the user-defined variables and functions to symbolic detectionperformanceofRECON? 
namesinaone-to-onemanner(e.g.,VAR1andFUN1)tocon- RQ6: Howdothedifferenthyper-parametersimpactthe struct the user-defined identifier-substitution setting. This performanceofRECON? settingutilizesthesamedatasplitasdescribedinRQ1. (3) Vulnerability-fix Pair Setting (in RQ3): In RQ3, we 5.2 Datasets introduce the pair set comprising pairs of the vulnerable Toaddresstheabovequestions,weusethepopularbench- code and corresponding fixed code (denoted as “Pair set”). markdatasetproposedbyFanetal.[39]forsoftwarevulner- Subsequently, we partition the pair set in an 8:1:1 ratio, abilitydetection.Thisdatasetencompasses91distincttypes with a strict constraint that the vulnerable code and the of vulnerabilities extracted from 348 open-source GitHub correspondingfixedcodemustcoexistinthesamepartition, projects. It comprises an extensive repository of approxi- thuspreventinganyinadvertentdataleakage.Furthermore, mately 188,000 data samples, including 10,000 vulnerable we consolidate all code snippets into a combined test set samples. (denoted as “Combine set”), including samples from both WeusetheFanetal.datasetbecauseitistheonlyavail- the original and pair set, for comprehensive evaluation able vulnerability dataset that provides the corresponding purposes. |
fixed version of the vulnerability code. In this paper, limited by the text length accepted by CodeBERT [33], we removed all samples with lengths greater than 512 tokens to ensure the fairness of the experimental results, following the prior study [2].
(4) Time-split Setting (in RQ4): In RQ4, we also utilize a temporal splitting approach for software vulnerability detection. Specifically, we leverage the "update date" as provided in the Fan et al. dataset. In accordance with this criterion, we divide the data into training, validation, and test sets, maintaining an 8:1:1 ratio while strictly preserving the temporal order of the data points. There is no temporal overlap between these sets, ensuring the integrity of our evaluation process.

5.3 Baseline Methods
In our evaluation, we compare RECON with three GNN-based methods and three pre-trained model-based methods.
1) Devign [28]: Devign generates a joint graph by AST, CFG, DFG and Natural Code Sequence (NCS). It then uses a GGNN and a convolution layer for vulnerability detection.
2) Reveal [1]: Reveal constructs the code property graph as input and uses a two-step vulnerability detection framework, including a feature extraction process by GGNN and a representation learning process by the MLP and triplet loss.
3) DeepWukong [63]: DeepWukong uses DOC2VEC to transform the tokens from source code into node vector representations, and leverages a graph convolutional network for vulnerability detection.
4) CodeBERT [33]: CodeBERT is an encoder-only pre-trained model, which is based on RoBERTa. It is applied to vulnerability detection by fine-tuning.
5) CodeT5 [2]: CodeT5 is a Transformer-based model, which regards the task as the sequence-to-sequence paradigm. It is also applied to vulnerability detection by fine-tuning.
6) EPVD [64]: EPVD proposes an algorithm for execution path selection. It adopts a pre-trained model and a convolutional neural network to learn the path representations.
We fine-tune the pre-trained model CodeBERT [33] with a learning rate of 2e-5. We train our method on a server with 2x NVIDIA A100-SXM4-40GB GPUs, and the batch size is set to 16. RECON is trained for a maximum of 24 epochs with 5 epochs for max-patience.

5.5 Performance Metrics
We employ four widely-used metrics to evaluate the performance of RECON:
Precision: It is defined as Precision = TP / (TP + FP). Precision quantifies the proportion of true vulnerabilities correctly identified among all the retrieved vulnerabilities. TP represents the number of true positives, while FP represents the number of false positives.
Recall: It is defined as Recall = TP / (TP + FN). This metric assesses the percentage of actual vulnerable instances that have been successfully identified out of all the vulnerable instances. TP denotes the number of true positives, and FN denotes the number of false negatives.
F1 Score: It is defined as F1 score = 2 x Precision x Recall / (Precision + Recall). The F1 score serves as the harmonic mean of the precision and recall metrics, providing a balanced evaluation of our tool's performance in terms of both precision and recall.
Accuracy: It is defined as Accuracy = (TP + TN) / (TP + TN + FN + FP). Accuracy measures the percentage of correctly classified instances out of all instances TP + TN + FN + FP. TN denotes the number of true negatives.

TABLE 1: Statistics of the datasets. The "Ratio" indicates the percentage of vulnerable samples relative to non-vulnerable samples.
Dataset settings: #Vul. / #Non-Vul. / Ratio / #Train:#Valid:#Test
Original / Identifier-substitution / Time-split: 7,238 / 158,074 / 1:21.84 / 132,229:16,526:16,557
Vulnerability-fix Pair (Pair set): 6,998 / 6,998 / 1:1 / 11,166:1,456:1,374
Vulnerability-fix Pair (Combine set): 7,238 / 165,072 / 1:22.81 / 143,395:17,982:17,931

6.2 RQ2. Effectiveness of RECON in the Identifier-Substitution Setting
To answer RQ2, we also compare RECON with the six
vulnerability detection baseline methods in the identifier-substitution setting, in which user-defined variables and functions are mapped to symbolic names. The results are shown in Table 2. We achieve the following findings.
The performance decline is most serious among vulnerability detectors reliant solely on source code as input. Specifically, CodeBERT and CodeT5 exclusively employ source code as their model input, and as illustrated in the right column of Table 2, their performance demonstrates a decline across seven out of eight evaluation cases. The F1 scores of CodeBERT and CodeT5 experience substantial reductions of 13.03% and 7.95%, respectively. DeepWukong also exhibits a substantial decrease in its F1 score, albeit with a simultaneous 7.81% increase in accuracy. This suggests that following the substitution of user-defined identifier names, DeepWukong tends to classify samples as non-vulnerable, which is not conducive to effective vulnerability pattern detection.
The simultaneous use of both structural paths and source code as inputs effectively mitigates the impact of identifier variations. Both RECON and EPVD utilize a combination of structural paths and source code as inputs. Following user-defined identifier changes, these approaches exhibit minimal performance variations, with only marginal decreases of 0.89% and 0.35% in F1 scores, respectively.
Overall, it is evident that RECON emerges as the best-performing method, showcasing enhancements of 0.87%, 2.73%, 8.17%, and 5.75% in the four metrics respectively when compared to the best baseline methods. It emphasizes RECON's proficiency in implicitly capturing vulnerability-relevant patterns, even within an identifier-substitution setting. Furthermore, it demonstrates the potential utility of incorporating game theory for software vulnerability detection, even in cases where the code substitutes user-defined identifiers.

6 EXPERIMENTAL RESULTS

6.1 RQ1. Effectiveness of RECON in the Original Setting
To answer RQ1, we conduct a comprehensive comparative analysis against the six vulnerability detection baseline methods across four performance metrics in the original setting, with results shown in Table 2. We can achieve the following observations.
The proposed RECON consistently exhibits superior performance contrasted with the baseline methods. As delineated in Table 2, RECON surpasses all the baseline methods across all metrics, achieving the highest performance in each of the four metrics. Specifically, RECON demonstrates improvements over the best baseline EPVD of 0.94%, 5.05%, 7.37%, and 6.29% across the four metrics, respectively. The limited improvement in the accuracy metric could be attributed to the inherent class imbalance within the dataset. Compared with the average of previous vulnerability detection methods, RECON achieves an average absolute improvement of 45.71%, 50.18%, and 51.22% in the precision, recall, and F1 score metrics, respectively.
Pre-trained models perform better than GNN-based models. We also discover that pre-trained models, such as CodeBERT, CodeT5, EPVD, and RECON, consistently outperform GNN-based models across various performance indicators. Specifically, these pre-trained models obtain 11 out of 12 Top-3 results, compared to only one for the GNN-based models. When considering the average results, the four pre-training-based methods exhibit an average improvement of 12.24%, 44.31%, 27.82%, and 37.97% across the four metrics when compared to the three GNN-based methods, respectively.
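The four metrics reported in these comparisons follow the definitions given in Section 5.5. As a quick reference, they can be computed from the confusion counts as follows; this is a minimal sketch independent of any particular model, and the example counts are illustrative:

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute the four metrics of Section 5.5 from confusion counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Illustrative counts: 80 true positives, 20 false positives,
# 40 false negatives, 860 true negatives.
m = classification_metrics(tp=80, fp=20, fn=40, tn=860)
```

The guards against empty denominators matter in practice: on a heavily imbalanced split, a degenerate detector can produce zero retrieved positives, which would otherwise divide by zero.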
This observation highlights the advantages of the existing pre-trained models, which enhance the ability for capturing vulnerability patterns. Furthermore, the combination of structural information and semantic information (harnessed from pre-trained models), as exemplified by EPVD and RECON, yielded the highest performance among all the vulnerability detection methods.

Answer to RQ1: RECON outperforms all the baseline methods in all metrics, achieving 0.94%, 5.05%, 7.37%, and 6.29% improvements over EPVD on the Fan et al. dataset, respectively. Besides, the pre-trained models perform better than GNN-based models.

Answer to RQ2: The simultaneous use of both structural paths and source code as inputs effectively mitigates the impact of identifier variations. RECON consistently outperforms all baseline methods across all metrics, demonstrating enhancements over EPVD of 0.87%, 2.73%, 8.17%, and 5.75% in the identifier-substitution setting, respectively.

6.3 RQ3. Effectiveness of RECON in the Vulnerability-Fix Pair Setting
To answer RQ3, we compare RECON with the previous methods under the vulnerability-fix pair setting. This setting is

TABLE 2: Comparison results between RECON and existing vulnerability detection methods in the original setting (RQ1) and identifier-substitution setting (RQ2). The best result for each metric is highlighted in bold. The cells in grey represent the performance of the top-3 best methods in each metric, with darker colors representing better performance.
Setting: RQ1 (Original) | RQ2 (Identifier-substitution)
Method: Accuracy Precision Recall F1 score | Accuracy Precision Recall F1 score
Devign 92.64 27.68 17.10 21.14 | 94.36 24.46 17.04 20.09
Reveal 76.85 10.60 40.03 16.68 | 76.16 5.53 29.15 9.19
DeepWuKong 86.36 40.93 24.58 30.72 | 94.17 38.26 10.27 16.19
CodeBERT 95.75 50.43 24.89 33.33 | 96.61 45.85 13.04 20.30
CodeT5 97.21 55.67 30.42 39.34 | 96.19 43.51 24.55 31.39
EPVD 98.10 85.85 78.77 82.16 | 98.10 88.17 76.30 81.81
RECON 99.04 90.90 86.14 88.45 | 98.97 90.90 84.47 87.56

TABLE 3: Comparison results between RECON and existing vulnerability detection methods in the vulnerability-fix pair setting (RQ3). The pair set concentrates on the vulnerable code and the corresponding fixed code, and the combine set encompasses all source code snippets.
Setting: RQ3 (Pair) | RQ3 (Combine)
Method: Accuracy Precision Recall F1 score | Accuracy Precision Recall F1 score
Devign 48.86 57.76 17.73 27.13 | 93.59 19.11 18.62 18.86
Reveal 47.23 54.85 12.36 19.91 | 84.45 5.61 18.42 8.55
DeepWuKong 48.86 56.32 8.11 14.18 | 95.17 26.45 9.34 13.80
CodeBERT 51.37 52.51 28.72 37.13 | 93.55 36.24 28.72 32.05
CodeT5 52.27 53.48 34.98 42.29 | 92.86 33.36 34.98 34.15
EPVD 50.81 50.52 78.77 61.56 | 94.10 46.63 78.77 58.58
RECON 55.90 53.67 86.17 66.15 | 96.14 51.74 86.42 64.72

divided into two main sub-settings: one that concentrates on the vulnerable code and the corresponding fixed code (i.e. Pair
samples before and after fixes as the same label, potentially leading to detrimental consequences in real-world
set), and another that encompasses all source code snippets (i.e. Combine set). It presents a substantial challenge to distinguish whether a vulnerability has been effectively fixed or not in this setting. Typically, only a minor part of the code undergoes fixing, and the variable names within the code remain nearly identical between the vulnerable code and the corresponding fixed code.
As depicted in Table 3, our evaluation compares the performance of RECON with the six baseline vulnerability detection methods within the vulnerability-fix pair setting. The results demonstrate RECON's best performance in 7 out of 8 cases. For example, in the combined test set, RECON exhibits superiority by achieving improvements of 0.97% in accuracy, 5.11% in precision, 7.65% in recall, and 6.14% in F1 score compared to the best baseline methods. Notably, in the pair test set, GNN-based methods excel in precision, although this advantage is coupled with sacrifices in the accuracy, recall, and F1 score metrics.
scenarios. Once a developer commits the code snippet to rectify the identified vulnerability, the vulnerability detector is expected to accurately identify the patched code segment. In fact, although RECON detects more than 52 pairs of the vulnerable code and corresponding fixed code when compared with the best baseline methods, there remains substantial potential for enhancement.

Answer to RQ3: In the vulnerability-fix pair setting, RECON performs better than the best baseline in most cases. RECON outperforms EPVD by 0.97%, 5.11%, 7.65%, and 6.14% in the combine set, respectively.

6.4 RQ4. Effectiveness of RECON in the Time-Split Setting
In RQ4, we explore a partitioning setting for the Fan et al. dataset based on time, which is motivated by Jimenez et al. [65]. It dictates that at a specific point in time, denoted as t1, only vulnerabilities known up to and including time
Our experiment further underscores the potential of existing vulnerability detection approaches in distinguishing vulnerabilities and the corresponding fixed code.
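The time-split protocol for RQ4 can be sketched as follows. The `update_date` field name and the 8:1:1 proportions come from the description in this paper, while the sample records and the index-based choice of the t1/t2 cut points are illustrative assumptions:

```python
def time_split(samples, date_key="update_date"):
    """Split samples 8:1:1 in temporal order: everything up to t1
    trains, (t1, t2] validates, and data after t2 is held out for
    testing, so no future vulnerability leaks into training."""
    ordered = sorted(samples, key=lambda s: s[date_key])
    n = len(ordered)
    t1, t2 = int(n * 0.8), int(n * 0.9)  # index cut points stand in for t1/t2
    return ordered[:t1], ordered[t1:t2], ordered[t2:]

# Illustrative records with ISO dates (lexicographic order == temporal order).
data = [{"id": i, "update_date": f"2019-{(i % 12) + 1:02d}-01"} for i in range(100)]
train, valid, test = time_split(data)
# Every training sample predates (or ties) every test sample.
assert max(s["update_date"] for s in train) <= min(s["update_date"] for s in test)
```

Sorting before slicing is what guarantees the no-overlap property the paper requires; a random 8:1:1 split of the same records would not.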
This t 1 should be available for training. Consequently, vulnera- implies that current approaches frequently classify code bilities discovered or made public after time t 1 should not11 (a)Comparisonresultsinthetime-splitsetting. isstillsomeperformancedegradation. Setting RQ4(TimeSplit) Metrics(%) Answer to RQ4: In the time-split setting, RECON Method Accuracy Precision Recall F1score outperforms 0.85%, 2.41%, 5.86%, and 4.24% than best baseline in the four metrics, respectively. The Devign 92.44 26.54 18.53 21.82 methods incorporating graph or path structures Reveal 79.16 10.69 35.71 16.35 consistently outperform those relying solely on se- DeepWuKong 84.69 36.05 26.03 30.23 quencemethodsinthisreal-worldsetting. CodeBERT 93.18 7.35 2.55 3.79 CodeT5 94.45 4.80 2.14 2.96 EPVD 96.63 74.42 68.83 71.51 RECON 97.48 76.83 74.69 75.75 6.5 RQ5.EffectivenessofDifferentModulesinRECON (b)TheablationstudyofRECON. Settings Module Accuracy Precision Recall F1score To answer RQ5, we explore the effectiveness of different 98.41 86.49 74.26 79.91 w/oGame modules on the performance of RECON. Specifically, we (↓0.63%) (↓4.41%) (↓11.88%) (↓8.54%) study the two involved modules including the Zero-sum RQ1 98.82 89.11 84.22 86.60 w/oPrototype (↓0.22%) (↓1.79%) (↓1.92%) (↓1.85%) GameConstruction(i.e.Game)moduleandClass-levelPro- RECON 99.04 90.90 86.14 88.45 totypeLearning(i.e.Prototype)module. 97.96 74.55 79.61 77.00 w/oGame (↓1.01%) (↓16.35%) (↓4.86%) (↓10.56%) RQ2 98.73 85.99 84.69 85.34 6.5.1 Zero-sumGameConstructionModule w/oPrototype (↓0.24%) (↓4.91%) (↑-0.22%) (↓2.22%) RECON 98.97 90.90 84.47 87.56 To understand the impact of this module, we deploy a variantofRECONwithoutthezero-sumgameconstruction TABLE 4: (a) Comparison results between RECON and module (i.e. Game). It only directly uses the source code vulnerability detection methods in the time-split setting asatrainingprocesswithoutconstructingtheDetectorand (RQ4). The best result for each metric is highlighted in Calibrator. 
bold.(b)Theimpactofthezero-sumgameconstruction(i.e. Table 4 (b) shows the performance of the variant on the Game) module and the class-level prototype learning (i.e. two settings in the Fan et al. dataset. The addition of the Prototype)moduleontheperformanceofRECON(RQ5). Game module yields enhancements across all four metrics in the original and identifier-substitution settings, with an average improvement by 0.82%, 10.38%, 8.37%, and 9.55%, beaccessiblefortrainingpriortotheirappearance.Further- respectively. Specifically, the Game module improves the more, we introduced a distinct time point t 2 to signify that F1 score by 8.54% and 10.56%, respectively. It is worth data between t 1 and t 2 should be treated as the validation mentioning that the Game module achieves a 16.35% im- set, while data originating after t 2 is designated as the test provement in precision, which indicates that it can better set. This rigorous approach effectively mitigates the risk of recognizesimilarcodesintheidentifier-substitutionsetting. |
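RECON's actual Detector and Calibrator are neural models whose loss functions are not reproduced here. Purely as an illustration of the zero-sum dynamic discussed above, the sketch below runs gradient descent-ascent on a toy two-player objective f(d, c) = d^2 - c^2, where the first player minimizes f and the second maximizes it, so both are driven toward the game's equilibrium at zero:

```python
def descent_ascent(d, c, lr=0.1, steps=200):
    """Toy zero-sum game: player 1 minimizes f(d, c) = d**2 - c**2
    while player 2 maximizes it. Alternating gradient steps converge
    to the saddle point (0, 0), the equilibrium of the game."""
    for _ in range(steps):
        d -= lr * (2 * d)   # descent step on df/dd = 2d
        c += lr * (-2 * c)  # ascent step on df/dc = -2c
    return d, c

d, c = descent_ascent(d=1.0, c=-1.0)
# Both players end up near the equilibrium.
assert abs(d) < 1e-6 and abs(c) < 1e-6
```

The toy objective is chosen only so the dynamics are easy to check; in RECON the two players' losses are defined over code representations, not scalars.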
data leakage. While the time-split setting underscores the Overall,theresultsshowthattheGamemodulecancapture imperative for robust vulnerability detection methods [66], the semantic-agnostic features to enhance the vulnerability it necessitates that the vulnerability detection system pos- detectionperformance. sesses the capability to discern patterns associated with genuinevulnerabilities,withoutbecomingoverlyfixatedon semanticsorirrelevantinformationtothevulnerabilities. 6.5.2 Class-levelPrototypeLearningModule AsdelineatedinTable4(a),RECONcontinuestooutper- form other baseline methods. Specifically, under the time- To explore the contribution of the class-level prototype split setting, RECON exhibits performance enhancements learning module, we also construct a variant of RECON of 0.85%, 2.41%, 5.86%, and 4.24% across the four metrics, without the class-level prototype learning (i.e. Prototype) respectively. This underscores RECON’s superior ability to module. The other setting of this variant is consistent with recognize vulnerability-related information in real-world RECON. scenarios,surpassingitscounterparts. As shown in Table 4 (b), this variant improves the Table 4 (a) also shows that baseline methods incor- RECON performance in 7 out of the 8 cases, which porating graph or path structures (i.e. Devign, Reveal, achievesaverageimprovementsof0.23%,3.35%,0.85%and DeepWuKong, EPVD, and RECON) consistently outper- 2.04% across four metrics, respectively. In the identifier- form those relying solely on sequences (i.e. CodeBERT and substitutionsetting,theclass-levelprototypelearningmod- CodeT5). It is noteworthy that methods utilizing a GNN- ule exhibits a lower performance improvement compared basedmodeldonotexhibitsubstantialperformancedegra- with the original setting. 
This can be attributed to that this dationunderthetime-splitsetting,withperformancevaria- difficult setting leads to a challenge in prototype learning, tionsof0.68%,-0.33%,and-0.49%,respectively.Incontrast, which makes it more difficult to accurately capture vulner- whileEPVDandRECONperformexceptionallywell,there abilitypatterns.12 7 DISCUSSION Answer to RQ5: The zero-sum game construction and class-level prototype learning module can im- 7.1 WhydoesRECONWorkWell? prove the performance of RECON. The Game mod- (1) The ability to learn the vulnerability representa- ulehasagreatereffectcomparedwiththePrototype tion in original and identifier-substitution settings. The module,whichaveragelyimprovestheperformance proposed zero-sum game construction module and class- of0.82%,10.38%,8.37%,and9.55%acrossfourmet- level prototype learning module greatly contribute to RE- rics,respectively. CON by enabling it to learn vulnerability representations effectively. More specifically, we design a Calibrator to capture semantic-agnostic features, thereby improving the 6.6 RQ6. Influence of Different Hyper-Parameters in performance of the Detector. We employ the T-SNE [67] RECON technique for visualizing the vulnerability representations within CodeBERT (i.e. Fig. 7 (A) and (B)), EPVD (i.e. Fig. 7 To answer RQ6, we explore the impact of different hyper- (C)and(D)),andRECON(i.e.Fig.7(E)and(F)),inboththe parameters,includingthetrainingbatchsizeandtheweight ofregularizationlossL reg. originalandidentifier-substitutionsettings. As illustrated in Fig. 7, it becomes apparent that Code- BERTdoesnotprovideadiscriminativevulnerabilityrepre- 6.6.1 BatchSize sentation.BothEPVDandRECONexhibitanabilitytodis- In the zero-sum game construction module, we propose a cernvulnerabilitycharacteristics.However,itisnoteworthy calibrator C to distinguish the vulnerable code from the that EPVD’s decision-making tends to be overconfidence. corresponding fixed code. 
We conduct the experiment on This is evidenced by the presence of the orange boxes in how the batch size (i.e., the pair of vulnerable code and Fig. 7 (C) and (D), indicating that EPVD inadvertently ob- the corresponding fixed codes) impacts the performance of fuscates certain semantically similar vulnerability samples. RECON. The challenge posed by the identifier-substitution setting Fig. 6 (A) and (B) show the performance of RECON exacerbates this issue. Conversely, RECON demonstrates a acrossfourmetricswithdifferentbatchsizesinoriginaland discriminativeabilitytoeffectivelydistinguishbetweenthe identifier-substitutionsettings,respectively.Thelargerbatch representations. sizesconsistentlyyieldimprovedperformanceforRECON. (2)ThegamingprocessofRECONleadstoanincrease Whenthebatchsizeissetto16,weachieveperformanceof in the vulnerability detection performance. The proposed 99.06%,91.19%,86.42%,and88.74%acrossthefourmetrics, zero-sumgameconstructionmoduleisdevelopedbasedon respectively. However, for batch sizes exceeding 16, we a zero-sum game framework involving two key players: observe that the RECON’s performance exhibits relative DetectorandCalibrator,whichenhancestheoverallperfor- stability. It is difficult for the model to learn vulnerability- mance of vulnerability detection. Fig. 8 (A) visualizes the related information from the less varied utterances when loss functions of both Detector and Calibrator throughout the batch size is small. Furthermore, our analysis reveals thetrainingprocess. |
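The pairwise batching used in the batch-size experiment of Section 6.6.1 (each batch holds vulnerable snippets together with their fixed counterparts) can be sketched as follows; the pair records are illustrative, and the assumption that both members of a pair must land in the same batch is taken from the batch-size definition above:

```python
def paired_batches(pairs, batch_size=16):
    """Yield batches in which every vulnerable snippet appears
    alongside its corresponding fixed version, so a batch of 16
    pairs contains 32 code snippets in total."""
    for i in range(0, len(pairs), batch_size):
        chunk = pairs[i:i + batch_size]
        yield [snippet for vul, fixed in chunk for snippet in (vul, fixed)]

# Illustrative pair records.
pairs = [(f"vul_{i}", f"fixed_{i}") for i in range(40)]
batches = list(paired_batches(pairs, batch_size=16))
```

Keeping each pair intact within a batch is what lets a contrastive or game-style objective compare a vulnerable snippet directly against its near-identical fix.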
thattheidentifier-substitutionsettingismoresusceptibleto As can be seen, during the training phase, the Detector batchsize.Nevertheless,thetrendofimprovedperformance and Calibrator engage in a strategic interaction. Initially, withlargerbatchsizesremainsconsistentacrossbothorigi- they compete with each other, akin to a zero-sum game nalandidentifier-substitutionsettings. in the early stage of the training process (highlighted in red box). During this phase, as one player’s loss increases, 6.6.2 WeightofRegularizationLoss the other player’s loss also experiences a corresponding In the class-level prototype learning module, we propose rise. This dynamic reflects the competitive nature of their a regularization loss L reg to improve the generalization interaction.Duringthelattertrainingprocess,anoteworthy for vulnerability detection. We conduct the experiment to shift occurs, both Detector and Calibrator transition from exploretheeffectofregularizationweight. competition to cooperation, resulting in a mutually bene- AsshowninFig.6(C)and(D),theinfluenceofregular- ficial outcome. In this process, the losses for both players izationweightonRECONperformanceisrelativelymodest, simultaneously decrease. However, it is important to note exhibitingadiscerniblepatternofimprovementfollowedby that the rate of reduction differs between them. It shows adecline.Notably,theoptimalweightvalue,whereperfor- that both players are working towards a shared objective mance is maximized, is found at 0.01. It is noteworthy that (i.e. software vulnerability detection), which captures the theregularization’simpactonperformancevarianceismore semantic-agnostic features of the Calibrator for enhancing pronouncedinacontextwheresemanticrelevanceisabsent. theDetectorperformanceforvulnerabilitydetection. This observation underscores the efficacy of regularization (3)Thezero-sumgamecanbeconstructedindifferent weight in enhancing the ability to detect vulnerability pat- methods. 
In this paper, we have identified the advantages ternswithinsemanticallysimilarsetting. of constructing the zero-sum game and prototype learning for software vulnerability detection. Another advantage of Answer to RQ6: The performance of RECON is RECON is its flexibility and extensibility. RECON can be influencedbybatchsizeandweightofregularization appliedtotheexistingvulnerabilitydetectionmethods. loss.Ourdefaultsettingshaveundergoneoptimiza- For example, as shown in Fig. 8 (B), we enhance the tiontoyieldoptimalresults. CodeBERT method by incorporating the zero-sum game and prototype learning (i.e., CodeBERT*). The improved13 Accuracy Precision Recall F1 score 100 100 100 100 80 80 60 60 90 90 40 40 80 80 20 20 0 0 70 70 2 4 8 16 32 2 4 8 16 32 0.001 0.005 0.01 0.05 0.1 0.001 0.005 0.01 0.05 0.1 (A) The impact of batch size (B) The impact of batch size in (C) The impact of weight (D) The impact of weight in in original setting identifier-substitution setting in original setting identifier-substitution setting ℒ ℒ Fig.6:(A)and(B)respectivelydenotetheperformanceimpactsofvaryingbatchsizesforRECON,withintheoriginaland identifier-substitution settings. (C) and (D) correspondingly denote the performance resulting from different weights of regularizationlossinRECON. Vulnerable Non-vulnerable (A)CodeBERT in original setting (C)EPVD in original setting (E)RECON in original setting (B)CodeBERT in identifier- (D)EPVD in identifier- (F)RECON in identifier- substitution setting substitution setting substitution setting Fig. 7: T-SNE visualization of CodeBERT in (A) and (B), T-SNE visualization of EPVD in (C) and (D), and T-SNE visualizationofRECONin(E)and(F).Theuppersetoffigures(i.e.(A),(C),and(E))denotetheoriginalsettings,whilethe lowersetoffigures(i.e.(B),(D),and(F))illustrateidentifier-substitutionsettings.Thebluedotsdenotethevulnerableand thereddotsrepresentthevulnerablecodesnippets. 
CodeBERT* performs better than the original CodeBERT ProgrammingLanguageScope:Anotherthreatpertains by 1.72%, 0.85% and 1.14% across the precision, recall, and to the choice of programming languages considered. All F1 score metrics, respectively. This empirical observation instances of vulnerabilities in our paper have been devel- furtherunderscorestheeffectivenessoftheproposedframe- opedusingtheC/C++programminglanguages.Asaresult, workinvulnerabilitydetection. we have not incorporated other widely used languages, suchasJavaandPython.However,RECONcanbeapplied 7.2 ThreatsandLimitations to the other programming languages. In future research, Dataset Validity Concerns: One potential threat to the we intend to extend our investigations to these alternative validity of our study arises from the dataset we have programminglanguages. constructed. In this paper, owing to constraints in dataset Baseline Implementation: The third threat is the im- availability, we have exclusively relied upon Fan et al. plementation of baseline models. Throughout each exper- dataset, albeit employing diverse segmentation techniques imental configuration, we try our best to utilize the origi- to mitigate potential data leakage. In our future work, we nal source code of the baseline models obtained from the arecommittedtoconstructingamorerepresentativebench- GitHub repositories maintained by the respective authors. markdatasetformorecomprehensiveevaluation. Furthermore, we have maintained consistency by employ-14 CodeBERT CodeBERT* 1.72% ↑ 1.14% 0.85% ↑ ↑ |
Precision Recall F1 score (A)Game Training Process (B)Game in CodeBERT Fig.8:(A)ThelossfunctionsofbothDetectorandCalibratorthroughoutthegametrainingprocess.(B)Theperformance betweentheCodeBERTandCodeBERT*.TheCodeBERT*istheCodeBERTimprovedbythezero-sumgame. ing the identical hyperparameters in the original papers specialize in more fine-grained levels [79]. Specifically, authored. Graph-based methods [80], [81], [82], [83] generate the structural graph from source code and use Graph Neutral 8 RELATED WORK Networks (GNNs) for software vulnerability detection. For example,CPGVA[84]combinestheAST,CFG,andDFGand Software vulnerability detection is a crucial component of generates the code property graph (CPG) to vulnerability software security, which involves the identification and detection. IVDetect [73] utilizes a Program Dependency mitigation of potential security risks. The traditional vul- Graph (PDG) and combines different information into the nerability detection methods are Program Analysis (PA)- vulnerability representation. Cao et al. [85] utilize the PDG based methods [25], [68], [69], [70], which necessitate ex- and propose MVD to detect fine-grained memory-related pert knowledge and extract manual features tailored to vulnerability. LineVD [86] uses the Graph Attention Trans- specific vulnerability types, such as BufferOverflow [71], former (GAT), which characterizes the data and controls [72]. In contrast, recent research has increasingly focused dependency information in statement-level vulnerability on learning-based software vulnerability detection, which detection. offers the capacity to identify more vulnerability types and Inthispaper,wefocusonthezero-sumgametopresent a greater quantity of vulnerabilities [14]. 
Moreover, these anovelframework,whichinvolvesazero-sumconstruction approacheshavetheabilitytolearnimplicitlyvulnerability module for capturing the semantic-agnostic features and a patterns from historical data pertaining to known vulner- class-level prototype learning module for capturing repre- abilities [73]. Learning-based vulnerability detection tech- sentativevulnerabilitypatterns. niques can be broadly classified into two categories based on the input representations and training model utilized: sequence-basedmethodsandgraph-basedmethods. Sequence-based methods [74], [75], [76], [77] convert 9 CONCLUSION code into token sequences, which first use the Recurrent Inthispaper,weproposeasoftwarevulnerabilitydetection Neural Networks (RNNs) and LSTM to learn the features. framework with a zero-sum game and prototype learning, For example, VulDeepecker [14] generates the code gad- named RECON. It mainly consists of a zero-sum game gets from the source code as the granularity to train the construction module for capturing the semantic-agnostic Bidirectional(Bi)-LSTMnetworkforvulnerabilitydetection. features and a class-level prototype learning module for SySeVR [15] also extracts code gadgets by traversing AST capturingrepresentativevulnerabilitypatterns.Inaddition, andthenleveragesaBi-LSTMnetwork.Inrecentyears,with wealsodesignabalancegap-basedtrainingstrategytoen- theincreaseinthenumberandtypeofsoftwarevulnerabil- surethestabilityoftrainingprocess.Extensiveexperiments ities, the adoption of pre-trained models with subsequent in software vulnerability detection are conducted to evalu- fine-tuning has emerged as an approach for vulnerability ate the effectiveness of RECON in the popular benchmark detection [78]. 
For example, CodeBERT [33] is an encoder- dataset.Inthefuture,weintendtofurtherevaluateRECON only pre-trained model rooted in the Roberta architecture, onabroaderrangeofdatasetsandprogramminglanguages which is applied to the down-steam task of vulnerability forsoftwarevulnerabilitydetection. detection following a fine-tuning process. CodeT5 [2] is aninnovativeTransformer-basedmodelthatconceptualizes the vulnerability detection task within the sequence-to- DATA AVAILABILITY sequenceframework. Comparedwiththesequence-basedmethods,thegraph- Our source code and experimental data are available at: based methods provide more structural information and https://anonymous.4open.science/r/RECON15 REFERENCES TestingandAnalysis,ISSTA2012,Minneapolis,MN,USA,July15-20, 2012,M.P.E.HeimdahlandZ.Su,Eds. ACM,2012,pp.254–264. [1] S. Chakraborty, R. Krishna, Y. Ding, and B. Ray, “Deep learning [22] Y. Fu, M. Ren, F. Ma, H. Shi, X. Yang, Y. Jiang, H. Li, and based vulnerability detection: Are we there yet?” CoRR, vol. X.Shi,“Evmfuzzer:detectEVMvulnerabilitiesviafuzztesting,” abs/2009.07235,2020. in Proceedings of the ACM Joint Meeting on European Software En- [2] Y.Wang,W.Wang,S.R.Joty,andS.C.H.Hoi,“Codet5:Identifier- gineering Conference and Symposium on the Foundations of Software aware unified pre-trained encoder-decoder models for code un- Engineering,ESEC/SIGSOFTFSE2019,Tallinn,Estonia,August26- derstandingandgeneration,”inProceedingsofthe2021Conference 30,2019,M.Dumas,D.Pfahl,S.Apel,andA.Russo,Eds. ACM, onEmpiricalMethodsinNaturalLanguageProcessing,EMNLP2021, 2019,pp.1110–1114. VirtualEvent/PuntaCana,DominicanRepublic,7-11November,2021, [23] Y. Li, S. Ji, C. Lv, Y. Chen, J. Chen, Q. Gu, and C. Wu, “V- |
arXiv:2401.11108

LLM4FUZZ: Guided Fuzzing of Smart Contracts with Large Language Models

Chaofan Shou (UC Berkeley), Jing Liu (UC Irvine), Doudou Lu (Fuzzland Inc.), Koushik Sen (UC Berkeley)

Abstract

As blockchain platforms grow exponentially, millions of lines of smart contract code are being deployed to manage extensive digital assets. However, vulnerabilities in this mission-critical code have led to significant exploitations and asset losses. Thorough automated security analysis of smart contracts is thus imperative. This paper introduces LLM4FUZZ to optimize automated smart contract security analysis by leveraging large language models (LLMs) to intelligently guide and prioritize fuzzing campaigns. While traditional fuzzing suffers from low efficiency in exploring the vast state space, LLM4FUZZ employs LLMs to direct fuzzers towards high-value code regions and input sequences more likely to trigger vulnerabilities. Additionally, LLM4FUZZ can leverage LLMs to guide fuzzers based on user-defined invariants, reducing blind exploration overhead. Evaluations of LLM4FUZZ on real-world DeFi projects show substantial gains in efficiency, coverage, and vulnerability detection compared to baseline fuzzing. LLM4FUZZ also uncovered five critical vulnerabilities that can lead to a loss of more than $247k.

1 Introduction

As decentralized applications and blockchain platforms experience exponential growth [52] [35], millions of lines of smart contract code managing billions of dollars in digital assets [45] [3] have been deployed. Regrettably, numerous high-profile attacks have exploited vulnerabilities in this mission-critical code, resulting in significant losses through logic bugs that auditors overlooked. Therefore, it is imperative to perform exhaustive security analyses of smart contracts before deployment.

Traditional manual auditing of vast smart contract codebases is error-prone and often overlooks corner-case flaws. The industry increasingly uses automated methods like testing, dynamic analysis, and formal verification to overcome these limitations. Guided fuzz testing [49] [19] [48] stands as one of the most prevalent and reliable techniques employed by smart contract developers and auditors. Fuzzing is a heuristic search algorithm for finding test inputs (or test cases) that reveal vulnerabilities. A fuzzer keeps a collection of interesting test cases and mutates them individually to produce new test cases, subsequently running the software on those test cases. By generating and executing these test cases thousands of times per second, fuzzers are adept at covering code regions and discovering complex vulnerabilities.

Still, fuzzing for smart contracts has limitations because fuzzers do not understand the semantics of the target. Due to this, much time is wasted on exploring code regions (i.e., mutating test cases that cover the code regions) that are well-tested and unlikely to contain vulnerabilities. Additionally, complex if-statements make certain code regions much harder to cover fully. Yet, fuzzers allocate the same amount of effort to exploring easy and hard-to-cover code regions, reducing their efficacy. Lastly, smart contracts are stateful software. To trigger a vulnerability, one needs a sequence of inputs (consisting of a function call and the arguments) instead of a single input. Each input potentially mutates the state, and subsequent input depends on the mutated state. Creating such a sequence is non-trivial for fuzzers as they need to ensure every input in the sequence is valid, and the order of the sequence is correct [43].

Recent developments in LLMs have shown their ability to find vulnerabilities in smart contracts and develop potential exploits [14] [46] [47] [44]. However, directly asking LLMs about the existence of vulnerabilities commonly suffers from high false positives and negatives [16], restricting their broader adoption. Yet the accumulated knowledge and experience encoded within LLMs can be harnessed for a human-like code semantics analysis. This paper introduces a novel methodology, called LLM4FUZZ, to optimize smart contract fuzzing with LLMs. LLM4FUZZ uses LLM-generated metrics to orchestrate and prioritize the exploration of certain code regions of the target in fuzzing campaigns.
Specifically, we demonstrate that LLMs are capable of producing metrics like complexity and vulnerability likelihood of code regions quite accurately. We also show that LLMs can analyze invariants (manually inserted assertions) in the code and identify invariant-related code regions. The fuzzers can utilize these metrics to allocate more effort to exploring more fruitful code regions. In addition, we illustrate that LLMs can also identify potentially interesting sequences of function calls. By prioritizing exploring these sequences, fuzzers can efficiently find and reach interesting and vulnerability-leading states.

To harness the potential of LLMs, we extract a hierarchical representation of the smart contract, including source code, control flow graphs, data dependencies, and metrics produced by static analysis. These elements allow LLMs to perform semantics analysis, compare historical vulnerability patterns, and pinpoint potentially interesting transaction sequences. We embed this representation and our specific goals as prompts, allowing the LLMs to generate metrics for basic blocks and functions. This information is then encoded into schedulers of fuzzers, guiding them to explore test cases based on this refined prioritization.

We evaluated LLM4FUZZ on real-world complex smart contract projects with and without known vulnerabilities. LLM4FUZZ gains significantly higher test coverage on these contracts and can find more vulnerabilities in less time than the previous state-of-the-art. While scanning 600 live smart contracts deployed on the chain, LLM4FUZZ identified critical vulnerabilities in five smart contract projects, which can lead to significant financial loss.
2 Background

2.1 Smart Contracts

Smart contracts are programs deployed on blockchain networks that execute autonomous digital agreements. Acting as self-executing contracts, they encode complex business logic and handle extensive financial assets without centralized intermediaries. For instance, a smart contract might facilitate, verify, or enforce a negotiation or performance of a contract, such as an automatic payment upon receipt of goods.

These contracts accept transactions from distributed participants to alter the contract state on the immutable ledger. Popular platforms for smart contract development include Ethereum, Solana, and Cardano. In this paper, we focus on Ethereum smart contracts only. Developers often use languages like Solidity and Vyper, which compile to bytecode executed on Ethereum virtual machines.

Take the example of a decentralized finance (DeFi) application that enables users to lend and borrow cryptocurrency. A smart contract can be created to hold collateral and release funds when certain conditions are met, such as a borrower repaying the loan. This operation is governed by code, which automatically enforces the agreement without a traditional financial institution overseeing the transaction.

The persistent and decentralized nature of smart contracts necessitates thorough security analysis before deployment, given that any vulnerability can lead to a significant loss of funds. For instance, in the DAO attack [11] on the Ethereum network in 2016, a flaw in a smart contract led to the unauthorized withdrawal of over $60 million. The code's vulnerability lay in a recursive calling bug, exploited by an attacker to drain the funds.

Below, we introduce a set of commonly used services available on Ethereum or Ethereum-compatible chains implemented as smart contracts. These services are widely used inside attacks. Knowledge of these services is crucial to understanding the motivating examples and vulnerabilities we have found.

Tokens. One prominent feature of Ethereum is its support for tokens, which can be thought of as digital assets or units of value built on top of the Ethereum blockchain using a smart contract. These tokens can represent a wide array of digital goods, including, but not limited to, virtual currency, assets, voting rights, or even access rights to a particular application. The ERC-20 token standard [4], prevalent on the Ethereum blockchain, defines a consistent set of rules for fungible tokens. Key methods like balanceOf(address _owner) (querying the token balance of an account), transfer(address _to, uint256 _value) (transferring the token to another account), and others ensure uniform interactions across token contracts.

Liquidity Pools facilitate token exchanges or "swaps" in a trustless manner. A common implementation of a liquidity pool is Uniswap V2 Pool [8], which is a contract holding two tokens, where the price of each token in terms of the other is determined by the ratio of the quantities of these tokens in the pool. This automated market maker (AMM) mechanism follows the formula x × y = k, where x and y are the quantities of the two tokens in the pool, and k is a constant value. Users who wish to swap one token for another interact with the respective pool. The constant k ensures that as the amount of one token in the pool increases, the amount of the other decreases, maintaining a balance and adjusting prices accordingly. In liquidity pools, liquidity providers (LPs) like project owners supply two tokens to the pool to enable others to trade against this pooled liquidity. In return for their contribution, LPs often receive rewards.
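The constant-product rule can be checked numerically. The sketch below (a simplification in Python; the helper name and pool sizes are illustrative, not from the paper) shows that a swap preserves x × y = k while giving the trader slightly less than the naive 1:1 amount, which is exactly the price-impact behavior described above:

```python
def swap_x_for_y(x_reserve, y_reserve, dx):
    """Constant-product AMM (x * y = k): deposit dx of token X into the
    pool and receive dy of token Y such that the reserve product is kept."""
    k = x_reserve * y_reserve
    new_x = x_reserve + dx
    new_y = k / new_x
    dy = y_reserve - new_y
    return new_x, new_y, dy

# Pool starts with 1000 X and 1000 Y, so k = 1,000,000.
x, y, received = swap_x_for_y(1000.0, 1000.0, 100.0)

# The invariant still holds after the swap, and the trader receives
# fewer than 100 Y for the 100 X deposited (price impact).
assert abs(x * y - 1_000_000.0) < 1e-6
assert received < 100.0
```

Depositing more X into the pool makes Y relatively scarcer, so each additional unit of X buys less Y, which is how the pool "adjusts prices" without an order book.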
2.2 Feedback-driven Fuzzing

Fuzzing is an automated testing technique where inputs are generated randomly. Feedback-driven fuzzing [24] incorporates real-time feedback from the program execution on the randomly generated inputs. Instead of blindly generating test cases, feedback-driven fuzzing monitors the execution of the program. By tracking metrics such as code coverage, branch execution, or particular conditions within the code, it intelligently guides the generation of subsequent test cases that try to maximize the metrics. The actual generation of new test cases is performed by mutating an existing test case. This dynamic adaptation makes the fuzzing process more targeted and efficient.

Feedback-driven fuzzing typically uses power scheduling [7] [13] to allocate more time to exploring favored mutants of test cases. An energy value is calculated for each test case, quantifying how favored it is. To calculate the energy, fuzzers commonly rely on the execution time of the test case, the test case length, and how rare (determined by the hit rate) the path executed by the test case is.

Algorithm 1: Feedback-Driven Mutation Fuzzing
 1: procedure FEEDBACKDRIVENFUZZING(program)
 2:     corpus <- initialize with seed inputs
 3:     covMap <- initialize empty coverage map
 4:     while within time budget do
 5:         t <- select from corpus
 6:         for 0..Energy(t) do
 7:             mutant <- mutate t
 8:             newCoverage <- program(mutant)
 9:             if newCoverage is novel or interesting then
10:                 corpus <- corpus ∪ {mutant}
11:                 covMap <- update with newCoverage
12:             end if
13:         end for
14:     end while
15: end procedure
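The loop of Algorithm 1 translates almost directly into runnable code. The sketch below is illustrative only: the toy target, the append-a-character mutation operator, and the fixed energy stand in for the real components, and "interesting" is reduced to "reaches new coverage":

```python
import random

random.seed(1234)  # fixed seed so the run is reproducible

def feedback_driven_fuzzing(program, seeds, iterations=2000, energy=10):
    """Feedback-driven mutation fuzzing; a mutant is kept in the corpus
    only when it reaches coverage not seen before (Algorithm 1)."""
    corpus = list(seeds)                       # Line 2: seed corpus
    cov_map = set()                            # Line 3: empty coverage map
    for _ in range(iterations):                # Line 4: budget (iterations here)
        t = random.choice(corpus)              # Line 5: select a test case
        for _ in range(energy):                # Line 6: energy-many mutations
            mutant = t + random.choice("abcz")  # Line 7: toy mutation
            new_cov = program(mutant)          # Line 8: execute, observe coverage
            if not new_cov <= cov_map:         # Line 9: novel coverage?
                corpus.append(mutant)          # Line 10
                cov_map |= new_cov             # Line 11
    return corpus, cov_map

# Toy target: "coverage" is the set of prefixes of "abc" the input matches,
# so reaching "abc" requires chaining three lucky mutations.
def target(s):
    return {goal for goal in ("a", "ab", "abc") if s.startswith(goal)}

corpus, cov = feedback_driven_fuzzing(target, [""])
```

Because uninteresting mutants are discarded, the corpus stays tiny while coverage grows monotonically; with this budget the toy fuzzer reaches all three coverage goals.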
The pseudocode of the feedback-driven fuzzing algorithm with power scheduling is shown in Algorithm 1. It begins by initializing a corpus and coverage map (Lines 2-3). Within a loop (Lines 4-14), it selects a test case (Line 5) and queries its energy (Line 6). Then, for an "energy" amount of time, the algorithm mutates the test case (Line 7), executes the mutated test case (Line 8), and updates the corpus and coverage map if new or interesting paths are found (Lines 9-11). The loop continues until the time budget of the campaign has been exceeded. In general, this algorithm can efficiently guide the fuzzing to uncover code vulnerabilities by prioritizing exploring interesting test cases.

The algorithm can be used for fuzzing both traditional software and smart contracts. For smart contracts, a test case would be a sequence of inputs (i.e., transaction calls), and mutation could be inserting a new input into the sequence, removing inputs from the sequence, or mutating an individual input. However, traditional feedback-driven fuzzing using test coverage is ineffective for testing smart contracts due to their stateful nature. Test coverage is not a good metric for deciding which input should follow another. Existing smart contract fuzzers like SMARTIAN [15] leverage dataflow analysis to determine the "sequential interestingness" (how interesting a sequence is). A sequence is deemed interesting when it leads to a unique dataflow pattern. ITYFUZZ [43] similarly introduces custom dataflow and comparison waypoints [37] (runtime metrics providers) to determine state interestingness. More interesting states (derived from a sequence of inputs) are prioritized for exploration.

2.3 Large Language Models

Derived from the Transformer architecture [58], models like GPT-4 [5] are characterized by their vast number of parameters (often in the billions or trillions), enabling them to capture intricate patterns in language. Trained using unsupervised learning on extensive textual datasets, LLMs refine their parameters by predicting subsequent words in sequences, thereby gaining proficiency in language structure, semantics, and context. Recent works have applied LLMs to boost the fuzzing of traditional software [57] [27] [17] [18] [29] [9], serving as test case generators and mutators. However, their potential to integrate deeply into the fuzzer or enhance smart contract security analysis remains unexplored.

3 Motivating Example

3.1 The AES Exploit

The following code segment is taken from a well-known smart contract project AES, which has been exploited; $62k worth of assets have been stolen [1].

    function sellTokenAndFees(
        address from,
        address to,
        uint256 amount
    ) internal {
        uint256 burnAmount = amount.mul(3).div(100);
        uint256 otherAmount = amount.mul(1).div(100);
        amount = amount.sub(burnAmount);
        swapFeeTotal = swapFeeTotal.add(otherAmount);
        super._burn(from, burnAmount);
        super._transfer(from, to, amount);
    }

    function distributeFee() public {
        ...
        super._transfer(uniswapV2Pair, technologyWallet, swapFeeTotal);
        ...
        swapFeeTotal = 0;
    }

The function sellTokenAndFees is called every time users attempt to sell this token (checked by whether the token is sent to the liquidity pool). This function takes a fee (deducted from the amount the recipient receives) and adds the fee value to the state variable swapFeeTotal, which records the profit of the project owner. Intuitively, project owners take a tax from selling their tokens. distributeFee can be called by everyone to distribute the swapFeeTotal amount of token from the liquidity pool to project owners. This could lead to a price inflation attack. An attacker can transfer some token to the liquidity pool (recognized as selling tokens by the above code even though it is not) and ask the liquidity pool to refund (supported by the liquidity pool to prevent accidental transfers). Suppose the attacker initially has v amount of token. After this operation, they are left with v − burnAmount token, as the selling fee is deducted. Then, the attacker can call distributeFee to transfer out a significant amount of token (2 × burnAmount) from the liquidity pool, making the pool imbalanced, which could result in price inflation of the token.
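The accounting behind the refund trick can be checked with a few lines of arithmetic. This is a deliberate simplification (it tracks only token balances and one round trip, ignores pricing, and the starting balances are made up), but it shows the core inconsistency: swapFeeTotal grows even though the pool never actually retains a fee, so distributeFee later drains real tokens from the pool:

```python
def sell_token_and_fees(amount):
    """Mirror of sellTokenAndFees: 3% is burned, 1% is recorded as an
    owner fee, and amount - burnAmount reaches the recipient."""
    burn = amount * 3 // 100
    fee = amount * 1 // 100
    return amount - burn, burn, fee

v = 10_000                       # attacker's starting token balance
pool = 100_000                   # pool's token balance (illustrative)
swap_fee_total = 0

# Attacker transfers v tokens to the pool; the contract misreads this
# plain transfer as a sale and books a fee.
to_pool, burn, fee = sell_token_and_fees(v)
pool += to_pool
swap_fee_total += fee

# The pool refunds the "accidental" transfer in full.
pool -= to_pool
attacker = to_pool               # attacker now holds v - burnAmount

# distributeFee moves swap_fee_total out of the pool, although the pool
# kept nothing from the fake sale: the pool ends up strictly poorer.
pool -= swap_fee_total
assert attacker == v - burn
assert pool < 100_000
```

Repeating the round trip inflates swap_fee_total further at a cost of only the 3% burn per cycle, which is why the imbalance (and hence the price inflation) is profitable for the attacker.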
To successfully inflate the token price such that the earnings could be greater than the cost of inflating it, an attacker needs to borrow the token and use at least eight inputs, in the correct order, in a sequence. Each input requires valid arguments that the fuzzer takes time to find. To generate such a profitable exploit, a naïve stateful fuzzer would make on average 22^8 · C attempts, where C is the average number of attempts needed to get valid arguments for one input, as there are 22 candidate functions to call. Even with more advanced fuzzers like ITYFUZZ and SMARTIAN with complex heuristics, it still could not be uncovered within a reasonable time.

    1  uint private unlocked = 1;
    2  modifier lock() {
    3      require(unlocked == 1, 'Pancake: LOCKED');
    4      unlocked = 0;
    5      _;
    6      unlocked = 1;
    7  }
    8
    9  function mint(address to) external lock returns (uint liquidity) {
    10     ...
    11 }
    12
    13 function burn(address to) external lock returns (uint amount0, uint amount1) {
    14     ...
    15 }
    16
    17 function swap(uint amount0Out, uint amount1Out, address to, bytes calldata data) external lock {
    18     ...
    19 }

Another interesting observation is that ITYFUZZ [43] and SMARTIAN [15] are significantly more interested in the liquidity pools of the project. After inspecting the power scheduling process, we identify the reason to be that the Uniswap V2 liquidity pool contract uses a state variable "unlocked" (Line 1) to prevent reentrancy attacks, which disallows external calls made by such a contract to call certain functions in it again. Specifically, the contract reads "unlocked" at the start of the execution. If it is false, then the execution will fail (Line 2). Then, it sets "unlocked" to false at the start and true at the end (Lines 3-6). ITYFUZZ considers there to be dataflows among functions using "unlocked". Thus, it would prioritize analyzing those functions and their sequences even though such a dataflow is meaningless and exists between almost every function in the liquidity pools.

3.2 Challenges

To summarize, the traditional fuzzing approach faces challenges working on projects exhibiting the following characteristics:

1. Distractions. The project under test may be a system of tens or even hundreds of smart contracts with thousands of public functions. Traditional fuzzers cannot identify and allocate more energy to the most vulnerable functions.

2. Traps. Traditional fuzzers also waste effort on functions that might be well-tested and have no vulnerabilities. One example is projects with Uniswap liquidity pools. The contracts in Uniswap liquidity pools are well-fuzzed and manually audited. Therefore, there is no need to allocate the same amount of energy to those contracts as to project-specific contracts that are poorly tested. Traditional fuzzers may also get trapped in functions that are not complex but impossible to cover (e.g., functions that can only be invoked by certain accounts). Those functions are assigned the same or even more energy than functions explorable by fuzzers.

3. Complex Sequential Dependencies. To trigger most vulnerabilities or find exploits today, the fuzzer has to be able to produce long, interesting input sequences. Traditional fuzzers leverage dataflow information or comparison information to determine the sequences. However, in real-world projects, using solely that information can generate millions of "interesting" sequences that are not actually interesting semantically.

4 Methodology

We present our techniques for using LLMs to guide and prioritize the fuzzing of smart contracts intelligently.

4.1 Overview

The workflow of LLM4FUZZ is depicted in Figure 1. LLM4FUZZ first converts each smart contract into its abstract syntax tree (AST) and then performs static analysis. This stage is important because we want the LLM to make well-informed decisions by providing the necessary static analysis results. Then, the results are passed to producers, which extract scores using the LLM from each code snippet created from the previous analysis. In the following sections, we introduce four producers: complexity, sequential likelihood, vulnerability likelihood, and invariant dependencies. The results of producers are encoded as energy for the test cases in the corpus of LLM4FUZZ, which we collectively refer to as consumers, as they consume the results from producers.

[Figure 1: Workflow of LLM4FUZZ. Parsing and context extraction turn each contract into code segments with static analysis results; prompts are prepared for the producers (complexity, vulnerability likelihood, sequential likelihood, invariant relations); the LLM is invoked to score them, and the consumers (corpus scheduling, among others) use the responses.]
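The paper does not spell out how producer scores become test-case energy, so the consumer side can only be sketched under assumptions: below, a test case is a sequence of calls, each function has LLM-produced scores, and the (hypothetical) combination rule scales a base energy by the most complex and most vulnerability-prone function the sequence touches:

```python
from collections import namedtuple

# A transaction call in a test-case sequence (illustrative structure).
Call = namedtuple("Call", "function args")

def schedule_energy(test_case, scores, base_energy=10):
    """Sketch of a consumer: scale a test case's mutation energy by
    LLM-produced per-function scores. The max-product rule here is an
    assumption, not the paper's actual formula."""
    called = [c.function for c in test_case]
    complexity = max(scores[f]["complexity"] for f in called)
    vuln = max(scores[f]["vulnerability_likelihood"] for f in called)
    return int(base_energy * complexity * vuln)

scores = {
    "distributeFee": {"complexity": 2.0, "vulnerability_likelihood": 3.0},
    "transfer": {"complexity": 1.0, "vulnerability_likelihood": 1.0},
}
seq = [Call("transfer", ()), Call("distributeFee", ())]
energy = schedule_energy(seq, scores)  # sequences touching distributeFee get more budget
```

Plugged into Algorithm 1's Energy(t), a scheduler like this makes the fuzzer spend its mutation budget on sequences that reach the regions the producers flagged, rather than uniformly.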
the LLMs due to the limited context lengths they support, but it becomes more manageable through static analysis of the smart contracts. Specifically, we extract cyclomatic complexity, which measures the number of independent paths through the source code; state-variable dependencies, which provide insight into how different variables within the contract interact; external dependencies, which examine how the contract communicates with other external components; and control-flow information, which offers a detailed look at the order in which the individual elements of the contract are executed. These attributes are carefully extracted based on methodologies and techniques developed in previous research. The attributes are then used along with the source code to create the prompts.

The task of submitting this detailed information to the language models presents its own set of challenges. The most straightforward approach would be to encapsulate all the information of the entire smart contract within a single prompt. However, most language models have a token limit of 32K tokens, which creates a significant constraint, as almost all smart contract code exceeds this limit. To circumvent this limitation, we analyze the smart contract function by function, thereby reducing the overall size of the prompt. We begin by extracting all public functions within the smart contract, and for each function, we conduct a recursive analysis, identifying all related functions that the given function depends on by analyzing the call graph and data-dependency graph. We then include the source code of these related functions in the prompt.

What distinguishes our approach is the reduction in prompt size together with the careful prevention of information loss. By focusing only on the information that each function depends on with regard to calls, control flows, and data flows, we maintain the integrity of the information while conforming to the token limits of the models.

4.3 Producer: Complexity

The complexity of code segments within a program provides valuable insight into the challenges a fuzzer faces in covering each segment. By understanding this complexity, fuzzers can strategically allocate more time and resources to exploring the code's more intricate and challenging segments. This approach enhances the efficiency of fuzzing and ensures a more comprehensive examination of potential vulnerabilities.

Traditionally, this information is extracted using cyclomatic complexity. However, cyclomatic complexity is not an accurate representation of the complexity of smart contract code. First, cyclomatic complexity does not account for the statefulness of a smart contract. For instance, the following is the most complex function in an NFT smart contract, which transfers an NFT to another account.

function _transfer(address from, address to, uint256 tokenId) {
    require(ERC721.ownerOf(tokenId) == from, "ERC721: transfer from incorrect owner");
    _beforeTokenTransfer(from, to, tokenId);
    _approve(address(0), tokenId);
    _balances[from] -= 1;
    _balances[to] += 1;
    _owners[tokenId] = to;
    emit Transfer(from, to, tokenId);
    _afterTokenTransfer(from, to, tokenId);
}

The complexity mainly comes from the dependencies on _balances, _owners, etc., which are all state variables that cannot be directly modified. To estimate the complexity of state dependencies, one needs to recursively look up the functions writing to those state variables, which are extremely hard to find without false positives using static analysis or symbolic execution. LLMs, on the other hand, can reason about those stateful dependencies semantically, so unrelated state dependencies can easily be filtered out based on the inference capability of LLMs.
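The function-by-function prompt assembly described above can be sketched as follows. This is a minimal illustration only: the function names, graph encodings, and toy contract are ours, not from the LLM4FUZZ implementation, which derives the call graph and data dependencies with Slither.

```python
from typing import Dict, Set

def related_functions(fn: str, call_graph: Dict[str, Set[str]],
                      data_deps: Dict[str, Set[str]]) -> Set[str]:
    """Recursively collect every function that `fn` depends on via calls or data flow."""
    seen: Set[str] = set()
    stack = [fn]
    while stack:
        cur = stack.pop()
        if cur in seen:
            continue
        seen.add(cur)
        stack.extend(call_graph.get(cur, set()))
        stack.extend(data_deps.get(cur, set()))
    seen.discard(fn)
    return seen

def build_prompt(fn: str, source: Dict[str, str],
                 call_graph: Dict[str, Set[str]],
                 data_deps: Dict[str, Set[str]]) -> str:
    """One prompt per public function: its code plus only the code it depends on."""
    deps = related_functions(fn, call_graph, data_deps)
    parts = [source[fn]] + [source[d] for d in sorted(deps)]
    return "\n\n".join(parts)

# Toy contract: transfer() calls _transfer(), which touches state written by
# mint(); approve() is unrelated and therefore stays out of the prompt.
call_graph = {"transfer": {"_transfer"}, "_transfer": set(),
              "mint": {"_transfer"}, "approve": set()}
data_deps = {"_transfer": {"mint"}, "transfer": set(),
             "mint": set(), "approve": set()}
source = {f: f"function {f}(...) {{ ... }}" for f in call_graph}

print(sorted(related_functions("transfer", call_graph, data_deps)))
# ['_transfer', 'mint']
```

Restricting each prompt to this dependency closure is what keeps the per-function prompts under the 32K-token limit while preserving the calls, control flows, and data flows the model needs.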
Second, cyclomatic complexity does not account for branch permissiveness (i.e., the likelihood of taking a branch with uniformly distributed inputs). For instance, in the code below, cyclomatic complexity concludes that there is one branch and thus that the code is not complex. However, it is extremely complex to cover the true branch. Recent research [38] proposes using model counters to account for branch permissiveness; yet, this takes a huge amount of computational resources and time.

if (pow(input, 2893) == 23947023413443923032) {
    bug();
}

LLMs, on the other hand, can understand the code semantically and recognize this as an extremely unpermissive branch, which implies that the code is complex:

Human: How complex is this code segment: if (pow(input, 2893) == ...) compared to if (pow(input, 2) == 9)

GPT-4: The first code segment is more complex due to the large exponent, which could lead to numerical issues. The second code segment is simpler, raising the input to the power of 2.

4.4 Producer: Sequential Likelihood

Existing fuzzing approaches for smart contracts like ITYFUZZ and SMARTIAN rely on static or dynamic analysis techniques to derive dependencies between contract functions and to generate sequences of related functions as test cases. However, accurately tracking data flow in smart contracts is challenging. Even when data-flow information is available, functions reading and writing the same state variables may still be independent in practice. For example, in the Uniswap pair contract, most functions write and access the "lock" variable simply to prevent reentrancy attacks, not because they are semantically related. Unlike static analysis, LLMs have more contextual knowledge about smart contract semantics. Thus, we leverage LLMs to generate interesting function-call sequences that are likely to uncover new behaviors, based on the following prompt:

"Suggest a series of interesting sequences given the following public Solidity functions and their code. Then, rank the interestingness of each sequence within the range of 0 to 100. Output one sequence per line in the form of <FunctionSignature1>=><FunctionSignature2>:<Interestingness>."
Lastly, smart contracts facilitate reentrancy (i.e., leaking control to the callers), hooks, and external calls. It is extremely hard to account for these cases using static analysis like cyclomatic complexity. In our experience with LLMs, we found that they can understand those types of transfers of control and conclude complexity based on comments and code semantics.

We construct a specialized prompt to analyze the complexity of functions in a smart contract using large language models. This prompt is designed to guide the models in evaluating complexity on a standardized scale of 100. By employing a uniform scale, we facilitate easy and meaningful comparisons between different functions within the contract. The prompt is as follows:

"How complex are the following Solidity code snippets (i.e., how hard is it to gain high test coverage by trying random arguments)? Rank within the range of 0 to 100. Output in the form of <Complexity 1>, <Complexity 2>, <Complexity 3>..."

The prompt is carefully structured to ensure the LLMs adhere to the specified format, eliminating unnecessary information. It is followed by the details of each function to be evaluated.

With this prompt, a language model replies with a list that denotes the complexity of each function. However, we recognize that a model may produce variations in the complexity values for the same function. To mitigate this inconsistency and derive a more reliable metric, we let the language model conduct the inference three times with different temperatures for each function, and then we compute the average of these three responses.

4.5 Other Producers

Previous research [50][51] matches vulnerability patterns by similarity and allocates more resources to code regions prone to vulnerabilities. However, this method fails if the vulnerable code is novel and has not been seen before, or if additional code has been inserted around known vulnerable code segments. More recent work [36][59][34] manually identifies a set of common vulnerability patterns and leverages taint analysis to locate potentially vulnerable code. Yet, this is inaccurate due to the statefulness of smart contracts, which makes it extremely hard to track data flow. LLMs, in contrast, can match vulnerable code patterns logically and semantically. This information is ideal for supplementing fuzzing. For instance, a token contract may have a customized transfer function with buy-tax, sell-tax, and airdrop features. No precise pattern can describe this transfer function's unintended or intended behavior. However, given the code segment of the function, an LLM can accurately summarize the logic and identify whether it may be susceptible to multiple attacks.

To analyze the likelihood of vulnerabilities inside the functions of a smart contract, we create the following prompt:

"How likely are the following Solidity snippets to cause vulnerabilities (e.g., logical issue, reentrancy, etc.)? Rank each in terms of 100. Output in the form of <Likelihood 1>, <Likelihood 2>, <Likelihood 3>..."

We can also use the same method to analyze invariants (user-defined assertions). Some commonly used invariants are that the total assets in the smart contract should be greater after certain calls, and that the value of a certain state variable shall never change.
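The three-temperature averaging used to stabilize all of these producer scores might look like the following. The `ask` stub stands in for a real call to the hosted Llama 2 model, and the reply values are invented for illustration:

```python
from statistics import mean
from typing import Callable, List, Sequence

def stabilized_scores(ask: Callable[[str, float], List[int]],
                      prompt: str,
                      temperatures: Sequence[float] = (0.9, 0.95, 1.0)) -> List[float]:
    """Query the model once per temperature and average the per-function scores."""
    runs = [ask(prompt, t) for t in temperatures]
    # zip(*runs) regroups the runs so each tuple holds one function's three scores
    return [mean(scores) for scores in zip(*runs)]

# Stub in place of a real LLM call; each reply scores the same two functions.
fake_replies = {0.9: [62, 10], 0.95: [70, 10], 1.0: [66, 13]}
ask = lambda prompt, t: fake_replies[t]

print(stabilized_scores(ask, "How complex are the following Solidity code snippets ..."))
# [66, 11]
```

Averaging over temperatures 0.9, 0.95, and 1.0 mirrors the paper's setup and damps the run-to-run variance a single sampled response would carry into the scheduler.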
Specifically, given a set of invariants, we can ask LLMs to rate the likelihood of a code region being a dependency of the invariant (i.e., how likely it is that the code region must be exercised to violate the invariant):

"How likely are the following Solidity code snippets to cause {{Invariant}} to be violated? Rank each in terms of 100. Output in the form of <Likelihood 1>, <Likelihood 2>, <Likelihood 3>..."

4.6 Consumers: Fuzzing Prioritization

We use a well-established corpus-scheduling algorithm, power scheduling [7][13], to prioritize certain test cases based on complexity, vulnerability likelihood, state dependencies, or invariant dependencies. A test case with higher energy (i.e., $e(t)$) in the corpus is favored for more mutations during fuzzing.

We created the following formula to derive the complexity energy $K_{complexity}(t)$, where $t$ is the test case. From the LLM, we can generate a complexity score for each basic block. Intuitively, a test case is more interesting (i.e., has higher $K_{complexity}(t)$) if its mutation can lead to the exploration of more complex code regions when a different branch is taken (i.e., when a neighboring basic block is covered). Thus, for each basic block the test case covers, LLM4FUZZ sums the complexity of its neighboring basic blocks to form the complexity score of the test case, and then divides by the total number of basic blocks covered by the test case to normalize. In the formula below, $C(b)$ denotes the complexity of the basic block $b$ generated by the LLM, $BB(t)$ returns all basic blocks executed by the test case $t$, and $neighbor(b)$ is the set of neighboring basic blocks of $b$ (i.e., basic blocks that share the same parent basic block):

$$K_{complexity}(t) = \frac{\sum_{b \in BB(t)} C(neighbor(b))}{|BB(t)|}$$

Invariant likelihood energy ($K_{invariant}$) can be derived similarly.

During our experiments, we identified that calculating the vulnerability likelihood energy $K_{vuln}(t)$ in a similar way leads to poor results. As shown in the motivating example, a vulnerable code region may need to be executed more than once in a sequence to construct a state that triggers a vulnerability, and the previous way of calculating energy ignores the interestingness of repeating the same input later in the sequence. Thus, for each test case, the vulnerability likelihood energy is the sum of the vulnerability likelihood of all the basic blocks that got executed in the functions called by the test case. In the formula below, $V(b)$ denotes the vulnerability likelihood of the basic block $b$ generated by the LLM, $Function(t)$ returns all functions called by the test case $t$, and $BB(Function(t))$ returns all the basic blocks in those functions:

$$K_{vuln}(t) = \sum_{b \in BB(Function(t))} V(b)$$

Sequential dependency is a producer providing a completely different kind of information: a list of function-call sequences with corresponding scores indicating how interesting each sequence is. For each test case, we extract all subsequences of the current call sequence and sum the scores of these subsequences. The formula for the sequential dependency energy $K_{seq}(t)$ is shown below, where $S(s)$ returns the score of the given sequence (0 when the sequence is not present in the response returned by the LLM), and $2^{Seq(t)}$ denotes all the subsequences of the test case:

$$K_{seq}(t) = \sum_{s \in 2^{Seq(t)}} S(s)$$

To combine the energy derived from each producer ($K_{complexity}$, $K_{invariant}$, $K_{vuln}$, $K_{seq}$), we use the following formula, where $e'(t)$ represents the original energy calculation widely used by fuzzers. To prevent the final energy from becoming too large, we cap it at 32 times the original energy; this avoids assigning too much energy to a single test case and starving the exploration of other code regions:

$$e_{cvs}(t) = \min\Big(e'(t) + \sum_{K_i \in K} e'(t) \cdot K_i(t),\ 32 \cdot e'(t)\Big)$$

In real-world experiments, we found that the fuzzer may not effectively utilize the complexity generated by an LLM. LLMs provide complexity values ranging from 0 to 100 that cluster around a specific value (i.e., they have a small standard deviation). Thus, the energy calculated above also has a small standard deviation, diminishing the impact of the LLM-generated complexity. An ideal energy distribution should instead have a large standard deviation (e.g., more than 100) to reflect the importance of some code regions over others. To enhance the influence of the complexity, we adjust it using the following formula, where $C_{fuzzer}$ is the complexity provided to the fuzzer and $C_{LLM}$ is the complexity generated by the LLM:

$$C_{fuzzer} = A \cdot C_{LLM} + B$$

$A$ and $B$ are constants obtained from hyperparameter optimization over several campaigns. In our case, $A = 1.15$ and $B = 1200$ yield the best performance. Similarly, the vulnerability likelihood is rescaled with the same technique before being provided to the fuzzer, and similar $A$ and $B$ yield the best performance.
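Under the definitions above, the energy computation can be sketched compactly. This is a simplified model: the real LLM4FUZZ operates on EVM basic blocks inside ITYFUZZ, whereas the dictionaries here are toy stand-ins.

```python
from itertools import combinations

def k_vuln(calls, vuln_of_block, blocks_of_fn):
    """K_vuln(t): sum LLM vulnerability likelihood over all basic blocks of all
    called functions (so repeated calls count repeatedly)."""
    return sum(vuln_of_block[b] for f in calls for b in blocks_of_fn[f])

def k_seq(calls, seq_scores):
    """K_seq(t): sum LLM scores over all subsequences of the call sequence;
    S(s) = 0 when the LLM did not rank the subsequence."""
    subs = {tuple(c) for r in range(1, len(calls) + 1)
            for c in combinations(calls, r)}
    return sum(seq_scores.get(s, 0) for s in subs)

def combined_energy(base, ks, cap=32):
    """e_cvs(t) = min(e'(t) + sum_i e'(t) * K_i(t), 32 * e'(t))."""
    return min(base + sum(base * k for k in ks), cap * base)

def rescale(c_llm, a=1.15, b=1200):
    """C_fuzzer = A * C_LLM + B widens the narrow spread of raw LLM scores."""
    return a * c_llm + b

calls = ("transfer", "skim", "fund")
blocks_of_fn = {"transfer": ["b0", "b1"], "skim": ["b2"], "fund": ["b3"]}
vuln_of_block = {"b0": 10, "b1": 40, "b2": 5, "b3": 80}
seq_scores = {("transfer", "skim"): 60, ("transfer", "skim", "fund"): 95}

print(k_vuln(calls, vuln_of_block, blocks_of_fn))   # 135
print(k_seq(calls, seq_scores))                     # 60 + 95 = 155
print(combined_energy(2, [1.5, 0.25]))              # 2 + 2 * 1.75 = 5.5
print(combined_energy(2, [20, 30]))                 # hits the 32x cap: 64
```

Note that `itertools.combinations` preserves element order, so its outputs are exactly the (order-preserving) subsequences that $2^{Seq(t)}$ denotes; the cap in `combined_energy` is what keeps one highly-scored test case from monopolizing mutation budget.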
5 Implementation

We have implemented LLM4FUZZ on top of the open-source ITYFUZZ with 244 lines of Rust. We use Slither to conduct static analysis and analyze ASTs, and we implement a preprocessor that queries the LLM in 3k lines of Python. For fast and cheap invocation of the LLM, we selected the Llama 2 70B model [6] hosted at Anyscale and Replicate. We do not use GPT-4 because it is too expensive. For each query, we invoke the LLM three times with temperatures 0.9, 0.95, and 1 and use the average of the results. We will open-source the tool upon acceptance.

6 Evaluation

In this section, we attempt to answer the following research questions:

• RQ1 (Test Coverage): Can LLM4FUZZ gain higher instruction coverage on various real-world projects? How many more instructions are covered by LLM4FUZZ?
• RQ2 (Vulnerabilities): Can LLM4FUZZ find more vulnerabilities in real-world projects than existing tools? Can LLM4FUZZ find new, undiscovered vulnerabilities in well-audited projects?
• RQ3 (Invariants): Can LLM4FUZZ make invariant testing more efficient than existing property-testing tools?
• RQ4 (LLM Sensitivity): Can different LLM producers provide unique and useful insights for LLM4FUZZ?

To answer these research questions, we collected 117 previously exploited smart contract projects, with LoC ranging from 100 to 100k, and compared the performance of LLM4FUZZ with ITYFUZZ. ITYFUZZ is selected as the baseline because it is the current state of the art and has been shown to be significantly better than all alternatives with respect to test coverage and the ability to detect real-world vulnerabilities. To minimize human impact and negligence, when running LLM4FUZZ on vulnerable projects, we directly forked the chain at the block immediately before the exploit was executed. We also created multiple ablations of LLM4FUZZ: LLM4FUZZ-C (L-C) only uses complexity data for corpus scheduling; LLM4FUZZ-V (L-V) only uses vulnerability likelihood data for corpus scheduling; and LLM4FUZZ-S (L-S) uses sequential likelihood data for state scheduling.

In addition, we also ran LLM4FUZZ on 600 unexploited on-chain projects on Binance Smart Chain, Base, Polygon, and Ethereum.

All experiments were performed on a node with two Xeon E5-2698 V3 CPUs (64 threads) and 256GB RAM. We repeat each campaign three times to minimize the impact of randomness in the fuzzer and report the average value. Unlike the ITYFUZZ paper, we run LLM4FUZZ on a single core to reduce the impact of IPC.

6.1 Test Coverage

In our assessment of the 117 projects, LLM4FUZZ covered a total of 1.6M instructions¹, in contrast to the baseline, which covered only 1.1M instructions, as shown in Figure 2 and Figure ??. We focused on a subset of 12 projects, with their individual coverage metrics presented in Figure 3. For specific projects, notably Bitpaidio and Sheep, LLM4FUZZ realized coverage that exceeded the baseline by more than two-fold. This large improvement can largely be attributed to the fact that, when informed by complexity, the fuzzer increasingly emphasizes more complex functions.

Figure 2: Total Instruction Coverage Over Time for Onchain Projects (x-axis: Time (s), 0-1250; y-axis: Instruction Coverage, up to 1.6 x 1e6; LLM4Fuzz vs. ItyFuzz).

Furthermore, the sequential likelihood metric is a valuable guide for the fuzzer. Traditionally, a fuzzer would require an average time complexity of O(n^k) to navigate, where n is the total number of functions and k represents the number of transactions required to reach a specific code location. Considering scenarios where n can exceed 400 and k might surpass 40, especially in real-world projects, the computational demands on the fuzzer can be prohibitively extensive, and discerning the optimal transaction sequences remains a formidable challenge despite integrating certain heuristics. Nevertheless, leveraging the sequential prioritization provided by LLMs, LLM4FUZZ can sidestep a significant portion of non-pertinent input sequences.

It is, however, imperative to acknowledge that the performance of LLM4FUZZ is not universally superior. In certain projects, its efficacy is comparable to or even eclipsed by the
baseline. Such deviations predominantly stem from instances where the LLM misinterprets specific code segments. A rigorous examination of instances where an LLM potentially mischaracterizes a smart contract is elucidated in [47][16].

¹We evaluate instruction coverage because several previous research works [15][43] have identified that branch coverage is not a good representation of test coverage for smart contracts.

Figure 3: Test Coverage Over Time for Selected Onchain Projects (HPAY, BNO, Bitpaidio, BUNN, WGPT, Sheep, BurgerSwap, BabyDogeCoin, PancakeBunny, LocalTrader, ATK, LaunchZone; x-axis: Time (s); y-axis: Instruction Coverage; the solid blue line is LLM4FUZZ and the red dotted line is ITYFUZZ (baseline)).

6.2 Performance on Known Vulnerabilities

Among the 117 projects, ITYFUZZ can generate exploits for the vulnerabilities in 45 projects, within 259 seconds on average. LLM4FUZZ, on the other hand, can generate exploits for the vulnerabilities in 82 projects, within 106 seconds on average. As shown in Table 1, the vulnerability detection time for LLM4FUZZ and its ablations (LLM4FUZZ-C, LLM4FUZZ-V, LLM4FUZZ-S) is overall better than the baseline (ITYFUZZ) among 15 randomly selected projects. While several projects, including NewFreeDAO, RFB, Sheep, BBOX, and ARA, exhibit an infinite detection time for the baseline, all LLM4FUZZ variations present finite and often vastly reduced times, showing the efficacy of LLM4FUZZ even in scenarios where the baseline falters entirely.

Among the ablations, LLM4FUZZ-C has the shortest vulnerability-finding time for most projects. LLM4FUZZ also performs consistently better than any of the ablations. This indicates that the combination of the producers reduces detection time more than any single producer can.

For certain projects, like RES, LLM4FUZZ performs worse than the baseline or is even completely unable to detect the vulnerability. From the ablations, we can clearly see that when LLM4FUZZ performs worse, LLM4FUZZ-C or LLM4FUZZ-V also performs worse. Thus, it is likely because the LLM provided a low complexity or low vulnerability likelihood score for the functions critical to triggering the vulnerabilities. As discussed previously, it is inevitable for the LLM to misunderstand certain parts of the contracts. One possible solution would be using a more capable but expensive LLM like GPT-4.

6.3 New Vulnerabilities Found

In addition to the vulnerabilities listed in Table 1, we also found 5 projects with critical vulnerabilities among the 600 projects that have gone through audits by reputable auditing firms. The affected projects hold a total of $247k worth of assets. For those five critical vulnerabilities, existing tools like Mythril and Echidna can find none of them. ITYFUZZ can find 3 of them, with one uncovered in 30 minutes and two taking days. On the other hand, LLM4FUZZ can uncover all of them in less than 30 minutes.

For instance, LLM4FUZZ found a project comprising an ERC20 token, multiple liquidity pools, and governance deployed on Binance Smart Chain to be exploitable. The vulnerability arises from two minor logical bugs that, on their own, are not exploitable.

The first small bug is in the implementation of the _transfer function (Lines 1-12 in Figure 4). Specifically, a fee amount of tokens is minted (i.e., created and transferred) to the token contract when the user transfers tokens to the liquidity pool (i.e., sells the token). Given that the fee is multiplied by 2, if the attackers transfer multiple times to the liquidity pool (and have the liquidity pool refund them), they can make the sum of their balance and the contract's significantly greater than what they initially had.
Project Name   Time (Baseline)   Time (L-C)         Time (L-V)         Time (L-S)         Time (LLM4FUZZ)
NewFreeDAO     inf               4.9s (-inf)        12.3s (-inf)       161.7s (-inf)      2.3s (-inf)
Axioma         31.9s             15.2s (-16.7s)     4.0s (-27.9s)      17.9s (-14.0s)     1.2s (-30.7s)
FAPEN          101.2s            12.6s (-88.6s)     75.6s (-25.6s)     8.0s (-93.2s)      16.9s (-84.3s)
cftoken        15.4s             1.9s (-13.6s)      9.2s (-6.2s)       4.4s (-11.0s)      1.5s (-13.9s)
THB            21.8s             inf (+inf)         18.2s (-3.6s)      29.9s (+8.1s)      247.0s (+225.2s)
RFB            inf               14.7s (-inf)       56.6s (-inf)       88.1s (-inf)       6.4s (-inf)
RES            3.1s              inf (+inf)         inf (+inf)         0.3s (-2.7s)       inf (+inf)
Yyds           5.2s              11.3s (+6.0s)      12.8s (+7.6s)      5.3s (+0.1s)       11.1s (+5.9s)
Sheep          inf               18.7s (-inf)       92.7s (-inf)       inf                12.5s (-inf)
AES            6.6s              3.0s (-3.6s)       4.6s (-2.1s)       2.8s (-3.9s)       1.7s (-4.9s)
HEALTH         62.3s             9.6s (-52.7s)      18.7s (-43.7s)     40.1s (-22.3s)     4.0s (-58.3s)
ApeDAO         249.6s            42.2s (-207.3s)    1.6s (-247.9s)     39.0s (-210.6s)    2.1s (-247.5s)
BBOX           inf               42.5s (-inf)       0.8s (-inf)        194.1s (-inf)      1.5s (-inf)
ARA            inf               52.1s (-inf)       175.1s (-inf)      957.8s (-inf)      57.2s (-inf)
SEAMAN         1112.3s           72.0s (-1040.3s)   12.3s (-1100.1s)   8.8s (-1103.6s)    3.9s (-1108.4s)

Table 1: Vulnerability Detection Time for LLM4FUZZ, Ablations of LLM4FUZZ, and ITYFUZZ (Baseline)

This small bug increases the total supply of the token, and the bug itself cannot be exploited to steal funds.

The second bug exists in the function fund (Lines 13-27). The intention of this function is to reward the owners with the fee collected in the contract. Instead of rewarding the owners with that token, it swaps those tokens to USDT (a token that can be directly redeemed for US Dollars) and transfers the proceeds to the owners. During the swap, the minimum USDT to receive is incorrectly set to 0, which allows sandwich attacks (the attacker sells a lot of tokens, then the victim sells the token but receives almost nothing, and finally the attacker buys back the token at a lower price, profiting from the sale).

These two bugs combined can lead to a critical vulnerability, allowing attackers to steal funds. Specifically, the attacker can leverage the first bug to mint a lot of tokens in the contract (e.g., to mint v tokens in the contract, the attacker only needs to spend v/2 tokens) and then conduct sandwich attacks to extract these tokens. However, it is non-trivial to leverage these two bugs to trigger the vulnerability. Notice that deflating tokens also leads to the fee being taken.

1  function _transfer(address from, address to, uint256 amount) internal override {
2      ...
3      if (isAMM(from, to) && !isAL(from, to)) {
4          fee = amount * fs / b;
5          realAmount = amount - fee;
6          super._mint(address(this), fee * 2);
7          super._burn(from, fee);
8          super._transfer(from, to, realAmount);
9
10     }
11     ...
12 }
13 function fund() public {
14     address[] memory path = new address[](2);
15     uint256 ta = IERC20(address(this)).balanceOf(address(this));
16     path[0] = address(this);
17     path[1] = USDT;
18     uniswapV2Router.swapExactTokensForTokens(
19         ta / 2, 0, path,
20         owner1, block.timestamp
21     );
22     uniswapV2Router.swapExactTokensForTokens(
23         ta / 2, 0, path,
24         owner2, block.timestamp
25     );
26     ...
27 }

Figure 4: Snippet of the vulnerable smart contract.

It is hard to make a profitable attack without intricate calculations of how many transfers are required and how many tokens to borrow. ITYFUZZ cannot find a profitable exploit in 24 hours. On the other hand, LLM4FUZZ can identify a profitable exploit consisting of at least 28 transactions in 1.9 hours on average. Specifically, the LLM assigned high complexity and high vulnerability likelihood to transfer, transferFrom, and fund. In addition, the sequential likelihood for transfer, skim (having the liquidity pool refund), fund is the highest among all other sequences.

6.4 Invariant Violation Detection Efficiency

To answer RQ3, we manually crafted four invariants that can only be violated under extreme conditions (i.e., they cannot be violated with simple invariant-testing tools like Echidna [26]). We show the results in Table 2. While ITYFUZZ takes minutes or times out to find the invariant violations, LLM4FUZZ can most of the time find the violations in less than a minute. By examining traces of ITYFUZZ, we identified that ITYFUZZ commonly gets stuck in functions that are not at all related to the violations. For instance, in Uniswap V2, ITYFUZZ spends a significant amount of time exploring mint and burn, while LLM4FUZZ spends most of the time finding the correct swap transaction, which is necessary to execute before triggering the invariant violation.

Invariant Name                ITYFUZZ   LLM4FUZZ
UniswapV2 Flashloan           459.1s    14.2s
UniswapV3 Flashloan           Timeout   92.0s
TeamFinance                   139.4s    6.5s
Sublime Pool Miscalculation   190.1s    58.8s

Table 2: Invariants Violation Time for ITYFUZZ and LLM4FUZZ

We also tried LLM4FUZZ on Daedaluzz, a benchmark for smart contract property-testing tools. However, there is no significant improvement, because the invariants are manually inserted into a generated puzzle and it is hard for the LLM to extract complexity or invariant-scoring information. Intuitively, if an LLM could extract this information from code without semantics, it would be a constant-time replacement for a constraint solver or model counter, which is impossible. However, we recognize that LLM-guided concolic execution for hybrid fuzzing may help. We leave this to future work.

6.5 Impact of the LLM

Some may consider that LLMs cannot analyze programs; that is, that LLMs generate the same dummy metrics for different producers on the same code regions. We show that this claim is incorrect by contrasting the complexity and vulnerability likelihood generated by the LLM for each code region. In Figure 5, a heatmap showing the number of code regions with certain complexity scores and vulnerability likelihoods, we demonstrate that the complexity score and vulnerability likelihood for the same code regions are mostly different (i.e., they do not form a straight line on the heatmap). Though there are many code regions with both complexity and vulnerability likelihood of 10 (expected, as some code regions do nothing, so they are neither complex nor vulnerable), the two metrics are generally weakly correlated, especially for code regions with complexity or vulnerability likelihood greater than 10. These observations imply that the LLM can distinguish these two producers without providing dummy metrics. In Figure 6, we further demonstrate that the LLM can generate different metrics for different invariants on the same code regions. This confirms that the LLM can also distinguish between invariants.

Figure 5: Amount of Code Regions with Certain Complexity and Vulnerability Likelihood (x-axis: Vulnerability Likelihood, 0-90; y-axis: Complexity Score, 0-90; heatmap counts up to 250).

Figure 6: Amount of Code Regions with Certain Invariant 1 and Invariant 2 Dependency Scores (x-axis: Invariant 2 Score, 0-90; y-axis: Invariant 1 Score, 0-90; heatmap counts up to 20).

6.6 Cost and Time

We used Llama 2 70B for all queries. The cost ranges from $1 per 1M tokens (Anyscale) to more than $3 per 1M tokens (Replicate). On average, LLM4FUZZ spends $0.15 per project, and all queries take 19 seconds to be answered.

6.7 Extending to Traditional Software

To demonstrate that the ideas in LLM4FUZZ are also feasible for general software, we have adapted the algorithms of LLM4FUZZ for AFL++ [24]. The scheduler biases AFL++ towards test cases with higher scores, promoting the exploration of the more complex and potentially vulnerable code paths identified by the LLM. Figure 7 demonstrates that AFL++ with LLM guidance gains more coverage than AFL++ alone on binutils programs. Future work can conduct more research on this topic.
7 Related Work

Smart Contract Testing. Prior work like Echidna [26], ILF [28], and Harvey [53] are fuzzers that find vulnerabilities triggered by a single input or a sequence of inputs. They are guided by test coverage or by other static and dynamic analyses. SMARTIAN [15] and ITYFUZZ [43] extend this prior work by using data flow and comparisons to schedule sequential mutations efficiently. However, accurate data-flow tracking scales poorly, and programs often have semantically meaningless data flows. There are also concolic testing tools like Mythril [20] and static analysis tools like Slither [23]. Yet, none of them is as performant as fuzzers like ITYFUZZ and SMARTIAN, as demonstrated in their evaluations. Recent work [47] combines LLMs with static analysis for smart contracts, leveraging static analysis to filter LLM vulnerability reports for false positives. Compared to fuzzing, it has more false negatives and has trouble detecting the more subtle logical vulnerabilities that could lead to fund loss.

Stateful Fuzzing. ITYFUZZ [43] and SMARTIAN [15], as described previously, are complex stateful fuzzers. In traditional software testing, multiple stateful fuzzers [12][39][40][21][10] have been proposed. For instance, IJON [10] exposes the state to guide fuzzing towards new program states. NYX [39] employs a rapid snapshotting approach at the operating-system level, while its extension, NYX-NET [40], utilizes these snapshotting methods to revert to prior states for efficient stateful fuzzing of network applications. Comparably, Dong et al. [21] leverage incremental snapshotting of the Android OS when fuzzing Android applications so that the fuzzer can travel back to a previous application state. However, none of these stateful fuzzers has stateful guidance as complex as that of ITYFUZZ or SMARTIAN, due to the cost and complexity of tracking runtime information. Techniques proposed in LLM4FUZZ can potentially be integrated into all of those stateful fuzzers to reduce the overhead of exploring uninteresting states or sequences of inputs.

LLM and Fuzzing. Recent research activities have primarily focused on using LLMs to generate a corpus for fuzzing [27][29][9]. They let an LLM generate interesting test cases and provide them as seeds for the fuzzer, either before the fuzzer starts or during runtime. Deng et al. [17][18] and Xia et al. [54] have also treated the LLM itself as a fuzzer. They instruct the LLM to generate structural inputs with respect to programs and mutate them during fuzzing. They observe that an LLM is itself a structural fuzzer that can outcompete traditional structural fuzzers that generate inputs using grammars. Other research work [31][57][2] also leverages LLMs to generate fuzzer drivers and invariants that allow fuzzers to identify deeper logical issues. Lastly, LLMs have been widely used in program debugging and repair [22][33][55][30][32]. Yet, none has integrated an LLM with a fuzzer as LLM4FUZZ does, to create a prioritization scheme that facilitates fuzzer exploration.

Guided Fuzzing. MTFuzz [41] uses multi-task neural networks to compact input embeddings across diverse tasks, enabling the fuzzer to focus mutations on influential bytes that can increase coverage. K-Scheduler [42] improves efficiency by building an edge-horizon graph from the CFG and using Katz centrality to prioritize seeds reaching more unvisited edges. GreyOne [25] leverages lightweight fuzzing-driven taint inference to guide mutations towards input bytes affecting more untouched branches. LLM4FUZZ can be used in conjunction with these techniques.

8 Discussions

Additional Consumers. We recognize that an additional consumer of the LLM-generated complexity and vulnerability likelihood could be concolic execution. By solving the branches that are more complex and more likely to lead to vulnerabilities, there could be more synergy between fuzzers, in charge of exploring simple program paths, and concolic execution, exploring complex program paths.

Fine-Tuning. Fine-tuning the LLMs on smart contract code and known vulnerabilities would enhance their specialized knowledge in this domain. With greater exposure to Solidity programming patterns, historical exploits, execution traces, and other smart contract data, the model could build a more comprehensive understanding of code semantics, security flaws, and state transitions. This would empower LLMs to conduct more nuanced analysis when guiding a fuzzer.
Consensus Reaching Among LLMs. In our experiment, we used only one LLM. We recognize that multiple LLMs are available that can give accurate responses to queries from LLM4FUZZ. By reaching a consensus (e.g., averaging the responses) among multiple LLMs, the producers can likely generate values that are less likely to be biased.

Small Variations on Prompts. We have not attempted to find optimized prompts for each producer. Future work can leverage techniques like [56] to create better prompts.

Figure 7: binutils coverage ((a) edge coverage of readelf; (b) edge coverage of nm-new; x-axis: Time, 0h-48h; y-axis: Edge Coverage; AFL++ vs. LLM + AFL++).

9 Ethical Considerations

We have disclosed all vulnerabilities to the project owners. Due to the nature of blockchain, it is impossible to update smart contract code once deployed. Project owners are encouraged to whitehat-hack their own projects.

10 Conclusion

This paper presents LLM4FUZZ, a methodology that leverages large language models to guide and prioritize the fuzzing of smart contracts. By extracting code attributes and crafting specialized prompts, LLM4FUZZ produces metrics for code complexity, vulnerability likelihood, invariant relations, and input sequences. These metrics are encoded into the corpus scheduler of the fuzzer to focus testing on the more valuable code regions. Evaluations show that LLM4FUZZ achieves substantially higher coverage and detects more vulnerabilities than state-of-the-art tools, including finding critical flaws in live contracts. LLM4FUZZ overcomes traditional fuzzing limitations by harnessing LLMs' semantic reasoning power to explore smart contracts more efficiently. As blockchain adoption increases, LLM-guided fuzzing can provide impactful assistance in securing smart contracts.

References

[1] AES attack explanation. https://twitter.com/BlockSecTeam/status/1600442137811689473.

[2] AI-powered fuzzing: Breaking the bug hunting barrier. https://security.googleblog.com/2023/08/ai-powered-fuzzing-breaking-bug-hunting.html.

[3] All chains TVL. https://defillama.com/chains.

[4] ERC-20 token standard. https://ethereum.org/en/developers/docs/standards/tokens/erc-20/.

[5] GPT-4. https://openai.com/research/gpt-4.

[6] Llama 2 70B model. https://ai.meta.com/llama/.

[7] Power schedules. https://aflplus.plus/docs/power_schedules/.

[8] Uniswap-v2 contract walk-through. https://ethereum.org/en/developers/tutorials/uniswap-v2-annotated-code/.

[9] Joshua Ackerman and George Cybenko. Large language models for fuzzing parsers (registered report).
InPro- complexity,vulnerabilitylikelihood,invariantrelations,and ceedings of the 2nd International Fuzzing Workshop, inputsequences.Thesemetricsareencodedintothecorpus pages31–38,2023. schedulerofthefuzzertofocustestingonmorevaluablecode regions.EvaluationsshowLLM4FUZZachievessubstantially [10] Cornelius Aschermann,Sergej Schumilo,Ali Abbasi, highercoverageanddetectsmorevulnerabilitiesthanstate-of- andThorstenHolz. Ijon:Exploringdeepstatespaces the-arttools,includingfindingcriticalflawsinlivecontracts. viafuzzing. In2020IEEESymposiumonSecurityand LLM4FUZZovercomestraditionalfuzzinglimitationsbyhar- Privacy(SP),pages1597–1612.IEEE,2020. nessingLLMs’semanticreasoningpowertoexploresmart contractsmoreefficiently.Asblockchainadoptionincreases, [11] NicolaAtzei,MassimoBartoletti,andTizianaCimoli. LLM-guidedfuzzingcanprovideimpactfulassistanceinse- Asurveyofattacksonethereumsmartcontracts(sok). curingsmartcontracts. In Principles ofSecurityandTrust: 6thInternational Conference,POST2017,HeldasPartoftheEuropean JointConferencesonTheoryandPracticeofSoftware, References ETAPS2017,Uppsala,Sweden,April22-29,2017,Pro- ceedings6,pages164–186.Springer,2017. [1] AES Attack Explanation. https://twitter.com/ BlockSecTeam/status/1600442137811689473. [12] JinshengBa,MarcelBöhme,ZahraMirzamomen,and AbhikRoychoudhury. Statefulgreyboxfuzzing. In31st [2] AI-PoweredFuzzing:BreakingtheBugHuntingBar- USENIX Security Symposium (USENIX Security 22), rier. https://security.googleblog.com/2023/ pages3255–3272,2022. 13[13] MarcelBöhme,Van-ThuanPham,andAbhikRoychoud- Liu,Shwetak Patel,and Vikram Iyer. Exploring and hury. Coverage-basedgreyboxfuzzingasmarkovchain. characterizing large language models for embedded InProceedingsofthe2016ACMSIGSACConferenceon system development and debugging. arXiv preprint ComputerandCommunicationsSecurity,CCS’16,page arXiv:2307.03817,2023. 1032–1043,NewYork,NY,USA,2016.Associationfor [23] JosselinFeist,GustavoGrieco,andAlexGroce. Slither: ComputingMachinery. 
astaticanalysisframeworkforsmartcontracts. In2019 [14] ChongChen,JianzhongSu,JiachiChen,YanlinWang, IEEE/ACM2ndInternationalWorkshoponEmerging TingtingBi,YanliWang,XingweiLin,TingChen,and TrendsinSoftwareEngineeringforBlockchain(WET- ZibinZheng. Whenchatgptmeetssmartcontractvul- SEB),pages8–15.IEEE,2019. nerability detection: How farare we? arXiv preprint arXiv:2309.05520,2023. [24] AndreaFioraldi,DominikMaier,HeikoEißfeldt,and MarcHeuse. {AFL++}:Combiningincrementalsteps [15] Jaeseung Choi, Doyeon Kim, Soomin Kim, Gustavo of fuzzing research. In 14th USENIX Workshop on Grieco,AlexGroce,andSangKilCha. Smartian:En- OffensiveTechnologies(WOOT20),2020. hancingsmartcontractfuzzingwithstaticanddynamic data-flowanalyses. In202136thIEEE/ACMInterna- [25] ShuitaoGan,ChaoZhang,PengChen,BodongZhao,Xi- |
tionalConferenceonAutomatedSoftwareEngineering aojunQin,DongWu,andZuoningChen.{GREYONE}: (ASE),pages227–239.IEEE,2021. Dataflowsensitivefuzzing. In29thUSENIXsecurity symposium (USENIX Security 20),pages 2577–2594, [16] Isaac David, Liyi Zhou, Kaihua Qin, Dawn Song, 2020. Lorenzo Cavallaro,and ArthurGervais. Do you still need a manual smart contract audit? arXiv preprint [26] GustavoGrieco,WillSong,ArturCygan,JosselinFeist, arXiv:2306.12338,2023. and Alex Groce. Echidna: effective,usable,and fast fuzzingforsmartcontracts. InProceedingsofthe29th [17] Yinlin Deng, Chunqiu Steven Xia, Haoran Peng, ACMSIGSOFTInternationalSymposiumonSoftware ChenyuanYang,andLingmingZhang. Fuzzingdeep- TestingandAnalysis,pages557–560,2020. learning libraries via large language models. arXiv preprintarXiv:2212.14834,2022. [27] Qiuhan Gu. Llm-based code generation method for golangcompilertesting. 2023. [18] Yinlin Deng, Chunqiu Steven Xia, Haoran Peng, ChenyuanYang,andLingmingZhang. Largelanguage [28] Jingxuan He, Mislav Balunovic´, Nodar Ambroladze, modelsarezero-shotfuzzers:Fuzzingdeep-learningli- Petar Tsankov,and Martin Vechev. Learning to fuzz brariesvialargelanguagemodels. InProceedingsof fromsymbolicexecutionwithapplicationtosmartcon- the32ndACMSIGSOFTInternationalSymposiumon tracts. InProceedingsofthe2019ACMSIGSACConfer- SoftwareTestingandAnalysis,pages423–435,2023. enceonComputerandCommunicationsSecurity,pages 531–548,2019. [19] MonikaDiAngeloandGernotSalzer. Asurveyoftools foranalyzingethereumsmartcontracts. In2019IEEE [29] Jie Hu, Qian Zhang, and Heng Yin. Augmenting internationalconferenceondecentralizedapplications greybox fuzzing with generative ai. arXiv preprint andinfrastructures(DAPPCON),pages69–78.IEEE, arXiv:2306.06782,2023. 2019. [30] MatthewJin,SyedShahriar,MicheleTufano,XinShi, [20] MonikaDiAngeloandGernotSalzer. Asurveyoftools ShuaiLu,NeelSundaresan,andAlexeySvyatkovskiy. foranalyzingethereumsmartcontracts. In2019IEEE Inferfix: End-to-endprogram repairwithllms. 
arXiv internationalconferenceondecentralizedapplications preprintarXiv:2303.07263,2023. andinfrastructures(DAPPCON),pages69–78.IEEE, [31] RahulKande,HammondPearce,BenjaminTan,Bren- 2019. danDolan-Gavitt,ShailjaThakur,RameshKarri,and [21] ZhenDong,MarcelBöhme,LuciaCojocaru,andAbhik JeyavijayanRajendran. Llm-assistedgenerationofhard- Roychoudhury. Time-travel testing of android apps. wareassertions. arXivpreprintarXiv:2306.14027,2023. In Proceedings of the ACM/IEEE 42nd International [32] Sungmin Kang, Juyeon Yoon, and Shin Yoo. Large ConferenceonSoftwareEngineering,pages481–492, language models are few-shot testers: Exploring llm- 2020. basedgeneral bug reproduction. In 2023 IEEE/ACM [22] Zachary Englhardt, Richard Li, Dilini Nissanka, Zhi- 45thInternationalConferenceonSoftwareEngineering hanZhang,GirishNarayanswamy,JosephBreda,Xin (ICSE),pages2312–2323.IEEE,2023. 14[33] Haonan Li, Yu Hao, Yizhuo Zhai, and Zhiyun Qian. [43] ChaofanShou,ShangyinTan,andKoushikSen. Ityfuzz: The hitchhiker’s guide to program analysis: A jour- Snapshot-basedfuzzerforsmartcontract. InProceed- ney with large language models. arXiv preprint ingsofthe32ndACMSIGSOFTInternationalSympo- arXiv:2308.00245,2023. siumonSoftwareTestingandAnalysis,pages322–333, 2023. [34] YuweiLi,ShoulingJi,ChenyangLv,YuanChen,Jian- hai Chen, Qinchen Gu, and Chunming Wu. V-fuzz: [44] Sunbeom So, Seongjoon Hong, and Hakjoo Oh. Vulnerability-oriented evolutionary fuzzing. arXiv {SmarTest}:Effectivelyhuntingvulnerabletransaction preprintarXiv:1901.01142,2019. sequencesinsmartcontractsthroughlanguage{Model- Guided}symbolicexecution. In30thUSENIXSecurity [35] AhmedAfifMonrat,OlovSchelén,andKarlAndersson. Symposium(USENIXSecurity21),pages1361–1378, Asurveyofblockchainfromtheperspectivesofappli- 2021. cations, challenges,and opportunities. IEEE Access, 7:117134–117151,2019. [45] ViktorijaStepanovaandIngarsErinš. Reviewofdecen- , tralizedfinanceapplicationsandtheirtotalvaluelocked. 
[36] SebastianÖsterlund,KavehRazavi,HerbertBos,and TEMJournal,10(1),2021. CristianoGiuffrida.{ParmeSan}:Sanitizer-guidedgrey- box fuzzing. In 29th USENIX Security Symposium [46] AndréStorhaug,JingyueLi,andTianyuanHu. Efficient (USENIXSecurity20),pages2289–2306,2020. avoidance of vulnerabilities in auto-completed smart contractcodeusingvulnerability-constraineddecoding. [37] RohanPadhye,CarolineLemieux,KoushikSen,Lau- arXivpreprintarXiv:2309.09826,2023. rentSimon,andHayawardhVijayakumar. Fuzzfactory: domain-specificfuzzingwithwaypoints.Proceedingsof [47] YuqiangSun,DaoyuanWu,YueXue,HanLiu,Haijun theACMonProgrammingLanguages,3(OOPSLA):1– Wang,ZhengziXu,XiaofeiXie,andYangLiu. When 29,2019. gptmeetsprogramanalysis:Towardsintelligentdetec- tion ofsmartcontractlogicvulnerabilities in gptscan. [38] SeemantaSaha,LaboniSarker,MdShafiuzzaman,Chao- |
arXivpreprintarXiv:2308.03314,2023. fanShou,AlbertLi,GaneshSankaran,andTevfikBultan. Rarepathguidedfuzzing. InProceedingsofthe32nd [48] PalinaTolmach,YiLi,Shang-WeiLin,YangLiu,and ACMSIGSOFTInternationalSymposiumonSoftware ZengxiangLi. Asurveyofsmartcontractformalspec- Testing and Analysis, ISSTA 2023, page 1295–1306, ification and verification. ACM Computing Surveys NewYork,NY,USA,2023.AssociationforComputing (CSUR),54(7):1–38,2021. Machinery. [49] Anna Vacca, Andrea Di Sorbo, Corrado A Visaggio, [39] Sergej Schumilo,Cornelius Aschermann,Ali Abbasi, andGerardoCanfora. Asystematicliteraturereviewof SimonWörner,andThorstenHolz. Nyx:Greyboxhy- blockchainandsmartcontractdevelopment:Techniques, pervisorfuzzingusingfastsnapshotsandaffinetypes. tools, and open challenges. Journal of Systems and In30thUSENIXSecuritySymposium(USENIXSecurity Software,174:110891,2021. 21),pages2597–2614,2021. [40] SergejSchumilo,CorneliusAschermann,AndreaJem- [50] Yanhao Wang, Xiangkun Jia, Yuwei Liu, Kyle Zeng, mett,AliAbbasi,andThorstenHolz. Nyx-net:network TiffanyBao,DinghaoWu,andPuruiSu. Notallcov- fuzzing with incremental snapshots. In Proceedings erage measurements are equal: Fuzzing by coverage oftheSeventeenthEuropeanConferenceonComputer accountingforinputprioritization. InNDSS,2020. Systems,pages166–180,2022. [51] Cheng Wen, Haijun Wang, Yuekang Li, Shengchao [41] Dongdong She,Rahul Krishna,Lu Yan,Suman Jana, Qin,YangLiu,ZhiwuXu,HongxuChen,XiaofeiXie, andBaishakhiRay. Mtfuzz:fuzzingwithamulti-task Geguang Pu, and Ting Liu. Memlock: Memory us- neuralnetwork. InProceedingsofthe28thACMjoint ageguidedfuzzing. InProceedingsoftheACM/IEEE meetingonEuropeansoftwareengineeringconference 42ndInternationalConferenceonSoftwareEngineer- andsymposiumonthefoundationsofsoftwareengineer- ing,pages765–777,2020. ing,pages737–749,2020. [52] Sam Werner, Daniel Perez, Lewis Gudgeon, Ariah [42] DongdongShe,AbhishekShah,andSumanJana. 
Effec- Klages-Mundt, Dominik Harz, and William Knotten- tiveseedschedulingforfuzzingwithgraphcentrality belt. Sok:Decentralizedfinance(defi). InProceedings analysis. In 2022 IEEE Symposium on Security and ofthe4thACMConferenceonAdvancesinFinancial Privacy(SP),pages2194–2211.IEEE,2022. Technologies,pages30–46,2022. 15[53] Valentin Wüstholz and Maria Christakis. Harvey: A greyboxfuzzerforsmartcontracts. InProceedingsof the28thACMJointMeetingonEuropeanSoftwareEngi- neeringConferenceandSymposiumontheFoundations ofSoftwareEngineering,pages1398–1409,2020. [54] Chunqiu Steven Xia, Matteo Paltenghi, Jia Le Tian, Michael Pradel, and Lingming Zhang. Universal fuzzing via large language models. arXiv preprint arXiv:2308.04748,2023. [55] Chunqiu Steven Xia and Lingming Zhang. Conver- sational automated program repair. arXiv preprint arXiv:2301.13246,2023. [56] JDZamfirescu-Pereira,RichmondYWong,BjoernHart- mann,andQianYang. Whyjohnnycan’tprompt:how non-aiexpertstry(andfail)todesignllmprompts. In Proceedings of the 2023 CHI Conference on Human FactorsinComputingSystems,pages1–21,2023. [57] CenZhang,MingqiangBai,YaowenZheng,YetingLi, XiaofeiXie,YuekangLi,WeiMa,LiminSun,andYang Liu. Understandinglargelanguagemodelbasedfuzz driver generation. arXiv preprint arXiv:2307.12469, 2023. [58] WayneXinZhao,KunZhou,JunyiLi,TianyiTang,Xi- aoleiWang,YupengHou,YingqianMin,BeichenZhang, JunjieZhang,ZicanDong,etal. Asurveyoflargelan- guagemodels. arXivpreprintarXiv:2303.18223,2023. [59] XiaogangZhu,ShengWen,SeyitCamtepe,andYang Xiang.Fuzzing:asurveyforroadmap.ACMComputing Surveys(CSUR),54(11s):1–36,2022. 16 |
arXiv:2401.13169v2 [cs.CR] 8 Feb 2024

ReposVul: A Repository-Level High-Quality Vulnerability Dataset

ICSE 2024, April 2024, Lisbon, Portugal

Xinchen Wang★, Harbin Institute of Technology, Shenzhen, China, 200111115@stu.hit.edu.cn
Ruida Hu★, Harbin Institute of Technology, Shenzhen, China, 200111107@stu.hit.edu.cn
Cuiyun Gao∗, Harbin Institute of Technology, Shenzhen, China, gaocuiyun@hit.edu.cn
Xin-Cheng Wen, Harbin Institute of Technology, Shenzhen, China, xiamenwxc@foxmail.com
Yujia Chen, Harbin Institute of Technology, Shenzhen, China, yujiachen@stu.hit.edu.cn
Qing Liao, Harbin Institute of Technology, Shenzhen, China, liaoqing@hit.edu.cn

★These authors contribute to the work equally and are co-first authors of the paper.
∗Corresponding author. The author is also affiliated with Peng Cheng Laboratory and Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies.

ABSTRACT
Open-Source Software (OSS) vulnerabilities bring great challenges to the software security and pose potential risks to our society. Enormous efforts have been devoted into automated vulnerability detection, among which deep learning (DL)-based approaches have proven to be the most effective. However, the performance of the DL-based approaches generally relies on the quantity and quality of labeled data, and the current labeled data present the following limitations: (1) Tangled Patches: Developers may submit code changes unrelated to vulnerability fixes within patches, leading to tangled patches. (2) Lacking Inter-procedural Vulnerabilities: The existing vulnerability datasets typically contain function-level and file-level vulnerabilities, ignoring the relations between functions, thus rendering the approaches unable to detect the inter-procedural vulnerabilities. (3) Outdated Patches: The existing datasets usually contain outdated patches, which may bias the model during training.

To address the above limitations, in this paper, we propose an automated data collection framework and construct the first repository-level high-quality vulnerability dataset named ReposVul. The proposed framework mainly contains three modules: (1) A vulnerability untangling module, aiming at distinguishing vulnerability-fixing related code changes from tangled patches, in which the Large Language Models (LLMs) and static analysis tools are jointly employed. (2) A multi-granularity dependency extraction module, aiming at capturing the inter-procedural call relationships of vulnerabilities, in which we construct multiple-granularity information for each vulnerability patch, including repository-level, file-level, function-level, and line-level. (3) A trace-based filtering module, aiming at filtering the outdated patches, which leverages the file path trace-based filter and commit time trace-based filter to construct an up-to-date dataset.

The constructed repository-level ReposVul encompasses 6,134 CVE entries representing 236 CWE types across 1,491 projects and four programming languages. Thorough data analysis and manual checking demonstrate that ReposVul is high in quality and alleviates the problems of tangled and outdated patches in previous vulnerability datasets.

CCS CONCEPTS
• Software and its engineering → Software defect analysis.

KEYWORDS
Open-Source Software, Software Vulnerability Datasets, Data Quality

1 INTRODUCTION
In recent years, with the increasing size and complexity of Open-Source Software (OSS), the impact of OSS vulnerabilities has also amplified and can cause great losses to our society. For example, Cisco discovered a security vulnerability in the WebUI, identified as CVE-2023-20198 [1] in 2023. This vulnerability allowed unauthorized remote attackers to gain elevated privileges. Currently, over 41,000 related devices have been compromised, resulting in great losses for enterprises. Identifying vulnerabilities in an accurate and timely manner is beneficial for mitigating the potential risks, and has gained intense attention from industry and academia. The existing vulnerability detection methods can be coarsely grouped into two categories: program analysis-based methods [2–5] and deep learning (DL)-based methods [6–9], among which DL-based methods have proven to be more effective. Despite the success of the DL-based methods, their performance tends to be limited by the trained vulnerability datasets. For example, Croft et al. [10] find that the widely-used vulnerability datasets such as Devign [11] and BigVul [12] contain noisy, incomplete and outdated data. The low-quality data may bias the model training and evaluation process. Therefore, a high-quality real-world vulnerability dataset is important yet under-explored for the vulnerability detection task. In the paper, we focus on building the high-quality dataset by mitigating the following limitations of the existing datasets:

(1) Tangled Patches: Vulnerability patches may contain vulnerability-fixing unrelated code changes, resulting in tangled patches. Existing datasets [11–13] generally consider all the code changes in one patch submission related to the vulnerabilities, introducing natural data noise. For the example shown in Figure 1(a), the patch from CVE-2012-0030 includes the code change about path modification, in which the request path parameter of the function webob.Request.blank() has been changed (Lines 2-3). Such vulnerability-unrelated changes may be concerned with code refactoring or new feature implementation, and are hard to be distinguished. Therefore, identifying vulnerability-fixing related files from multiple files in one patch presents a challenge.

(2) Lacking Inter-procedural Vulnerabilities: Vulnerabilities in real-world scenarios usually involve calls between multiple files and functions, whereas individual functions alone are not necessarily vulnerable. Existing datasets [11, 13] mainly focus on function-level granularity, ignoring the call information. Figure 1(b) illustrates an example of inter-procedural vulnerability (CVE-2017-6308). The return value of the function checked_xmalloc() is passed by the function xmalloc() (Line 3) in this example. Given that the parameter of size could overflow, there is a potential risk of triggering CWE-190 (Integer Overflow or Wraparound). However, the functions checked_xmalloc() and xmalloc() are flawless themselves, while existing labeling methods may mark them as vulnerable. Models trained on the vulnerable data without considering inter-procedural call information would be biased, limiting their performance in practical scenarios.

(3) Outdated Patches: Patches may introduce new vulnerabilities and become outdated, while current datasets do not consider the timeliness of the patches. Figure 1(c) shows an original patch and its child patch both from CVE-2019-19927. In the original patch, developers add stronger constraints to the existing conditional statement to avoid CWE-125 (Out-of-bounds Read) [14] (Lines 6-7). However, in the next immediate patch, the loop statement after the conditional statement is changed (Lines 7-8), with the commit message stating "fix start page for huge page check". Clearly, this additional fix is due to the incompleteness of the original patch, and thus the original patch is outdated and should be filtered out.

(a) An example of the tangled patch from CVE-2012-0030:
1 def test_keypair_list(self):
2 -     req = webob.Request.blank('/v1.1/123/os-keypairs')
3 +     req = webob.Request.blank('/v1.1/fake/os-keypairs')
4     res = req.get_response(fakes.wsgi_app())
5     self.assertEqual(res.status_int, 200)
6     res_dict = json.loads(res.body)
7     response = {'keypairs': [{'keypair': fake_keypair('FAKE')}]}
8     self.assertEqual(res_dict, response)
9     ...

(b) An example of the inter-procedural vulnerability from CVE-2017-6308:
1 void* checked_xmalloc(size_t size){
2     alloc_limit_assert("checked_xmalloc", size);
3     return xmalloc(size);
4 }
5
6 void* xmalloc(size_t size){
7     void *ptr = malloc(size);
8     if(!ptr && (size != 0)){
9         perror("xmalloc: Memory allocation failure");
10        abort();
11    }
12    return ptr;
13 }

(c) An example of the outdated patch from CVE-2019-19927.
Original patch:
    Commit message: "fix out-of-bounds read in ttm_put_pages() v2"
    Commit ID: a66477b0efe511d98dde3e4aaeb189790e6f0a39
    Parent ID: d47703d43ecaa9189d70fb5d151a6883cc44afd3
    File Path: drivers/gpu/drm/ttm/ttm_page_alloc.c
    6 -   if(!(flags & TTM_PAGE_FLAG_DMA32)) {
    7 +   if (!(flags & TTM_PAGE_FLAG_DMA32) && (npages - i) >= HPAGE_PMD_NR) {
    8         for (j = 0; j < HPAGE_PMD_NR; ++j)
    9             if (p++ != pages[i + j])
    10                break;
    11        ...
    12    }
Child patch:
    Commit message: "fix start page for huge page check in ttm_put_pages()"
    Commit ID: ac1e516d5a4c56bf0cb4a3dfc0672f689131cfd4
    Parent ID: a66477b0efe511d98dde3e4aaeb189790e6f0a39
    File Path: drivers/gpu/drm/ttm/ttm_page_alloc.c
    6     if (!(flags & TTM_PAGE_FLAG_DMA32) && (npages - i) >= HPAGE_PMD_NR) {
    7 -       for (j = 0; j < HPAGE_PMD_NR; ++j)
    8 +       for (j = 1; j < HPAGE_PMD_NR; ++j)
    9         if (p++ != pages[i + j])
    10            break;
    11        ...
    12    }

Figure 1: Examples for illustrating the challenges of existing datasets. Lines highlighted in green denote added content, red indicates deleted content, yellow represents commit information, and blue identifies the caller and callee.

In this paper, we propose an automated data collection framework and construct a repository-level high-quality vulnerability dataset named ReposVul to address the aforementioned limitations. Our framework consists of three modules: ① A vulnerability untangling module: We propose to integrate the decisions of Large Language Models (LLMs) and static analysis tools to distinguish the vulnerability-fixing related files within the patches, given their strong contextual understanding capability and domain knowledge, respectively. ② A multi-granularity dependency extraction module: We extract the inter-procedural call relationships of vulnerabilities among the whole repository, aiming to construct multi-granularity information for each vulnerability patch, including file-level, function-level, and line-level information. ③ A trace-based filtering module: We first track the submission history of patches based on file paths and commit time. Through analyzing historical information on the patches, we then identify outdated patches by tracing their commit diffs.

In summary, our contributions can be outlined as follows:
(1) We introduce an automated data collection framework for obtaining vulnerability data. Our framework consists of a vulnerability untangling module to identify vulnerability-fixing related files within tangled patches, a multi-granularity dependency extraction module to construct inter-procedural vulnerabilities, and a trace-based filtering module to recognize outdated patches.
(2) ReposVul is the first repository-level vulnerability dataset, including large-scale CVE entries representing 236 CWE types across 1,491 projects and four programming languages with detailed multi-granularity patch information.
(3) Through manual checking and data analysis, ReposVul is high in quality and alleviates the limitations of the existing vulnerability datasets. We have publicly released the source code as well as ReposVul at: https://github.com/Eshe0922/ReposVul.

The remaining sections of this paper are organized as follows. Section 2 introduces the framework to collect ReposVul. Section 3 presents the evaluation and experimental results. Section 4 discusses the data application and limitations of ReposVul. Section 5 introduces the background of the OSS vulnerability datasets and detection methods. Section 6 concludes the paper.

2 FRAMEWORK
Figure 2 presents an overview of our data collection framework, which contains four procedures to construct the vulnerability dataset.

Table 1: Basic information for each entry in ReposVul, including vulnerability entry information, patch information, and related file information.

Features — Description
[Vulnerability Entry Information]
CVE-ID — The CVE that the entry belongs to
CWE-ID — The CWE that the entry belongs to
Language — The programming language of the CVE
Resource — Involved links of the CVE
CVE Description — The description of the CVE
Publish-Date — The publish date of the CVE
CVSS — The CVSS score of the CVE
CVE-AV — The attack vector of the CVE
CVE-AC — The attack complexity of the CVE
CVE-PR — The privileges required of the CVE
CVE-UI — The user interaction of the CVE
CVE-S — The scope of the CVE
CVE-C — The confidentiality of the CVE
CVE-I — The integrity of the CVE
CVE-A — The availability of the CVE
CWE Description — The description of the CWE
CWE Solution — Potential solutions of the CWE
CWE Consequence — Common consequences of the CWE
CWE Method — Detection methods of the CWE
[Patch Information]
Commit-ID — The commit id of the patch
Commit-Message — The commit message of the patch
Commit-Date — The commit date of the patch
Project — The project that the patch belongs to
Parent Patch — The parent patch of the patch
Child Patch — The child patch of the patch
URL — The API-URL of the patch
Html-URL — The Html-URL of the patch
[Related File Information]
File-Name — The name of the file
File-Language — The programming language of the file
Code-Before — The content of the file before fixes
Code-After — The content of the file after fixes
Code-Change — The code changes of the file
Html-URL — The Html-URL of the file

The framework begins with the raw data crawling step, designed to gather vulnerability entries and associated patches, resulting in an initial dataset. This dataset is then processed through the following three key modules to yield the final dataset ReposVul: (1) The vulnerability untangling module aims at automatically identifying vulnerability-fixing related files within tangled patches, by jointly considering the decisions from LLMs and static analysis tools. (2) The multi-granularity dependency extraction module extracts inter-procedural call relationships associated with vulnerabilities throughout the repository. (3) The trace-based filtering module tracks the submission history of patches based on file paths and commit time, aiming to analyze commit diffs corresponding to patches.

2.1 Raw Data Crawling
This phase aims to collect the extensive raw vulnerability data, with detailed entry information illustrated in Table 1. The creation of the initial dataset involves three steps: 1) crawling vulnerability entries from open-source databases, 2) fetching patches associated with the vulnerability entry from multiple platforms, and 3) obtaining detailed information on changed files involved in the patch.

2.1.1 Vulnerability Entry Collection. We collect open-source vulnerabilities from Mend [15], which involves both popular and under-the-radar community resources with extensive vulnerability entries. During the collection, we retrieve CVE entries in chronological order for the identification of outdated patches. We then store these entries in a structured format, as illustrated in the "Vulnerability Entry Information" part of Table 1. Each entry encompasses essential features such as the CVE-ID, CVE description, associated CWE-ID, and other relevant information.

2.1.2 Patch Collection. To comprehensively analyze each vulnerability entry, we collect the corresponding patches. For the majority of the projects, we collect these patches from GitHub [16] and record their Commit-ID and Commit-Message. Additionally, for two special projects, including Android and Chrome, we conduct patch collection on GoogleGit [17] and bugs.chromium [18], respectively, as some of their patches are not released on GitHub. Detailed patch-level information is summarized in Table 1.
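The patch-collection step can be sketched as follows. This is a minimal Python illustration of our own (not the authors' released code) that flattens a crawled GitHub commit payload into a Table 1-style record; `build_patch_record` and the sample payload are hypothetical, the commit metadata is copied from the Figure 1(c) example, and the date field is a placeholder.

```python
def build_patch_record(cve_id, cwe_id, commit_json):
    """Flatten a GitHub commit payload (as returned by the REST
    'get a commit' endpoint) into a ReposVul-style patch record.
    Field names follow Table 1; this helper is illustrative only."""
    return {
        "CVE-ID": cve_id,
        "CWE-ID": cwe_id,
        "Commit-ID": commit_json["sha"],
        "Commit-Message": commit_json["commit"]["message"],
        "Commit-Date": commit_json["commit"]["committer"]["date"],
        "Parent Patch": [p["sha"] for p in commit_json["parents"]],
        "Files": [
            # 'patch' holds the per-file unified diff when available.
            {"File-Name": f["filename"], "Code-Change": f.get("patch", "")}
            for f in commit_json["files"]
        ],
    }

# Sample payload standing in for an API response; commit metadata is the
# Figure 1(c) example from the paper, the date is a placeholder value.
sample = {
    "sha": "a66477b0efe511d98dde3e4aaeb189790e6f0a39",
    "commit": {
        "message": "fix out-of-bounds read in ttm_put_pages() v2",
        "committer": {"date": "1970-01-01T00:00:00Z"},  # placeholder
    },
    "parents": [{"sha": "d47703d43ecaa9189d70fb5d151a6883cc44afd3"}],
    "files": [{"filename": "drivers/gpu/drm/ttm/ttm_page_alloc.c"}],
}

record = build_patch_record("CVE-2019-19927", "CWE-125", sample)
```

Storing one flat record per patch makes the later modules (untangling, dependency extraction, filtering) simple per-record passes over the crawled data.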
2.1.3 Related File Collection. To extract vulnerability code snippets at file-level and repository-level, we download the entire repository of the parent patch and child patch associated with each patch using its unique Commit-ID. For each file in the patch, we retrieve its content before and after code changes. In Table 1, we present the details of each related file, including key features like File-Name, Code-Before, and Code-After.

2.2 Vulnerability Untangling Module
The vulnerability untangling module aims to remove vulnerability-fixing unrelated code changes from patches. We employ LLMs for evaluating the relevance between code changes and vulnerability fixes, and static analysis tools for checking vulnerabilities in the code changes, separately. Jointly considering their outputs, we determine whether a changed file is vulnerability-fixing related.

[Figure 2 depicts the pipeline: 1. Raw Data Crawling (CVE database, platforms, projects) → 2. Vulnerability Untangling ((1) LLMs evaluation with Tree-sitter and Tongyi, (2) static analysis tool checking, (3) joint decision) → 3. Multi-granularity Dependency Extraction (caller/callee extraction with cflow, Java-all-call-graph, and PyCG) → 4. Trace-based Filtering ((1) file path trace-based filter with a suffix-based filter and a dictionary recording the latest time, (2) commit time trace-based filter checking parent/child patches and assessing the overlap in changed files) → ReposVul.]
Figure 2: The architecture of our automatic data collection framework.

2.2.1 LLMs Evaluation. LLMs possess strong contextual understanding capability and extensive applicability across programming languages. The adaptability of LLMs to different programming languages also ensures that LLMs-based evaluation can be conducted across various code syntaxes and structures. We opt for Tongyi [19] to evaluate the relevance between code changes and vulnerability fixes effectively, considering its accessible free API and support for long context inputs.

To effectively leverage LLMs' contextual understanding capability, we craft a task-specific prompt to evaluate the relevance of code changes to the corresponding vulnerability fixes, as depicted in Figure 3. This prompt consists of several components: (1) System prompt: The LLM acts as an expert, analyzing code vulnerabilities and their corresponding fixes. (2) Contextual prompt: The prompt involves four types of contextual information. ① CWE description: Based on the CWE-ID associated with each patch, we provide a brief vulnerability description. ② CWE solution: We offer the recommended solutions according to the CWE-ID, which helps LLMs assess the alignment of code changes with the vulnerability solutions. ③ Commit message: The commit messages from patches aid LLMs in comprehending the purpose of the fixes. ④ Function: We utilize the Tree-sitter tool [20] to extract functions affected by the code changes in the file. This prompt contextualizes the code changes, aiding in a precise analysis. (3) Input Code Change: The specific code changes made in the file. (4) Answer prompt: The LLMs evaluate the relevance of code changes in the file to vulnerability fixes and output "YES" or "NO", indicating whether the code changes are vulnerability-fixing related. Given the prompt with detailed vulnerability and patch information, LLMs can effectively determine the relevance of code changes to vulnerability fixes based on the code context.

(1) System prompt:
You are now an expert in code vulnerability and patch fixes.
(2) Contextual prompt:
[CWE description] The product writes data past the end, or before the beginning, of the intended buffer.
[CWE solution] Double check that the buffer is as large as specified. Check buffer boundaries if accessing the buffer in a loop.
[Commit message] This patch solves the buffer overflow problem.
[Function]
1 char *string_crypt(const char *key, const char *salt){
2     ...
3     static constexpr size_t maxSaltLength = 123;
4     char paddedSalt[maxSaltLength + 1];
5     paddedSalt[0] = paddedSalt[maxSaltLength] = '\0';
6     memset(&paddedSalt[1], '$', maxSaltLength - 1);
7     memcpy(paddedSalt, salt,
8         std::min(maxSaltLength, saltLen));
9     paddedSalt[saltLen] = '\0';
10    ...}
(3) Input Code Change:
1 @@ -9,1 +9,1
2 - paddedSalt[saltLen] = '\0';
3 + paddedSalt[std::min(maxSaltLength, saltLen)] = '\0';
(4) Answer prompt:
Whether the code changes in the file related to vulnerability fixes?

Figure 3: A sample prompt for LLMs to evaluate the relevance of code changes in one file to the vulnerability fixes.

2.2.2 Static Analysis Tool Checking. Besides LLMs, we also employ static analysis tools for vulnerability checking. Static analysis tools detect vulnerabilities in code by extracting source code models [21] and employing diverse vulnerability rules, ensuring high code coverage and low false-negative rates. We employ various static analysis tools to check for vulnerabilities in code changes, which integrate the vulnerability rules and expert knowledge from different tools.
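The tool-checking step boils down to asking whether any line flagged by the static analyzers in the before-fixing file is among the lines the patch actually changes. The following is a minimal, hypothetical Python sketch of that overlap test (our own illustration, not the authors' implementation); `changed_lines_from_diff` and `is_fix_related` are invented names, and real analyzer output would first be parsed into plain line numbers.

```python
import re

def changed_lines_from_diff(diff_text):
    """Collect old-file line numbers that the patch deletes or replaces,
    by walking a unified diff and tracking the old-side line counter."""
    changed, old_ln = set(), 0
    for line in diff_text.splitlines():
        m = re.match(r"@@ -(\d+)", line)  # hunk header: reset old-side counter
        if m:
            old_ln = int(m.group(1))
        elif line.startswith("-") and not line.startswith("---"):
            changed.add(old_ln)  # removed/replaced line in the old file
            old_ln += 1
        elif not line.startswith("+"):
            old_ln += 1          # context line advances the old side only
    return changed

def is_fix_related(warning_lines, diff_text):
    """A file counts as vulnerability-fixing related only if some
    analyzer-flagged line overlaps the patch's changed lines."""
    return bool(set(warning_lines) & changed_lines_from_diff(diff_text))

# Toy diff modeled on the Figure 1(c) original patch.
diff = """@@ -6,3 +6,3 @@
 context
-if (!(flags & TTM_PAGE_FLAG_DMA32)) {
+if (!(flags & TTM_PAGE_FLAG_DMA32) && (npages - i) >= HPAGE_PMD_NR) {
 context"""
```

With this sketch, `is_fix_related([7], diff)` holds because the analyzer warning on old line 7 coincides with the replaced conditional, while a warning elsewhere in the file would not mark the change as fix-related.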
determinetherelevanceofcodechangestovulnerabilityfixesbased Wechoosefourwell-establishedtools,includingCppcheck[22], onthecodecontext. Flawfinder[23],RATS[24],andSemgrep[25],consideringtheir open-source availability and flexible configuration options, forReposVul:ARepository-LevelHigh-QualityVulnerabilityDataset ICSE,2024,April2024,Lisbon,Portugal Algorithm1:Repository-levelDependencyExtraction 1 void* checked_xmalloc(size_t size){ Input :𝑠𝑒𝑙𝐿𝑎𝑛𝑔;//selectedprogramminglanguage 2 size_t res; 𝑉𝑢𝑙𝐹𝑖𝑙𝑒;//allvulnerability-fixingrelatedfilesinaproject 3 if (check_mul_overflow(size, &res)) Output:𝑐𝑎𝑙𝑙𝑒𝑟𝑇𝑟𝑒𝑒;//callertreesintheproject 4 abort(); 𝑐𝑎𝑙𝑙𝑒𝑒𝑇𝑟𝑒𝑒;//calleetreesintheproject 5 alloc_limit_assert ("checked_xmalloc", res); 1 //initializethe𝑐𝑎𝑙𝑙𝑒𝑟𝑇𝑟𝑒𝑒and𝑐𝑎𝑙𝑙𝑒𝑒𝑇𝑟𝑒𝑒; 6 return xmalloc(size); 2 𝑐𝑎𝑙𝑙𝑒𝑟𝑅𝑜𝑜𝑡←∅;𝑐𝑎𝑙𝑙𝑒𝑒𝑅𝑜𝑜𝑡←∅; 7 } 3 foreach𝑓𝑖𝑙𝑒∈𝑉𝑢𝑙𝐹𝑖𝑙𝑒do 4 𝑝𝑟𝑜𝑗𝑒𝑐𝑡←getParentProject(𝑓𝑖𝑙𝑒); (a) Code Changes of Caller Function. 5 𝑎𝑙𝑙𝐹𝑖𝑙𝑒←getAllFile(𝑝𝑟𝑜𝑗𝑒𝑐𝑡,𝑠𝑒𝑙𝐿𝑎𝑛𝑔); 1 2 void* si zx em _a tllo rc es( ;size_t size){ 6 7 8 𝑉 fo𝑢 r𝑙 e𝑆 a 𝑐𝑛 c 𝑎h𝑖 𝑙𝑝 𝑙𝑒𝑠← 𝑟𝑛 𝑅𝑖𝑝 𝑜g 𝑜e ∈ 𝑡tS 𝑉 ←n 𝑢ip g𝑙𝑆 e( 𝑛𝑓 tN𝑖 𝑖𝑙 𝑝 e𝑒 ad) r; o Func(𝑠𝑛𝑖𝑝,𝑓𝑖𝑙𝑒); 3 4 if (c ah be oc rk t_ (m )u ;l_overflow(size, &res)) 9 𝑐𝑎𝑙𝑙𝑒𝑒𝑅𝑜𝑜𝑡←getOutFunc(𝑠𝑛𝑖𝑝,𝑓𝑖𝑙𝑒); 5 6 7 v io fid ( p!* p erp t rt r or r&= & (m ( "a s xl i ml z ao e lc l ! o( = cr :e 0s M)) e); m{ ory allocation failure"); 11 1 20 1 𝑐𝑐// 𝑎𝑎a 𝑙𝑙 𝑙𝑙d 𝑒𝑒o 𝑟 𝑒p 𝑇 𝑇t 𝑟 𝑟st 𝑒 𝑒a 𝑒 𝑒t . .i a ac d da d dn ( (a S Sl t ty a as t tis i ic cto T To o ol o os l lt E Eo x xe t tx r rt a ara c cc t tt o oc r ra ( (𝑐 𝑐ll 𝑎 𝑎er 𝑙 𝑙𝑙 𝑙a 𝑒 𝑒n 𝑟 𝑒d 𝑅 𝑅c 𝑜 𝑜a 𝑜 𝑜l 𝑡 𝑡le , ,e 𝑎 𝑎𝑙 𝑙c 𝑙 𝑙h 𝐹 𝐹a 𝑖 𝑖i 𝑙 𝑙n 𝑒 𝑒s , ,; 0 1) )) ); ; 8 abort(); 13 end 9 } 14 end 10 return ptr; 11 } Algorithm2:RootExtractionforCallerandCalleeTrees (b) Code Changes of Callee Function. 
Algorithm 2: Root Extraction for Caller and Callee Trees
     1  // extract the root of the function callee tree
     2  Function getOutFunc(snip, file):
     3      calleeFunc ← ∅
     4      // extract all the first-layer functions of file
     5      outFunc ← getTopLevelFunc(file)
     6      foreach func ∈ outFunc do
     7          if snip ∩ func ≠ ∅ then
     8              calleeFunc.add(snip)
     9          end
    10      end
    11      return calleeFunc
    12  // extract the root of the function caller tree
    13  Function getNearFunc(snip, file):
    14      // expand the search scope to the top-level functions
    15      callerSnip ← getOutFunc(snip, file)
    16      return getAPI(callerSnip, file)

Figure 4: Two solutions to fix the inter-procedural vulnerability. The tokens highlighted in green indicate code changes related to vulnerability fixing.

For different languages, we employ different static analysis tools based on the applicability of the tools. Specifically, for C and C++ files, we use all four tools. For Python files, our analysis is conducted using RATS and Semgrep, while for Java files, we only use Semgrep. For each file in a patch, we first employ static analysis tools to check whether the before-fixing file contains vulnerabilities. We then determine a file as vulnerability-fixing related only if the vulnerabilities detected in the before-fixing version¹ correspond to code changes in the patch.

2.2.3 Joint Decision. We combine the comprehension capabilities of LLMs and the domain knowledge of static analysis tools to identify vulnerability-fixing related files in one patch. The files are labeled as related or not only if both the LLMs and the static analysis tools reach the same decision. Files with conflicting results from the LLMs and static analysis tools are excluded from the subsequent processing.
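The joint decision rule can be sketched in a few lines. This is an illustration of the agreement logic only; the function name and the use of `None` to mark excluded files are assumptions.

```python
# Sketch of the joint decision: a file's code changes count as
# vulnerability-fixing related only when the LLM verdict and the static
# analysis verdict agree; conflicting files are excluded (None).
def joint_decision(llm_related, static_related):
    if llm_related == static_related:
        return llm_related
    return None  # conflicting result: exclude from subsequent processing

assert joint_decision(True, True) is True
assert joint_decision(False, False) is False
assert joint_decision(True, False) is None
```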
2.3 Multi-granularity Dependency Extraction Module
Existing vulnerability datasets mainly focus on function-level vulnerabilities, ignoring the rich context information. To mitigate this challenge, we extract the inter-procedural call relationships of vulnerabilities throughout the repository and construct multiple-granularity information for each vulnerability patch, including information at the repository level, file level, function level, and line level.

2.3.1 Pilot Experience. Figure 1(a) illustrates an example of an inter-procedural CWE-190 (Integer Overflow or Wraparound) vulnerability [26]. Figure 4(a) and Figure 4(b) show two real-world solutions implemented across subsequent commits, which involve changing the caller function and adjusting the callee function, respectively. Specifically, Figure 4(a) demonstrates the first solution, where the function check_mul_overflow() is used to verify whether the variable size is overly large before the xmalloc() function is called (Lines 3-4). Figure 4(b) illustrates the second solution, which involves a check within the xmalloc() function itself to assess the size of size (Lines 3-4). Therefore, for the code snippets, whether functioning as callers or callees, inter-procedural vulnerabilities can be introduced. Note that the code changes of Figure 4(a) do not cover the sixth line, which contains the inter-procedural vulnerability API, xmalloc(). Therefore, when capturing the inter-procedural vulnerability from a code snippet, we need to expand the search scope to its top-level functions.

2.3.2 Repository-level Dependency Extraction Algorithm. We develop a dependency extraction algorithm comprising two components: Repository-level Dependency Extraction (Algorithm 1) and Root Extraction for Caller and Callee Trees (Algorithm 2). Algorithm 1 serves as the primary process for dependency extraction, while Algorithm 2 specifically addresses the extraction of the roots of function caller and callee trees.

¹ We aggregate the vulnerabilities detected by the multiple static analysis tools during the processing.
ICSE 2024, April 2024, Lisbon, Portugal. Xinchen Wang★, Ruida Hu★, Cuiyun Gao∗, Xin-Cheng Wen, Yujia Chen, and Qing Liao.

Table 2: Multi-granularity information for each patch in ReposVul, including line-level, function-level, file-level, and repository-level information.

    Features                 Description
    Line-level
      Line                   The content of the code line
      Line-Number            The line number of the code line
    Function-level
      Function               The content of the function
      Target                 The vulnerability label of the function
    File-level
      LLMs-Evaluate          LLMs evaluation results
      Static-Check           Static analysis tools checking results
      Target                 The vulnerability label of the file
    Repository-level
      Inter-procedural Code  The content of the inter-procedural code
      Target                 The vulnerability label of the code snippet

Repository-level: For each vulnerability-fixing related file, we employ Algorithm 1 to extract the inter-procedural call relationships of vulnerabilities across the whole repository.

File-level: We consider the vulnerability-fixing related files before and after code changes as vulnerable and non-vulnerable, respectively. For vulnerability-fixing unrelated files, we consider both the files before and after code changes as non-vulnerable.

Function-level: For each function affected by code changes, if the function is defined in vulnerability-fixing related files, we consider the function before and after code changes as vulnerable and non-vulnerable, respectively; if the function is defined in vulnerability-fixing unrelated files, we consider both the function before and after code changes as non-vulnerable. For each function unaffected by code changes, we consider the function as non-vulnerable.

Line-level: We extract the line changes and their line numbers from code changes in the patch and leverage these line changes as a precise detection target.
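The function-level rules above reduce to a small decision table, sketched below. The function name and boolean flags are illustrative assumptions; only the rule logic comes from the text.

```python
# Sketch of the function-level labeling rules: the label depends on whether
# the function sits in a vulnerability-fixing related file, whether the code
# changes touch it, and whether we look at the before-fix version.
def label_function(in_related_file, affected_by_change, before_fix):
    if in_related_file and affected_by_change:
        return "vulnerable" if before_fix else "non-vulnerable"
    return "non-vulnerable"  # unrelated file, or untouched function

assert label_function(True, True, before_fix=True) == "vulnerable"
assert label_function(True, True, before_fix=False) == "non-vulnerable"
assert label_function(False, True, before_fix=True) == "non-vulnerable"
assert label_function(True, False, before_fix=True) == "non-vulnerable"
```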
Algorithm 1 presents our repository-level dependency extraction framework. It takes the selected programming language and all vulnerability-fixing related files in a project as input and produces two outputs: the function caller tree (callerTree) and the function callee tree (calleeTree). The algorithm works as follows: for each changed code snippet in a vulnerability-fixing related file, we initially identify the root functions for both caller and callee trees, named callerRoot and calleeRoot, respectively (Lines 8-9). Subsequently, a specific static analysis tool is used to build the caller trees from callerRoot and the callee trees from calleeRoot (Lines 11-12). For different programming languages, we use different tools: cflow [27] for C/C++, java-all-call-graph [28] for Java, and PyCG [29] for Python, considering that these tools are specifically designed for certain programming languages.

2.4 Trace-based Filtering Module
Patches may introduce vulnerabilities and become outdated, but the existing datasets do not distinguish outdated patches, posing a potential risk to data quality. In this module, we initially track the submission history of patches based on file paths and commit time. Through analyzing patches' historical information, we then recognize outdated patches by tracing their commit diffs.

2.4.1 File Path Trace-based Filter. We initially filter noise files according to their suffixes. For example, files such as description documentation (suffixed with .md, .rst), data (suffixed with .json, .svg), changelogs (suffixed with .ChangeLog), and output files (suffixed with .out) are generally unrelated to the functionality implementation. For the remaining files, we create a dictionary associating file paths with their most recent submission dates in our collected vulnerability patches. We then review the submission date of each file, comparing it to the latest date recorded in the dictionary. If the file's submission date is not the most recent, we retain the file for the subsequent commit time trace-based filter. Otherwise, we filter it out.
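The dictionary-based check described above can be sketched as follows, assuming ISO-formatted date strings so that string comparison orders dates; the suffix list comes from the examples in the text, and all names are illustrative.

```python
# Sketch of the file path trace-based filter: drop noise suffixes, map each
# path to its most recent submission date across the collected patches, and
# retain a file only if its own submission is NOT the latest for that path.
NOISE_SUFFIXES = (".md", ".rst", ".json", ".svg", ".ChangeLog", ".out")

def file_path_filter(files):
    """files: list of (path, submission_date) taken from vulnerability patches."""
    latest = {}
    for path, date in files:
        if not path.endswith(NOISE_SUFFIXES):
            latest[path] = max(latest.get(path, date), date)
    # keep entries superseded by a later submission of the same path
    return [(p, d) for p, d in files
            if not p.endswith(NOISE_SUFFIXES) and d < latest[p]]

files = [("ttm_page_alloc.c", "2018-08-01"),  # earlier submission, retained
         ("ttm_page_alloc.c", "2018-08-20"),  # most recent, filtered out
         ("README.md", "2018-08-20")]         # noise suffix, filtered out
file_path_filter(files)  # → [("ttm_page_alloc.c", "2018-08-01")]
```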
As shown in Figure 1(c), the file ttm_page_alloc.c still contains vulnerabilities after the first code changes. We create the entry in the dictionary based on the file's path "ttm_page_alloc.c" and its latest submission date. According to the file's submission dates, we consider the earlier submitted file (commit id a66477b) as vulnerable.

Algorithm 2 is designed to identify the roots of function caller and callee trees. For the root of the function callee tree extraction, we identify all the top-level functions within the file (Line 5). Next, we retrieve the top-level functions that overlap with the provided code snippet (Lines 6-10). For the root of the function caller tree extraction, we first expand the scope of API extraction to the top-level functions containing the given code snippet (Line 15). We then leverage the Tree-sitter tool [20] to extract all relevant APIs in the scope (Line 16).

2.3.3 Multi-granularity Code Snippet. To identify inter-procedural and intra-procedural vulnerabilities and facilitate the localization of vulnerabilities, we construct multiple-granularity information for each patch, including the repository level, file level, function level, and line level. As shown in Table 2, each patch contains multi-granularity information.

2.4.2 Commit Time Trace-based Filter. We first retrieve each original patch's parent patch and child patch based on its commit time. We then assess whether the files changed within the parent patch and child patch overlap with those changed by the original patch. If there is an overlap within the files retained by the file path trace-based filter, we recognize the original patch as outdated. As illustrated in Figure 1(c), the original patch (commit id a66477b) and its child patch (commit id ac1e516) change the same file ttm_page_alloc.c due to the incompleteness of the original patch. By comparing the overlap between the changed files in the original patch and its child patch, we can find the same changed file ttm_page_alloc.c. Since the file is retained by the file path trace-based filter, we consider the original patch as outdated.
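The overlap test of the commit time trace-based filter can be sketched with plain set operations. The function name and argument layout are assumptions; the rule itself follows the text above.

```python
# Sketch of the commit time trace-based filter: an original patch is flagged
# outdated when its changed files overlap those of its parent or child patch
# (its commit-time neighbors), restricted to files retained by the file path
# trace-based filter.
def is_outdated(original_files, parent_files, child_files, retained_files):
    overlap = (set(original_files) & set(parent_files)) | \
              (set(original_files) & set(child_files))
    return bool(overlap & set(retained_files))

# e.g. commit a66477b and its child ac1e516 both touch ttm_page_alloc.c,
# which survived the file path filter, so the original patch is outdated:
assert is_outdated(["ttm_page_alloc.c"], [], ["ttm_page_alloc.c"],
                   ["ttm_page_alloc.c"]) is True
assert is_outdated(["a.c"], ["b.c"], ["c.c"], ["a.c"]) is False
```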
3 EVALUATION AND EXPERIMENTAL RESULTS
In this section, we illustrate the advantages of ReposVul and focus on the following four Research Questions (RQs):

RQ1: What are the advantages of ReposVul compared to the existing vulnerability datasets?
RQ2: What is the quality of data labels in ReposVul?
RQ3: To what extent do the selection of LLMs and prompt design in the vulnerability untangling module affect the label quality?
RQ4: How does ReposVul perform in filtering outdated patches?

Answer to RQ1: Compared to the existing datasets, ReposVul incorporates multi-granularity code snippets and the most extensive range of CWE types. It also employs an effective method for labeling vulnerability data and provides annotations for outdated patches, along with other rich additional information.

3.1 RQ1: Advantages of ReposVul

3.2 RQ2: Label Quality of ReposVul
To answer RQ2, we first conduct experiments to assess the labeling method of ReposVul (i.e., the vulnerability untangling module in the data collection framework). Then, we compare the label quality of ReposVul with that of the previous datasets.
To answer RQ1, we compare ReposVul with six widely-used vulnerability detection datasets [11–13, 30–32] from several aspects, including the granularity of vulnerabilities, the number of CWE types, vulnerability labeling methods, outdated patch recognition, and additional information. As shown in Table 3, ReposVul has the following advantages compared to current datasets:

Multi-granularity information: Compared to other datasets that only contain vulnerabilities at the line level, function level, and file level, ReposVul contains more comprehensive granularities, including repository-level, file-level, function-level, and line-level vulnerabilities, which considers inter-procedural vulnerabilities and provides information beyond a single patch. As shown in Table 4, ReposVul comprises 14,706 files from 6,897 patches. Specifically, ReposVul encompasses 212,790 functions in C, 20,302 in C++, 2,816 in Java, and 26,308 in Python, respectively.

Extensive CWE coverage: As shown in Table 3, ReposVul covers more CWE types than all other datasets: 149 CWE types in C, 105 in C++, 129 in Java, and 159 in Python, respectively.

Comparison with LLMs and static analysis tools: We compare the label quality of ReposVul with that obtained by LLMs and static analysis tools separately. The results are presented in Table 5. Specifically, we randomly select 50 CVE cases for each programming language. We recruit three academic researchers as participants, each of whom possesses over five years of software vulnerability detection experience. According to the vulnerability description and the commit message of the patch, the participants manually label the code changes of each file in the patch as "Yes" or "No", corresponding to whether the code changes of the file are relevant to the vulnerability fix or not. After assessing, participants reach agreement on 96% of the cases. For the remaining discrepancies, participants negotiate and reach a consensus. As shown in Table 5, we observe that the proposed labeling method achieves the highest accuracy of 85% across the four programming languages on average, exceeding LLMs by 10% and static analysis tools by 5.5%.
ReposVul encompasses CWE types that extend across various programming languages, since some CWE types are not language-specific. This indicates that ReposVul provides more comprehensive data than existing benchmarks.

Effective labeling methods: The previous work [10] has identified the noisy data problem introduced by the current labeling methods in existing datasets. In this paper, ReposVul proposes the vulnerability untangling module for improving the vulnerability data quality. It involves the vulnerability rules and domain knowledge from static analysis tools and the strong contextual understanding capability from LLMs.

Comparison with the existing datasets: The previous work [10] has shown that 20-71% of vulnerability labels are inaccurate in the state-of-the-art OSS vulnerability datasets. Analyzing the results in Table 5, we observe that ReposVul shows labeling accuracy at 85%, 90%, 85%, and 80% on C, C++, Java, and Python, respectively. Compared with the existing OSS vulnerability datasets as reported in [10], ReposVul achieves relatively higher accuracy across the four programming languages, especially for the C++ programming language with an outstanding accuracy of 90%. Our observations indicate that ReposVul's label quality is better than previous datasets due to the integration of contextual understanding capability from LLMs and domain knowledge from static analysis tools.
Recognition of outdated patches: Current vulnerability datasets do not distinguish outdated patches. ReposVul employs the trace-based filtering module to recognize potential outdated patches. The trace-based filtering module integrates the file path trace-based filter and the commit time trace-based filter to provide labels for outdated patches.

Specific richness of additional information: ReposVul contains the richest additional information, including CVE descriptions, CVSS, and patch submission history illustrated in Table 1, and static analysis information illustrated in Table 2. Comprehensive information on vulnerabilities enables developers and researchers to take effective measures for vulnerability detection.

Answer to RQ2: After manual checking of a subset of labelled data, the label quality of ReposVul outperforms the existing datasets, achieving accuracy of 85%, 90%, 85%, and 80% on C, C++, Java, and Python, respectively. The proposed labeling method also performs better than LLMs and static analysis tools applied separately.

Table 3: Comparison between ReposVul and six widely-used vulnerability detection datasets.

    Baseline         Line  Function  File  Repository  CWE Types  Labeling Method         Outdated Patch Recognition  Additional Information
    BigVul [12]      ✓     ✓         ✗     ✗           91         Commit Code Diff        ✗                           ✓
    D2A [30]         ✓     ✓         ✗     ✗           -          Static Analysis         ✗                           ✓
    Devign [11]      ✗     ✓         ✗     ✗           -          Manually                ✗                           ✗
    Reveal [13]      ✗     ✓         ✗     ✗           -          Commit Code Diff        ✗                           ✗
    CrossVul [31]    ✗     ✗         ✓     ✗           168        Commit Code Diff        ✗                           ✓
    DiverseVul [32]  ✗     ✓         ✗     ✗           150        Commit Code Diff        ✗                           ✓
    ReposVul         ✓     ✓         ✓     ✓           236        LLMs + Static Analysis  ✓                           ✓
Table 4: Statistics of ReposVul.

    Languages  CVE Entries  Patches  Files   Functions
    C          3,565        4,010    7,515   212,790
    C++        631          689      1,506   20,302
    Java       786          888      2,925   2,816
    Python     1,152        1,310    2,760   26,308
    Total      6,134        6,897    14,706  262,216

Table 5: Results of manual examination in 50 cases per programming language.

    Language  Method                Accuracy
    C         LLMs                  70%
              Static analysis tool  82%
              ReposVul              85%
    C++       LLMs                  80%
              Static analysis tool  84%
              ReposVul              90%
    Java      LLMs                  78%
              Static analysis tool  78%
              ReposVul              85%
    Python    LLMs                  72%
              Static analysis tool  74%
              ReposVul              80%

Table 6: Performance of different LLMs evaluation in 20 CWE cases. The numbers split by "/" in the "Relevant" (or "Irrelevant") column represent the number of correct and incorrect responses in relevant (or irrelevant) code changes. The symbol "-" indicates that the corresponding statistic is unknown.

    Model               Size  Relevant  Irrelevant  Accuracy
    Llama2-7b [35]      7B    4/0       0/16        20%
    Llama2-13b [35]     13B   4/0       0/16        20%
    Baichuan2 [34]      7B    1/3       9/7         50%
    CodeLlama-7b [37]   7B    3/1       9/7         60%
    CodeLlama-13b [37]  13B   3/1       10/6        65%
    ChatGLM [33]        6B    2/2       11/5        65%
    ChatGPT [38]        20B   3/1       12/4        75%
    GPT-4 [39]          -     4/0       13/3        85%
    Tongyi [19]         -     4/0       13/3        85%

3.3 RQ3: Influence of Different LLMs and Prompt Design on Label Quality
To answer RQ3, we conduct experiments to analyze the impact of the LLM and the prompt in the vulnerability untangling module on the label quality of ReposVul.

LLM selection: We present the nine investigated LLMs in Table 6. ChatGLM [33] and Baichuan2 [34] are open-source LLMs from THUDM and baichuan-ai, respectively, trained on Chinese and English corpora. Llama2 [35] boasts a larger training dataset compared to Llama [36]. CodeLlama [37] is trained and fine-tuned using code-related data based on Llama2. ChatGPT [38] and GPT-4 [39] are commercial LLMs from OpenAI. Tongyi [19] is developed by Alibaba Cloud, demonstrating excellent performance across multiple tasks. The experimental results are presented in Table 6. The results show that GPT-4 and Tongyi achieve the highest accuracy of 85%, identifying all code changes related to vulnerability fixes and the majority of code changes unrelated to vulnerability fixes. Llama2, on the other hand, considers all code changes as relevant to vulnerability fixes and thus cannot effectively distinguish tangled patches. The other LLMs' accuracy does not exceed 80%. Considering that Tongyi [19] has the highest accuracy with free API accessibility, we choose this LLM for labeling.
In this section, we randomly select 20 CVE cases, encompassing code changes that may or may not be associated with vulnerability fixes. We recruit the same three academic researchers who participate in the manual annotation of Section 3.2 for labeling. Participants manually label the code changes as "Relevant" or "Irrelevant", respectively. After assessing, participants reach agreement on 100% of the 20 cases.

Prompt design: The previous work [6] has demonstrated that code-related task performance is largely influenced by the prompt. We also construct four variations, including prompts without the CWE description (i.e., w/o CWE Description), the CWE solution (i.e., w/o CWE Solution), the commit message (i.e., w/o Commit Message), and the function of the code changes (i.e., w/o Function).
As illustrated in Table 7, the prompt used by ReposVul achieves the highest accuracy of 85%. The four variations result in accuracy decreases of 30%, 15%, 5%, and 5%, respectively.

Table 7: Performance of different prompts evaluation in 20 CWE cases. The numbers split by "/" in the "Relevant" (or "Irrelevant") column represent the number of correct and incorrect responses in relevant (or irrelevant) code changes.

    Prompt               Relevant  Irrelevant  Accuracy
    w/o CWE Description  3/1       8/8         55%
    w/o CWE Solution     3/1       11/5        70%
    w/o Commit Message   3/1       13/3        80%
    w/o Function         4/0       12/4        80%
    ReposVul             4/0       13/3        85%

Projects: Figure 5(c) presents the Top-10 projects ranked by the number of outdated patches. The proportion of outdated patches is highest in the Linux project, attaining a rate of 16.05%. Following closely are ImageMagick and Vim, exhibiting proportions of 10.26% and 6.58%, respectively. The heightened prevalence in Linux can be attributed to its expansive project scope and the vast number of files, making it susceptible to the introduction of new vulnerabilities during the submission of patches. Notably, ImageMagick and Vim manifest a considerable number of outdated patches despite their smaller project sizes. This may be due to delays in the patch information maintained by the security vendors.

Programming languages: Figure 5(d) illustrates the distribution of outdated patches categorized by programming languages. Among these languages, C constitutes the predominant share at 70.7%, followed by Java at 14.6%, Python at 8.1%, and C++ at 6.6%.
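The Accuracy column in Tables 6 and 7 follows directly from the correct counts over the 20 manually labeled cases (4 relevant plus 16 irrelevant changes); a one-line check, with the helper name being an illustrative assumption:

```python
# Accuracy over the 20 cases = correct relevant + correct irrelevant answers,
# divided by the total number of cases.
def accuracy(correct_relevant, correct_irrelevant, total=20):
    return (correct_relevant + correct_irrelevant) / total

assert accuracy(4, 13) == 0.85   # ReposVul prompt, GPT-4, Tongyi: "4/0, 13/3"
assert accuracy(4, 0) == 0.20    # Llama2: judges every change as relevant
assert accuracy(3, 8) == 0.55    # w/o CWE Description: "3/1, 8/8"
```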
The results indicate that the CWE description has the greatest impact on the LLMs owing to its rich vulnerability information, compared to the CWE solution and the function of the code changes. The relatively small impact of the commit message on the LLMs may be due to inaccurate and redundant information in the commit messages.

Answer to RQ3: Among different LLMs, GPT-4 and Tongyi achieve the best accuracy of 85% in labeling the relevance of code changes to vulnerability fixes. In prompt design, the CWE description and the CWE solution have a relatively considerable impact on the LLMs' decision. The proposed prompt achieves the highest accuracy of 85%.

3.4 RQ4: Performance of Filtering Outdated Patches in ReposVul
To answer RQ4, we present the statistics of the outdated patches recognized by ReposVul from different aspects, including CWEs, time, projects, and programming languages.

CWEs: Figure 5(a) shows the distribution of outdated patches and total patches of the Top-10 CWEs by the number of outdated patches. We observe that CWE-119 (Improper Restriction of Operations within the Bounds of a Memory Buffer) [40], CWE-787 (Out-of-bounds Write) [41], and CWE-125 (Out-of-bounds Read) [14] contain the most outdated patches, including 121, 108, and 97 outdated patches, respectively.

The heightened occurrence within the C programming language is associated with improper buffer operations related to CWE-119, CWE-787, and CWE-125. The vulnerabilities are notably attributed to the incorporation of flexible arrays and the absence of built-in boundary-checking mechanisms.

Answer to RQ4: Within the identified outdated patches, those associated with buffer operations comprise the majority. The quantity of outdated patches increases over time, yet their proportion in the total patches diminishes. Linux and C have the highest proportions of outdated patches at 16.05% and 70.7%, respectively, across projects and programming languages.

4 DISCUSSION
4.1 Data Application
ReposVul is the first repository-level vulnerability dataset across multiple programming languages. ReposVul can be used for addressing a range of OSS vulnerability-related tasks. We advocate for the utilization of ReposVul as a benchmark, promoting a standardized and practical evaluation of model performance.
These CWEs are associated with buffer operations, indicating that developers are more prone to introducing new vulnerabilities when modifying code snippets related to buffers, thereby resulting in outdated patches.

Time: Figure 5(b) depicts the distribution of outdated patches and total patches throughout the past decade. It reveals a consistent upward trend in the overall number of patches associated with OSS, ascending from 264 in 2014 to 1,132 in 2022. Concurrently, the count of outdated patches has seen a modest increase, progressing from 53 in 2014 to 118 in 2022. Notably, in 2023, both metrics exhibit a decline, which can be attributed to undisclosed vulnerabilities. Moreover, the ratio of outdated patches to total patches demonstrates a notable decrease, plummeting from 20.07% in 2014 to 10.42% in 2022. This decline indicates a heightened emphasis by developers on enhancing the robustness of patches.

Figure 5: Outdated patches about CWEs, time, projects, and programming languages. (a) Outdated and all patches of the Top-10 CWEs. (b) Outdated and all patches over the past decade. (c) Top-10 projects by the number of outdated patches. (d) Outdated patches of the four programming languages.

Multi-granularity vulnerability detection: ReposVul covers multi-granularity information, including repository-level, file-level, function-level, and line-level features. Researchers can leverage these features to detect inter-procedural and intra-procedural vulnerabilities. In addition, the experiments have demonstrated that ReposVul outperforms the existing state-of-the-art datasets in label quality. It supports DL-based vulnerability detection methods for better training in the future.

Patch management: ReposVul covers a rich set of patch information including the submission date, parent patches, and historical submission information of vulnerability patches. Researchers and practitioners can utilize this timely information to learn the patching process of existing software vulnerabilities and identify outdated patches.

Vulnerability repair: ReposVul provides CVE descriptions, CWE solutions, CWE consequences, and CWE methods. This information aids developers to better understand the causes of and solutions to vulnerabilities. Future research is expected to incorporate the rich contextual information for automatically repairing OSS vulnerabilities.

For example, Russell et al. [51] integrate the results of multiple static analysis tools to determine whether vulnerabilities exist in code snippets. D2A [30] collects vulnerabilities from six open-source projects and employs Infer [50] to label data. However, all these existing vulnerability datasets face challenges in label quality and lack inter-procedural vulnerabilities. In this paper, we propose the vulnerability untangling module for distinguishing vulnerability-fixing related code changes from tangled patches. We also propose the multi-granularity dependency extraction module for capturing the inter-procedural call relationships.

4.2 Threats and Limitations
One threat to validity comes from the collecting source platforms. During the collecting process, we collect ReposVul from GitHub, Google Git, and bugs.chromium, which leads to missing projects that are hosted on other platforms.
The second threat to validity is the programming language. We only extract repository-level dependencies for four widely-used programming languages due to their language-specific features. However, vulnerabilities also exist in other languages, such as JavaScript, Go, and PHP. We plan to extract repository-level dependencies for more languages in future work. Another threat to validity comes from the collecting time: we only collect CVEs from 2010 onwards, so earlier CVEs are not included in ReposVul, and some vulnerabilities discovered and fixed in previous years may be missed.

5 RELATED WORK
5.1 OSS Vulnerability Dataset
The previous works utilize different methods for constructing OSS vulnerability datasets, consisting of manual checking-based, commit code diff-based, and static analysis tool-based generation. Manual checking-based datasets [42–45] utilize test cases crafted artificially.

5.2 OSS Vulnerability Detection
OSS vulnerability detection is essential to identify security flaws and maintain software security. The existing OSS vulnerability detection methods consist of program analysis-based methods [4, 5, 21, 52] and learning-based vulnerability detection methods [7, 8, 53–56]. The program analysis-based methods utilize expert knowledge in extracting features to identify vulnerabilities, including data flow analysis [57] and symbolic execution [58]. Data flow analysis [23, 24, 59–61] tracks the data flow along the execution paths of the program to obtain status information at program points, thus detecting vulnerabilities based on the security of the program points. Symbolic execution [62–64] employs symbolic inputs instead of actual inputs and detects program vulnerabilities by determining whether symbolic expressions satisfy constraints.

The learning-based methods can be classified into sequence-based [6, 9, 65, 66] and graph-based methods [67–70] according to the representation of the source code. Sequence-based methods transform code into token sequences. For example, VulDeePecker [71] utilizes code gadgets to represent programs and employs a Bidirectional Long Short-Term Memory (BiLSTM) network for training.
For example, SARD [46] includes samples from student-authored and industrial production. Pradel et al. [47] employ code transformation to convert non-vulnerable samples into vulnerable ones. However, the manual checking-based methods face limitations in labeling efficiency. Commit code diff-based datasets [12, 13, 32] gather patches from open-source repositories and extract vulnerability data from code changes in the patches. CVEfixes [48] fetches vulnerable entries from the National Vulnerability Database (NVD) [49] and corresponding fixes based on the entries' reference links.

μVulDeePecker [72] combines three BiLSTM networks, enabling the detection of various types of vulnerabilities. Russell et al. [51] integrate convolutional neural networks and recurrent neural networks for feature extraction, utilizing a random forest classifier for capturing vulnerability patterns. Graph-based methods represent code as graphs and use graph neural networks for software vulnerability detection. Qian et al. [73] employ attributed control flow graphs to construct a vulnerability search engine.
CrossVul [31] generates an authentic dataset spanning 40 programming languages and 1,675 projects, while it only provides file-level source code. In addition, some datasets use static analysis tools [22, 24, 50] to label vulnerability data.

Devign [11] adopts gated graph neural networks to process multiple directed graphs generated from source code. However, all these learning-based methods require a large amount of high-quality labeled samples to achieve good performance. In this paper, we construct a repository-level high-quality dataset named ReposVul to promote the model training.

6 CONCLUSION
In this paper, we propose an automated data collection framework and construct the first repository-level vulnerability dataset named ReposVul. Our framework consists of a vulnerability untangling module for identifying tangled patches, a multi-granularity dependency extraction module for extracting inter-procedural vulnerabilities, and a trace-based filtering module for recognizing outdated patches.
ReposVul covers 6,134 CVE entries across 1,491 projects and four programming languages. After comprehensive data analysis and manual checking, ReposVul proves to be high-quality and widely applicable compared with the existing vulnerability datasets. Our source code, as well as ReposVul, are available at https://github.com/Eshe0922/ReposVul.

REFERENCES
[1] Cisco. "Cisco IOS XE Software Web UI Elevation of Privilege Vulnerability", 2023. https://nvd.nist.gov/vuln/detail/CVE-2023-20198.
[2] Roberto Baldoni et al. A survey of symbolic execution techniques. ACM Comput. Surv., 51(3):50:1–50:39, 2018.
[3] Richard Bonett et al. Discovering flaws in security-focused static analysis tools for Android using systematic mutation. In USENIX Security Symposium, pages 1263–1280. USENIX Association, 2018.
[4] Shuitao Gan et al. GREYONE: Data flow sensitive fuzzing. In USENIX Security Symposium, pages 2577–2594. USENIX Association, 2020.
[5] Craig Beaman et al. Fuzzing vulnerability discovery techniques: Survey, challenges and future directions. Comput. Secur., 120:102813, 2022.
[6] Chaozheng Wang et al. No more fine-tuning? An experimental evaluation of prompt tuning in code intelligence. In ESEC/FSE, pages 382–394. ACM, 2022.
[7] Zhen Li et al. SySeVR: A framework for using deep learning to detect software vulnerabilities. IEEE Trans. Dependable Secur. Comput., 19(4):2244–2258, 2022.
[8] Song Wang, Taiyue Liu, and Lin Tan. Automatically learning semantic features for defect prediction. In ICSE, pages 297–308. ACM, 2016.
[9] Hoa Khanh Dam et al. Automatic feature learning for predicting vulnerable software components. IEEE Trans. Software Eng., 47(1):67–85, 2021.
[10] Roland Croft, Muhammad Ali Babar, and M. Mehdi Kholoosi. Data quality for software vulnerability datasets. In ICSE, pages 121–133. IEEE, 2023.
[11] Yaqin Zhou et al. Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. In NeurIPS, pages 10197–10207, 2019.
[12] Jiahao Fan, Yi Li, Shaohua Wang, and Tien N. Nguyen. A C/C++ code vulnerability dataset with code changes and CVE summaries. In MSR, pages 508–512. ACM, 2020.
[13] Saikat Chakraborty, Rahul Krishna, Yangruibo Ding, and Baishakhi Ray. Deep learning based vulnerability detection: Are we there yet? IEEE Trans. Software Eng., 48(9):3280–3296, 2022.
[14] CWE-125: Out-of-bounds Read, [n.d.]. https://cwe.mitre.org/data/definitions/125.html.
[15] WhiteSource. "Mend Bolt", 2023. https://www.mend.io/free-developer-tools/.
[16] "GitHub", 2023. https://github.com/.
[17] "Google Git", 2023. https://code.googlesource.com/git/.
[18] "Bugs.chromium", 2023. https://bugs.chromium.org/p/chromium/issues/list.
[22] Cppcheck team. "Cppcheck", [n.d.]. http://cppcheck.sourceforge.net/.
[23] D. A. Wheeler. "Flawfinder", [n.d.]. https://dwheeler.com/flawfinder/.
[24] andrew-d. "rough-auditing-tool-for-security", 2023. https://github.com/andrew-d/rough-auditing-tool-for-security.
[25] r2c. "Semgrep", 2021. https://semgrep.dev.
[26] CWE-190: Integer Overflow or Wraparound, [n.d.]. https://cwe.mitre.org/data/definitions/190.html.
[27] Sergey Poznyakoff. "GNU cflow", 2005. https://www.gnu.org/software/cflow/.
[28] "java-all-call-graph", [n.d.]. https://github.com/Adrninistrator/java-all-call-graph.
[29] Vitalis Salis et al. PyCG: Practical call graph generation in Python. In ICSE, pages 1646–1657. IEEE, 2021.
[30] Yunhui Zheng et al. D2A: A dataset built for AI-based vulnerability detection methods using differential analysis. In ICSE (SEIP), pages 111–120. IEEE, 2021.
[31] Georgios Nikitopoulos, Konstantina Dritsa, Panos Louridas, and Dimitris Mitropoulos. CrossVul: A cross-language vulnerability dataset with commit data. In ESEC/FSE, pages 1565–1569. ACM, 2021.
[32] Yizheng Chen et al. DiverseVul: A new vulnerable source code dataset for deep learning based vulnerability detection. In RAID, pages 654–668. ACM, 2023.
[33] Aohan Zeng et al. GLM-130B: An open bilingual pre-trained model. CoRR, abs/2210.02414, 2022.
[34] Aiyuan Yang et al. Baichuan 2: Open large-scale language models, 2023.
[35] Hugo Touvron et al. Llama 2: Open foundation and fine-tuned chat models, 2023.
[36] Hugo Touvron et al. LLaMA: Open and efficient foundation language models, 2023.
[37] Baptiste Rozière et al. Code Llama: Open foundation models for code, 2023.
[38] OpenAI. "ChatGPT", [n.d.]. https://openai.com/blog/chatgpt.
[39] OpenAI. "GPT-4", [n.d.]. https://openai.com/research/gpt-4.
[40] CWE-119: Improper Restriction of Operations within the Bounds of a Memory Buffer, [n.d.]. https://cwe.mitre.org/data/definitions/119.html.
[41] CWE-787: Out-of-bounds Write, [n.d.]. https://cwe.mitre.org/data/definitions/787.html.
[42] Aidong Xu et al. Vulnerability detection for source code using contextual LSTM. In ICSAI, pages 1225–1230. IEEE, 2018.
[43] Xu Duan et al. VulSniper: Focus your attention to shoot fine-grained vulnerabilities. In IJCAI, pages 4665–4671. ijcai.org, 2019.
[19] AlibabaCloud.“TongyiQianwen”,[n.d.].https://qianwen.aliyun.com/. [44] NicholasSaccente,JoshDehlinger,LinDeng,SuranjanChakraborty,andYin [20] “Tree-sitter”,[n.d.].https://tree-sitter.github.io/tree-sitter/. Xiong. Projectachilles:Aprototypetoolforstaticmethod-levelvulnerability [21] MelinaKulenovicandDzenanaDonko.Asurveyofstaticcodeanalysismethods detectionofjavasourcecodeusingarecurrentneuralnetwork.InASEWorkshops, forsecurityvulnerabilitiesdetection.InMIPRO,pages1381–1386.IEEE,2014. pages114–121.IEEE,2019.ICSE,2024,April2024,Lisbon,Portugal XinchenWang★,RuidaHu★,CuiyunGao∗,Xin-ChengWen,YujiaChen,andQingLiao [45] HantaoFeng,XiaotongFu,HongyuSun,HeWang,andYuqingZhang.Efficient [72] DeqingZou,SujuanWang,ShouhuaiXu,ZhenLi,andHaiJin.𝜇vuldeepecker: vulnerabilitydetectionbasedonabstractsyntaxtreeanddeeplearning. In Adeeplearning-basedsystemformulticlassvulnerabilitydetection.IEEETrans. INFOCOMWorkshops,pages722–727.IEEE,2020. DependableSecur.Comput.,18(5):2224–2236,2021. [46] NIST. SARD:Softwareassurancereferencedataset.,2022. https://samate.nist. [73] QianFeng,RundongZhou,ChengchengXu,YaoCheng,BrianTesta,andHeng gov/SRD/index.php. Yin.Scalablegraph-basedbugsearchforfirmwareimages.InCCS,pages480–491. [47] MichaelPradelandKoushikSen.Deepbugs:alearningapproachtoname-based ACM,2016. bugdetection.Proc.ACMProgram.Lang.,2(OOPSLA):147:1–147:25,2018. [48] GuruPrasadBhandari,AmaraNaseer,andLeonMoonen.Cvefixes:automatedcol- lectionofvulnerabilitiesandtheirfixesfromopen-sourcesoftware.InPROMISE, pages30–39.ACM,2021. [49] NIST.“NationalVulnerabilityDatabase(NVD)”,2022.https://nvd.nist.gov/. [50] Facebook.“Inferstaticanalyzer”,[n.d.].https://fbinfer.com/. [51] RebeccaL.Russell,LouisY.Kim,LeiH.Hamilton,TomoLazovich,JacobHarer, OnurOzdemir,PaulM.Ellingwood,andMarcW.McConley.Automatedvulnera- bilitydetectioninsourcecodeusingdeeprepresentationlearning.InICMLA, pages757–762.IEEE,2018. 
[52] JannikPewny,FelixSchuster,LukasBernhard,ThorstenHolz,andChristian Rossow.Leveragingsemanticsignaturesforbugsearchinbinaryprograms.In ACSAC,pages406–415.ACM,2014. [53] Xin-ChengWen,XinchenWang,CuiyunGao,ShaohuaWang,YangLiu,and ZhaoquanGu.Whenlessisenough:Positiveandunlabeledlearningmodelfor vulnerabilitydetection.InASE,pages345–357.IEEE,2023. [54] XiaomengWang,TaoZhang,RunpuWu,WeiXin,andChangyuHou.CPGVA: codepropertygraphbasedvulnerabilityanalysisbydeeplearning. InICAIT, pages184–188.IEEE,2018. [55] Xin-ChengWen,CuiyunGao,JiaxinYe,ZhihongTian,YanJia,andXuanWang. Meta-pathbasedattentionalgraphlearningmodelforvulnerabilitydetection. CoRR,abs/2212.14274,2022. [56] Xin-ChengWen,CuiyunGao,FengLuo,HaoyuWang,GeLi,andQingLiao. LIVABLE:exploringlong-tailedclassificationofsoftwarevulnerabilitytypes. CoRR,abs/2306.06935,2023. [57] AlfredV.Aho,RaviSethi,andJeffreyD.Ullman.Compilers:Principles,Techniques, andTools. Addison-Wesleyseriesincomputerscience/Worldstudentseries edition.Addison-Wesley,1986. [58] RobertS.Boyer,BernardElspas,andKarlN.Levitt.SELECT-aformalsystem fortestinganddebuggingprogramsbysymbolicexecution.InReliableSoftware, pages234–245.ACM,1975. [59] AniquaZ.BasetandTamaraDenning.Idepluginsfordetectinginput-validation vulnerabilities.In2017IEEESecurityandPrivacyWorkshops(SPW),pages143–146, 2017. [60] JamesWaldenandMaureenDoyle.Savi:Static-analysisvulnerabilityindicator. IEEESecurity&Privacy,10(3):32–39,2012. [61] BillBaloglu. Howtofindandfixsoftwarevulnerabilitieswithcoveritystatic analysis.InSecDev,page153.IEEEComputerSociety,2016. [62] PatriceGodefroid,MichaelY.Levin,andDavidA.Molnar.Automatedwhitebox fuzztesting.InNDSS.TheInternetSociety,2008. [63] VitalyChipounov,VolodymyrKuznetsov,andGeorgeCandea.S2E:aplatform forin-vivomulti-pathanalysisofsoftwaresystems.InASPLOS,pages265–278. ACM,2011. 
[64] HongliangLiang,LeiWang,DongyangWu,andJiuyunXu.Mlsa:Astaticbugs analysistoolbasedonllvmir.In201617thIEEE/ACISInternationalConferenceon SoftwareEngineering,ArtificialIntelligence,NetworkingandParallel/Distributed Computing(SNPD),pages407–412,2016. [65] FangWu,JigangWang,JiqiangLiu,andWeiWang. Vulnerabilitydetection withdeeplearning.In20173rdIEEEInternationalConferenceonComputerand Communications(ICCC),pages1298–1302,2017. [66] GustavoGrieco,GuillermoLuisGrinblat,LucasC.Uzal,SanjayRawat,Josselin Feist,andLaurentMounier.Towardlarge-scalevulnerabilitydiscoveryusing machinelearning.InCODASPY,pages85–96.ACM,2016. [67] YuemingWu,DeqingZou,ShihanDou,WeiYang,DuoXu,andHaiJin.Vulcnn: |
Animage-inspiredscalablevulnerabilitydetectionsystem.InICSE,pages2365– 2376.ACM,2022. [68] HuantingWang,GuixinYe,ZhanyongTang,ShinHweiTan,SongfangHuang, DingyiFang,YansongFeng,LizhongBian,andZhengWang.Combininggraph- basedlearningwithautomateddatacollectionforcodevulnerabilitydetection. IEEETrans.Inf.ForensicsSecur.,16:1943–1958,2021. [69] AnhVietPhan,MinhLeNguyen,andLamThuBui. Convolutionalneural networksovercontrolflowgraphsforsoftwaredefectprediction.InICTAI,pages 45–52.IEEEComputerSociety,2017. [70] Xin-ChengWen,YupanChen,CuiyunGao,HongyuZhang,JieM.Zhang,and QingLiao.Vulnerabilitydetectionwithgraphsimplificationandenhancedgraph representationlearning.In45thIEEE/ACMInternationalConferenceonSoftware Engineering,ICSE2023,Melbourne,Australia,May14-20,2023,pages2275–2286. IEEE,2023. [71] ZhenLi,DeqingZou,ShouhuaiXu,XinyuOu,HaiJin,SujuanWang,ZhijunDeng, andYuyiZhong.Vuldeepecker:Adeeplearning-basedsystemforvulnerability detection.InNDSS.TheInternetSociety,2018. |
arXiv:2401.14617

A Systematic Literature Review on Explainability for Machine/Deep Learning-based Software Engineering Research

SICONG CAO, Yangzhou University, China
XIAOBING SUN∗, Yangzhou University, China
RATNADIRA WIDYASARI, Singapore Management University, Singapore
DAVID LO, Singapore Management University, Singapore
XIAOXUE WU, Yangzhou University, China
LILI BO, Yangzhou University, China
JIALE ZHANG, Yangzhou University, China
BIN LI, Yangzhou University, China
WEI LIU, Yangzhou University, China
DI WU, University of Southern Queensland, Australia
YIXIN CHEN, Washington University in St. Louis, USA

The remarkable achievements of Artificial Intelligence (AI) algorithms, particularly in Machine Learning (ML) and Deep Learning (DL), have fueled their extensive deployment across multiple sectors, including Software Engineering (SE). However, due to their black-box nature, these promising AI-driven SE models are still far from being deployed in practice. This lack of explainability poses unwanted risks for their applications in critical tasks, such as vulnerability detection, where decision-making transparency is of paramount importance. This paper endeavors to elucidate this interdisciplinary domain by presenting a systematic literature review of approaches that aim to improve the explainability of AI models within the context of SE. The review canvasses work appearing in the most prominent SE & AI conferences and journals, and spans 63 papers across 21 unique SE tasks. Based on three key Research Questions (RQs), we aim to (1) summarize the SE tasks where XAI techniques have shown success to date; (2) classify and analyze different XAI techniques; and (3) investigate existing evaluation approaches. Based on our findings, we identified a set of challenges remaining to be addressed in existing studies, together with a roadmap highlighting potential opportunities we deemed appropriate and important for future work.
∗Corresponding author

Authors' addresses: Sicong Cao, School of Information Engineering, Yangzhou University, Yangzhou, China, DX120210088@yzu.edu.cn; Xiaobing Sun, School of Information Engineering, Yangzhou University, Yangzhou, China, xbsun@yzu.edu.cn; Ratnadira Widyasari, School of Computing and Information Systems, Singapore Management University, Singapore, ratnadiraw.2020@phdcs.smu.edu.sg; David Lo, School of Computing and Information Systems, Singapore Management University, Singapore, davidlo@smu.edu.sg; Xiaoxue Wu, School of Information Engineering, Yangzhou University, Yangzhou, China, xiaoxuewu@yzu.edu.cn; Lili Bo, School of Information Engineering, Yangzhou University, Yangzhou, China, lilibo@yzu.edu.cn; Jiale Zhang, School of Information Engineering, Yangzhou University, Yangzhou, China, jialezhang@yzu.edu.cn; Bin Li, School of Information Engineering, Yangzhou University, Yangzhou, China, lb@yzu.edu.cn; Wei Liu, School of Information Engineering, Yangzhou University, Yangzhou, China, weiliu@yzu.edu.cn; Di Wu, School of Mathematics, Physics, and Computing, University of Southern Queensland, Toowoomba, Australia, di.wu@unisq.edu.au; Yixin Chen, Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, USA, chen@cse.wustl.edu.

© 2023 Association for Computing Machinery.
Manuscript submitted to ACM

CCS Concepts: • General and reference → Surveys and overviews; • Computing methodologies → Neural networks; Artificial intelligence; • Software and its engineering → Software development techniques.
Additional Key Words and Phrases: Explainable AI, XAI, interpretability, neural networks, survey

ACM Reference Format:
Sicong Cao, Xiaobing Sun, Ratnadira Widyasari, David Lo, Xiaoxue Wu, Lili Bo, Jiale Zhang, Bin Li, Wei Liu, Di Wu, and Yixin Chen. 2023. A Systematic Literature Review on Explainability for Machine/Deep Learning-based Software Engineering Research. 1, 1 (January 2023), 41 pages. https://doi.org/XXXXXXX.XXXXXXX

1 INTRODUCTION

Software Engineering (SE) is a discipline that deals with the design, development, testing, and maintenance of software systems. As software continues to pervade a wide range of industries, diverse and complex SE data, such as source code, bug reports, and test cases, have grown to become unprecedentedly large and complex. Driven by the success of Artificial Intelligence (AI) algorithms in various research fields, the SE community has shown great enthusiasm for exploring and applying advanced Machine Learning (ML)/Deep Learning (DL) models to automate or enhance SE tasks typically performed manually by developers, including automated program repair [38, 56], code generation [49], and clone detection [105, 109]. A recent report from the 2021 SEI Educator's Workshop has referred to AI for Software Engineering (AI4SE) as an umbrella term to describe research that uses AI algorithms to tackle SE tasks [70].

Despite the unprecedented performance achieved by ML/DL models with higher complexity, they have been slow to be deployed in the SE industry. This reluctance arises due to prioritizing accuracy over Explainability – AI systems are notoriously difficult to understand for humans because of their complex configurations and large model sizes [32].
From the perspective of the model user, explainability is needed to establish trust when imperfect "black-box" models are used. For instance, a developer may seek to comprehend the rationale behind a DL-based vulnerability detection model's decision, i.e., why it predicts a particular code snippet as vulnerable, to facilitate analyzing and fixing the vulnerability [68, 129]. For the model designers, explainability is required to investigate failure cases and direct the weak AI models in the proper paths as intended [93]. In other words, merely a simple decision result (e.g., a binary classification label) without any explanation is often not good enough. This fact stimulates the urgent demand for designing algorithms capable of explaining the decision-making process of black-box AI models, leading to the creation of a novel research topic termed eXplainable AI (XAI) [20].

Related Surveys. Substantial efforts have been underway in recent years to enhance the explainability of black-box models. These approaches operate either by analyzing the impact of specific regions of the input on the final prediction [55, 78], or by inspecting the network activation [83, 134]. Table 1 summarises some of their successful applications in downstream fields, such as healthcare [59] and finance [121]. Simultaneously, the SE community also embarks on a series of research activities regarding explainability. This effort has led to advancements across a range of tasks including defect prediction [57, 74], vulnerability detection [36, 137], code summarization [30, 39], malware detection [51, 113], among many others [9, 50, 103]. Recently, Mohammadkhani et al. [61] conducted a seminal survey on explainable AI for SE research. They primarily focus on the explainability of AI4SE models over the which, the what, and the how dimensions, i.e., which SE tasks are being explained, what types of XAI techniques are adopted, and how they are evaluated. However, their survey still has several limitations. First, they only collected and analyzed 24 papers published before June 2022. There is a need to investigate a larger collection of papers, including more recent ones. Second,

Table 1.
Comparison of Our Work with Previous Surveys/Reviews on Explainability for Downstream Applications (●: covered; ◐: partially covered; ○: not covered)

Reference                  | Year | Application Scenario        | #Papers | Taxonomy   | Baseline | Benchmark | Metric
Patrício et al. [73]       | 2023 | Healthcare                  | 53      | Customized | ○        | ◐         | ◐
Messina et al. [59]        | 2022 | Healthcare                  | 40      | Hybrid     | ●        | ●         | ●
Hossain et al. [35]        | 2023 | Healthcare                  | 218     | Hybrid     | ●        | ●         | ●
Zini et al. [136]          | 2022 | Natural Language Processing | -       | General    | ○        | ○         | ●
Jie et al. [121]           | 2023 | Finance                     | 69      | General    | ○        | ○         | ○
Zablocki et al. [125]      | 2022 | Autonomous Driving          | -       | General    | ○        | ○         | ○
Mohammadkhani et al. [61]  | 2023 | Software Engineering        | 24      | General    | ○        | ○         | ●
Our survey                 | 2024 | Software Engineering        | 63      | Customized | ●        | ●         | ●

they directly borrowed the general taxonomy, i.e., ante- and post-hoc explanation, from the AI field to classify XAI techniques in SE tasks. This taxonomy is coarse-grained and may not be applicable to specific downstream applications. Third, they did not (or only partially) explicitly discuss the evaluation aspects of reviewed papers, including available baselines, prevalent benchmarks, and commonly employed evaluation metrics. The lack of comprehensive evaluation may pose obstacles to readers interested in deploying XAI techniques in practical SE scenarios.
Our Contributions. To effectively chart the most promising path forward for research on the utilization of XAI techniques in AI4SE, we conducted a Systematic Literature Review (SLR) to bridge this gap, providing valuable insights to the community. In this paper, we collected 63 primary studies published in 26 flagship conferences and journals over the last 12 years (2012-2023). We then summarized the research topics tackled by relevant papers, classified various XAI techniques used in diverse SE tasks, and analyzed the evaluation process. We anticipate that our findings will be instrumental in guiding future advancements in this rapidly evolving field. This study makes the following contributions:

• We present a systematic review of recent 63 primary studies on the topic of explainability for machine/deep learning-based software engineering research, for researchers and practitioners interested in prioritizing transparency in their solutions.
• We describe the key applications of XAI4SE encompassing a diverse range of 21 specific SE tasks, grouped into four core SE activities, including software development, software testing, software maintenance, and software management.
• We provide a taxonomy of XAI techniques used in SE based on their methodology, and an analysis of frequently used explanation formats.
• We summarize existing benchmarks used in XAI4SE approaches, including available baselines, prevalent benchmarks, and commonly employed evaluation metrics, to determine their validity.
• We discuss key challenges that using XAI encounters within the SE field and suggest several potential research directions for XAI4SE.
• We maintain an interactive website, https://riss-vul.github.io/xai4se-paper/, with all of our data and results to facilitate reproducibility and encourage contributions from the community to continue to push forward XAI4SE research.
Paper Organization. The remainder of this paper is structured as follows: Section 2 describes the preliminaries of XAI. Section 3 presents Research Questions (RQs) and our SLR methodology. The succeeding Sections 4-6 are devoted to answering each of these RQs individually. Section 7 presents the limitations of this study. Section 8 discusses the challenges that still need to be solved in future work and outlines a clear research roadmap of research opportunities. Section 9 concludes this paper.

Fig. 1. General taxonomy of the survey in terms of scope, stage, and portability. (The figure organizes XAI techniques along three axes: Scope, where is the XAI approach focusing on? local vs. global; Stage, how is the XAI approach developed? ante-hoc vs. post-hoc; Portability, what is the application scenario? model-specific vs. model-agnostic.)

2 EXPLAINABLE ARTIFICIAL INTELLIGENCE: PRELIMINARIES

This section first details the definitions of several critical terminologies commonly used in the XAI field. Then, we offer a general overview of the taxonomy of XAI approaches, aiming to furnish the reader with a solid comprehension of this topic.
2.1 Definition

The greatest challenge in establishing the concept of XAI in SE is the ambiguous definition of interpretability and explainability. Those terms, together with interpretation and explanation, are often used interchangeably in the literature [5, 65]. For example, quoting Doshi-Velez and Kim et al. [19], interpretability is the ability "to explain or to present in understandable terms to a human." By contrast, according to Lent et al. [95], an explainable AI means it can "present the user with an easily understandable chain of reasoning from the user's order, through the AI's knowledge and inference, to the resulting behavior." Some argue that the terms are closely related but distinguish between them, although there is no consensus on what the distinction exactly is [3, 66]. To ensure that we do not exclude work because of different terminologies, we equate them (and use them interchangeably) to keep a general, inclusive discussion regardless of this debate. In this survey, we frame explanations in the context of SE research using ML/DL and adopt the phrasing of Dam et al. [11] as follows:

Definition 1: Explainability or Interpretability of an AI-powered SE model measures the degree to which a human observer can understand the reasons behind its decision (e.g. a prediction).

Under this context, there are two distinct ways of achieving explainability: (❶) making the entire decision-making process transparent and comprehensible (i.e., white-box/interpretable models); and (❷) explicitly providing an explanation for each decision (i.e., surrogate models). In addition, since SE is task-oriented, explanations in SE tasks should be viewed from a perspective that values practical use [111]. As we observed in Section 5.2, there are multiple legitimate types of explanations for SE practitioners who have different intents and expertise.
2.2 Taxonomy

Taxonomy is a useful tool to get an overview of the XAI field. Based on the previous literature, most XAI approaches can be categorized according to three criteria [86]: (❶) scope (local vs. global); (❷) stage (ante-hoc vs. post-hoc); (❸) and portability (model-specific vs. model-agnostic). Fig. 1 illustrates the general taxonomy of the XAI techniques, and each category is detailed in the following.

Classification by Scope. The scope of explanations can be categorized as either local or global (some approaches can be extended to both) according to whether the explanations provide insights about the model functioning for a specific data sample or for the general data distribution, respectively. Local explainability approaches seek to explain why a model performs a specific prediction for an individual input. For example, given a file of interest and a defect prediction model, a locally explainable model might generate attribution scores for each code token in the file [107]. Representative local explainability approaches include LIME [78], SHAP [55], and Grad-CAM [83]. Global explainability approaches work on an array of inputs to give insights into the overall behavior of the black-box model. Various rule-based and tree-based models such as decision trees are in this category.
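The per-token attribution scores mentioned above can be illustrated with a minimal occlusion-style sketch: each token is masked in turn and the drop in the model's score is taken as that token's contribution. The `predict_vulnerable` scorer below is a toy stand-in invented for this illustration, not a model from any surveyed paper.

```python
# Minimal occlusion-based local explanation: mask each code token in turn
# and measure how much the model's score drops. `predict_vulnerable` is a
# hypothetical toy scorer, not a model from the surveyed literature.

def predict_vulnerable(tokens):
    # Toy scorer: flags raw, unbounded copy primitives (illustrative only).
    risky = {"strcpy", "gets", "sprintf"}
    return sum(tok in risky for tok in tokens) / max(len(tokens), 1)

def occlusion_attribution(tokens, score_fn, mask="<unk>"):
    base = score_fn(tokens)
    scores = {}
    for i in range(len(tokens)):
        masked = tokens[:i] + [mask] + tokens[i + 1:]
        # Positive score = removing the token lowers the prediction,
        # i.e., the token supports the "vulnerable" decision.
        scores[i] = base - score_fn(masked)
    return scores

tokens = ["strcpy", "(", "buf", ",", "user_input", ")", ";"]
attr = occlusion_attribution(tokens, predict_vulnerable)
top = max(attr, key=attr.get)
print(tokens[top])  # the token with the highest attribution: strcpy
```

Approaches such as LIME generalize this idea by fitting a simple surrogate model over many random perturbations instead of masking one token at a time.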
Classification by Stage. XAI can be categorized based on whether the explanation mechanism is inherent within the model's internal architecture or is implemented following the model's learning/development phase. The former is named ante-hoc explainability (also known as intrinsic explainability or self-explainability in certain literature), while the latter refers to post-hoc explainability. By definition, most inherently interpretable approaches are model-specific such that any change in the architecture will need significant changes in the approach itself. On the other hand, significant research interest in recent years is seen in developing post-hoc explanations as they can explain a well-trained black-box model decision without sacrificing the accuracy. Post-hoc approaches typically operate by perturbing parts of the data in a high-dimensional vector space to discern the contributions of various features to the model's predictions, or by analytically ascertaining the influence of different features on the prediction outcomes.

Classification by Portability. According to the models they can be applied to, explanation approaches can be
further classified as model-specific and model-agnostic. Model-specific approaches require access to the internal model architecture, meaning that they are restricted to explain only one specific family of models. For example, Deconvolution [126] is model-specific due to its ability to only explain the CNN model. Conversely, model-agnostic approaches can be used to explain arbitrary models without being constrained to any particular model architecture.

3 REVIEW METHODOLOGY

3.1 Research Question

In this paper, we focus on investigating the following Research Questions (RQs):

• RQ1: What types of AI4SE studies have been explored for explainability?
• RQ2: How are XAI techniques used to support SE tasks?
  - RQ2a: What types of XAI techniques are employed to generate explanations?
  - RQ2b: What format of explanation is provided for various SE tasks?
• RQ3: How well do XAI techniques perform in supporting various SE tasks?
  - RQ3a: What baseline techniques are used to evaluate XAI4SE approaches?
  - RQ3b: What benchmarks are used for these comparisons?
  - RQ3c: What evaluation metrics are employed to measure XAI4SE approaches?

Fig. 2. Study identification and selection process. (Flowchart: automated search over seven databases and manual search over 20 selected conferences and 6 selected journals, followed by page-limit filtering, deduplication, venue/title/abstract screening, full-text selection, quality assessment, and forward/backward snowballing.)
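The staged winnowing in Fig. 2 is, mechanically, a chain of filters and a deduplication pass over the candidate pool. The sketch below mirrors that shape with made-up records; the field names and predicates are hypothetical, not part of the survey's tooling.

```python
# Illustrative sketch of a staged study-selection funnel (cf. Fig. 2):
# page-limit filter, deduplication by normalized title, then a scope
# predicate. All records and field names here are made up.

def normalize(title):
    # Collapse case and punctuation so near-identical titles collide.
    return "".join(ch.lower() for ch in title if ch.isalnum())

def select_studies(candidates, min_pages=6, relevant=lambda c: True):
    # Stage 1: drop short papers (the survey excludes papers < 6 pages).
    pool = [c for c in candidates if c["pages"] >= min_pages]
    # Stage 2: remove duplicates returned by multiple search engines.
    seen, unique = set(), []
    for c in pool:
        key = normalize(c["title"])
        if key not in seen:
            seen.add(key)
            unique.append(c)
    # Stage 3: keep only studies matching the scope predicate.
    return [c for c in unique if relevant(c)]

candidates = [
    {"title": "Explainable Defect Prediction", "pages": 12, "xai": True},
    {"title": "explainable defect prediction", "pages": 12, "xai": True},  # dup
    {"title": "A Short Poster", "pages": 4, "xai": True},
    {"title": "Plain Defect Prediction", "pages": 10, "xai": False},
]
selected = select_studies(candidates, relevant=lambda c: c["xai"])
print(len(selected))  # one study survives all three stages
```

In the actual review, of course, the screening stages were performed manually by the authors; the point is only that each stage is a predicate applied to the surviving pool.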
3.2 Search Strategy

As shown in Fig. 2, following the standardized practice within the field of SE [128], our first step involves identifying primary studies to enhance our ability to address the formulated RQs effectively. Given that the DL revolution – triggered by AlexNet [42] in 2012 – has transformed AI research and became the catalyst for the ML/DL boom in all fields including SE, we chose a 12-year period of January 1st, 2012, to December 31st, 2023, to collect the literature related to XAI4SE. Next, we identified the top peer-reviewed and influential conference and journal venues in the domains of SE and Programming Languages (PL), as outlined in Table 2. In total, we included 15 conferences (ICSE, ASE, ESEC/FSE, ICSME, ICPC, RE, ESEM, ISSTA, MSR, SANER, ISSRE, COMPSAC, QRS, OOPSLA, PLDI) and six journals (TSE, TOSEM, EMSE, JSS, IST, ASEJ). We chose to include PL venues in our study given the frequent overlap of SE and PL research. Furthermore, we also include five top conferences (AAAI, ICML, ICLR, NeurIPS, IJCAI) that centered on machine learning (ML) and deep learning (DL) as these conferences might feature papers applying ML and DL techniques to SE tasks.

Apart from manually searching primary studies from top-tier venues, we also retrieved relevant papers from five popular digital libraries, including IEEE Xplore [1], ACM Digital Library [2], SpringerLink [3], Wiley [4], and Scopus [5], and two of

1 https://ieeexplore.ieee.org
2 https://dl.acm.org
3 https://link.springer.com
4 https://onlinelibrary.wiley.com
5 https://www.scopus.com

Table 2.
Publication Venues for Manual Search

Conferences:
  ICSE: ACM/IEEE International Conference on Software Engineering
  ASE: IEEE/ACM International Conference Automated Software Engineering
  ESEC/FSE: ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
  ICSME: IEEE International Conference on Software Maintenance and Evolution
  ICPC: IEEE International Conference on Program Comprehension
  RE: IEEE International Conference on Requirements Engineering
  ESEM: ACM/IEEE International Symposium on Empirical Software Engineering and Measurement
  ISSTA: ACM SIGSOFT International Symposium on Software Testing and Analysis
  MSR: IEEE Working Conference on Mining Software Repositories
  SANER: IEEE International Conference on Software Analysis, Evolution and Reengineering
  ISSRE: IEEE International Symposium on Software Reliability
  COMPSAC: IEEE International Computer Software and Applications Conference
  QRS: IEEE International Conference on Software Quality, Reliability and Security
  OOPSLA: ACM SIGPLAN International Conference on Object-oriented Programming, Systems, Languages, and Applications
  PLDI: ACM SIGPLAN Conference on Programming Language Design and Implementation
  AAAI: AAAI Conference on Artificial Intelligence
  ICML: International Conference on Machine Learning
  ICLR: International Conference on Learning Representations
  NeurIPS: Annual Conference on Neural Information Processing Systems
  IJCAI: International Joint Conference on Artificial Intelligence

Journals:
  TSE: IEEE Transactions on Software Engineering
  TOSEM: ACM Transactions on Software Engineering and Methodology
  EMSE: Empirical Software Engineering
  JSS: Journal of Systems and Software
  IST: Information and Software Technology
  ASEJ: Automated Software Engineering

Table 3. Search Keywords

Group 1: "Machine Learn*" OR "Deep Learning" OR "Neural Network?" OR "Reinforcement Learning"
Group 2: "Explainable" OR "Interpretable" OR "Explainability" OR "Interpretability"
Group 3: "Software Engineering" OR "Software Analytics" OR "Software Mainten*" OR "Software Evolution" OR "Software Test*" OR "Software Requirement?" OR "Software Develop*" OR "Project Management" OR "Software Design*" OR "Dependability" OR "Security" OR "Reliability"
Group 4: "Code Representation" OR "Code Generation" OR "Code Comment Generation" OR "Code Search" OR "Code Localization" OR "Code Completion" OR "Code Summarization" OR "Method Name Generation" OR "Bug" OR "Fault" OR "Vulnerability" OR "Defect" OR "Test Case" OR "Program Analysis" OR "Program Repair" OR "Clone Detection" OR "Code Smell" OR "SATD Detection" OR "Compile" OR "Code Review" OR "Code Classification" OR "Code Change" OR "Incident Detection" OR "Effort Cost Prediction" OR "GitHub" OR "Stack Overflow" OR "Developer"

(* is a wildcard used to match zero or more characters; ? is another wildcard used to match a single character.)

the most popular research citation engines, Web of Science [6] and Google Scholar [7], based on the search string (listed in Table 3) assembled from a group of topic-related keywords summarized from manually collected papers.

6 https://www.webofscience.com
7 https://scholar.google.com

Table 4. Summary of the Process of Study Search and Selection

Data Source                                                  #Studies
IEEE Xplore                                                  247
ACM Digital Library                                          2,184
SpringerLink                                                 18,706
Wiley                                                        1,940
Scopus                                                       29,346
Web of Science                                               211
Google Scholar                                               17,600
Merge                                                        70,325
Filtering studies less than 6 pages                          28,524
Removing duplicated studies                                  21,817
Excluding primary studies based on venue, title, and abstract 445
Excluding primary studies based on full text                 128
After Quality Assessment                                     61
After Forward & Backward Snowballing                         108
Final                                                        63

As shown in Table 4, we collected a total of 70,325 relevant studies with the automatic search from these seven electronic databases.

3.3 Study Selection

3.3.1 Inclusion and Exclusion Criteria. After paper collection, we performed a relevance assessment according to the following inclusion and exclusion criteria:

✓ The paper must be written in English.
✓ The paper must be a peer-reviewed full research paper published in a conference proceeding or a journal.
✓ The paper must have an accessible full text.
✓ The paper must adopt ML/DL techniques to address SE problems.
✗ The paper has less than 6 pages.
✗ Books, keynote records, non-published manuscripts, and grey literature are dropped.
✗ The paper is a literature review or survey.
✗ The paper is not a conference paper that has been extended as a journal paper.
✗ The paper uses SE approaches to contribute to ML/DL systems.
✗ The studies that do not apply XAI techniques on SE tasks are ruled out.
✗ The studies where explainability is discussed as an idea or part of the future work of a study are excluded.

In particular, by literature filtering and deduplication (exclusion criteria 1), the total number of included papers was reduced to 21,817. After the first two authors manually examined the venue, title, and abstracts of the papers, the total number of included papers declined substantially to 445. Any ambiguous papers would be forwarded to the fourth and 11th authors who were experienced in the fields of SE and XAI research to conduct a secondary review. In addition, books, keynote records, non-published manuscripts, grey literature, SLRs/surveys, and conference versions of extended papers were also discarded in this phase (exclusion criteria 2-4). The SE4AI papers [2], which used SE approaches to contribute to ML/DL systems were also not considered (exclusion criteria 5) because our SLR focused exclusively on the

Table 5. Checklist of Quality Assessment Criteria for Explainability Studies in AI4SE

QAC1: Is the impact of the proposed approach (or empirical/case study) on the AI4SE community clearly stated?
QAC2: Are the contributions of the study clearly claimed?
QAC3: Does the study provide a clear description of the workflow and implementation of the proposed approach?
QAC4: Are the experiment details, including datasets, baselines, and evaluation metrics, clearly described?
QAC5: Do the findings drawn from the experiments strongly substantiate the arguments presented in the study?

Table 6.
Extracted Data Items and Related Research Questions

RQ1: The SE task that an XAI4SE approach tries to solve
RQ1: The SE activity in which each SE task belongs
RQ1: Publication type of each primary study (i.e., new technique, empirical study, or case study)
RQ2: XAI technique employed by each study
RQ2: Explanation format
RQ3: The adopted baseline approaches
RQ3: Benchmark dataset name
RQ3: Presence/absence of replication package
RQ3: What metrics are used to evaluate the XAI techniques

explainability of AI4SE models. Furthermore, we ruled out studies that did not apply XAI techniques on SE tasks, or just discussed explainability as an idea or future work (exclusion criteria 6, 7). In the fourth phase, we reviewed the full texts of the papers (inclusion criteria 3), identifying 128 primary studies directly relevant to our research topic.

3.3.2 Quality Assessment. To prevent biases introduced by low-quality studies, we formulated five Quality Assessment Criteria (QAC), given in Table 5, to evaluate the 128 included studies. The quality assessment process was piloted by
the first and second authors, involving 30 randomly selected primary studies. We adopted pairwise inter-rater reliability with Cohen's Kappa statistic [10] to measure the consistency of the markings. For any case where they did not reach a consensus after open discussions, the fourth and 11th authors (domain experts experienced in SE and XAI) were consulted as tie-breakers. Within two iterations, the Cohen's Kappa coefficient was successfully raised from moderate (0.58) to almost perfect agreement (0.84). Then, an assessment was performed for the remaining 98 primary studies. After quality assessment, a final set of 61 high-quality papers was retained.

3.3.3 Forward and Backward Snowballing. To avoid omitting any possibly relevant work during our manual and automated search process, we also performed lightweight backward and forward snowballing [112], i.e., examining the research referenced in each of our selected primary studies, as well as the publications that subsequently referred to these studies, on the references and the citations of the 61 high-quality papers. As a supplement, we gathered 47 more papers, conducted the complete study selection process again, including filtering, deduplication, and quality assessment, and obtained two additional papers.

S. Cao et al.

Fig. 3. Publication statistics over years. (a) Distribution of publications per year. (b) Cumulative publication trend over years.

Fig. 4. Distribution of selected papers in different publication venues. (a) Conferences: ESEC/FSE 28.3%, ICSE 13%, ASE 10.9%, SANER 6.5%, ICSME 4.3%, MSR 4.3%, ICPC 4.3%, OOPSLA 4.3%, others 23.9%. (b) Journals: TSE 41.2%, TOSEM 35.3%, ASEJ 5.9%, IST 5.9%, JSS 5.9%, EMSE 5.9%.
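The Cohen's Kappa statistic used above is simple enough to sketch directly. Below is a minimal pure-Python version; the pilot labelings are hypothetical (the actual markings are not published here) and are chosen only so that the coefficient lands at the "moderate" 0.58 mentioned above.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's Kappa for two raters labeling the same items (nominal labels)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's label marginals.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n) for label in freq_a)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical quality-assessment markings (1 = criterion satisfied).
pilot_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
pilot_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(round(cohens_kappa(pilot_a, pilot_b), 2))  # 0.58 (moderate agreement)
```

Kappa corrects raw agreement (here 0.8) for the agreement the two raters would reach by chance given their label frequencies, which is why it is preferred over simple percent agreement for inter-rater reliability.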
3.4 Data Extraction and Analysis

Based on the collected 63 primary studies, we extracted the essential data items used to answer the three main RQs. In Table 6, we outline the detailed information extracted and gathered from the 63 primary studies. The column labeled "Data Item" enumerates the relevant data items extracted from each primary study, while the column "RQ" specifies the corresponding research question. In order to mitigate errors during data extraction, the first two authors worked together on extracting these data items from the primary studies. Then, the fifth author verified the extracted data results.

Fig. 3a presents the distribution of selected primary studies in each year. The first XAI4SE study we found was published in 2013 [84]. After that, there was a 5-year research gap, ranging from 2014 to 2018. The enthusiasm for investigating the explainability of AI4SE models has steadily risen since 2019, and reached its peak in 2023, comprising

Table 7. SE Task Taxonomy

SE Activity            SE Task                            #Papers   References
Software Development   Code Understanding                 6         [9, 75, 79, 99, 103, 117]
                       Program Synthesis                  3         [12, 21, 67]
                       Code Summarization                 3         [30, 39, 71]
                       Code Search                        1         [100]
Software Testing       Test Case-Related                  4         [1, 41, 91, 124]
                       Debugging                          3         [8, 31, 43]
                       Vulnerability Detection            11        [26, 36, 47, 52, 68, 90, 93, 129, 130, 135, 137]
                       Bug/Fault Localization             2         [48, 110]
Software Maintenance   Program Repair                     1         [58]
                       Malware/Anomaly Detection          4         [46, 51, 113, 132]
                       Bug/Defect Prediction              12        [17, 28, 29, 40, 44, 57, 63, 74, 76, 107, 116, 133]
                       OSS Sustainability Prediction      1         [114]
                       Incident Prediction                1         [131]
                       Root Cause Analysis                1         [15]
                       Code Review                        1         [118]
                       Code Smell Detection               3         [77, 102, 122]
                       Bug Report-Related                 2         [14, 84]
Software Management    Information Search on Q&A Sites    1         [50]
                       Configuration Extrapolation        1         [16]
                       Effort/Cost Estimation             1         [27]
                       Developer Recommendation           1         [115]

39.7% of the total publications. Fig. 3b illustrates the cumulative publication trend over years. It is observable that the slope of the curve fitting the distribution experiences a significant increase between 2019 and 2023. This pronounced
upward trend indicates a burgeoning research interest in the field of XAI4SE.

We also analyzed the publication trend of primary studies in selected conferences and journal venues, respectively. As shown in Fig. 4a, ESEC/FSE stands out as the predominant conference venue favored by XAI4SE studies, with a contribution of 28.3% of the total. Other venues making noteworthy contributions include ICSE (13%), ASE (10.9%), and SANER (6.5%). Fig. 4b shows the distribution of primary papers published in different journal venues. It can be seen that 76.5% of relevant papers were published in TSE and TOSEM, which indicates a booming trend of XAI4SE research in top-tier SE journals in the past few years.

4 RQ1: WHAT TYPES OF AI4SE STUDIES HAVE BEEN EXPLORED FOR EXPLAINABILITY?

This RQ aims to investigate the application scenarios of XAI techniques in helping improve the explainability of various AI4SE models. Out of the 63 primary studies we analyzed for this SLR, we identified 21 separate SE tasks where an XAI technique had been applied. These tasks span across the four main phases of the Software Development Life Cycle (SDLC) [82], including software development, software testing, software maintenance, and software management. The full taxonomy is displayed in Table 7, which associates each relevant primary study with the SE task & activity it belongs to.
Fig. 5. Distribution of XAI4SE studies across different SE activities and contribution types. (a) SE activities: Software Maintenance 41.3%, Software Testing 31.7%, Software Development 20.6%, Software Management 6.3%. (b) Main contributions: New Technique 85.7%, Empirical Study 12.7%, Case Study 1.6%.

4.1 Exploratory Data Analysis

Fig. 5a describes the distribution of the 63 primary studies across four SE activities. It is noteworthy that the highest number of studies is observed in software maintenance, comprising 41.3% of the total research volume. Following that, 31.7% of studies were dedicated to software testing, and 20.6% of studies focused on solving SE tasks in software development. This distribution underscores the vital focus on development and maintenance tasks. By contrast, software management only accounts for a marginal proportion (6.3%) of the research share, suggesting relatively limited exploration in this area. To further identify the main contribution of each primary study, we also investigated the contribution statements in each paper, and then grouped them into three categories, i.e., New Technique, Empirical Study, and Case Study. As shown in Fig. 5b, 85.7% of the primary studies concentrated on proposing novel explainable solutions to improve the transparency of black-box AI models in various SE tasks. Another 12.7% of the relevant research focused on conducting empirical studies to investigate the advantages and disadvantages of state-of-the-art XAI4SE approaches from different perspectives, e.g., analyzing the performance differences between off-the-shelf XAI tools over certain SE tasks/models such as defect prediction [40]. The remaining one primary study (1.6%) performed a case study to explain the performance of ChatGPT [69] under distinct prompt engineering approaches [79].

Fig. 6 displays a visual breakdown of the different SE tasks we found within the 63 primary studies across a 12-year period. Early SE tasks to explore explainability were those of Bug/Defect Prediction, Program Repair, Code Smell Detection, and Bug Report-Related Automation. It was not until 2021 that the diversity of target SE tasks experienced a significant increase, encompassing Code Understanding, Debugging, Malware/Anomaly Detection, and so on. It is noteworthy that there are three SE tasks that have consistently maintained high activity levels over the years: Bug/Defect Prediction, Vulnerability Detection, and Code Understanding. The most popular of the three is Bug/Defect Prediction, contributing ≈19% of the papers we collected. We suspect that the main reason contributing to the popularity of XAI in bug/defect prediction is that expert knowledge-based feature engineering is easier to understand than complex high-dimensional features. Defect prediction models are often built upon a set of hand-crafted software metrics. Thus, developers can easily obtain explainable results through leveraging intrinsically interpretable models (e.g., decision tree) or off-the-shelf post-hoc explainers (e.g., LIME [78]).

Fig. 6. Papers published per year according to SE task.

4.2 How Were XAI Techniques Used in Specific SE Tasks?

In this subsection, we delve into the progress of various SE tasks that applied XAI techniques. By investigating this RQ, we aim to obtain a clear understanding of what has been done and what else can be done in advancing practices for explainable AI4SE solutions.

4.2.1 SE Tasks in Software Development.
There are wide-ranging applications of XAI techniques in software development, encompassing code understanding, program synthesis, code summarization, and code search.

Code Understanding. Code understanding refers to the process of comprehending and analyzing source code deeply. Within the context of data-driven SE research, code understanding aims to seek an effective way to map source code into a high-dimensional semantic space, thereby supporting a variety of code-centric downstream tasks, such as Variable Misuse Detection [75], Method Name Prediction [103], and Vulnerability Detection [117]. Inspired by the capability of complex AI models, deep neural networks in particular, in learning rich representations of raw data, a series of code models are trained on labeled (e.g., CodeSearchNet [37]) or unlabeled code corpora (e.g., CodeXGlue [53]). This training process produces code embeddings with rich contexts and semantics. Yang et al. [117] proposed the Graph Tensor Convolution Neural Network (GTCN), a novel code representation learning model which is capable of comprehensively capturing the distance information of code sequences and structural code semantics, to generate accurate code embeddings. GTCN was self-explainable because the tensor-based model reduced model complexity, which was beneficial for capturing the data features from the simpler model space. Wan et al. [99] proposed three types of structural analysis, including attention analysis, probing on the word embedding, and syntax tree induction, to explore why pre-trained language models work and what they indeed capture in SE tasks.

Code Summarization. Code summarization, also known as code comment generation, is a code-to-text task that attempts to automatically generate textual descriptions directly from source code. A promising solution is Neural Comment Generation (NCG), which employs sequence-to-sequence neural networks such as Neural Machine Translation (NMT) [7, 94] to auto-summarize target source code into short natural language text. Despite their effectiveness, state-of-the-art NCG approaches are still far from being usable in practice because it is hard for developers to understand and trust such end-to-end summarization without any explanation. To this end, Geng et al. [30] designed two types of explanation strategies applicable to different application scenarios to identify the corresponding code parts used to generate the summarization. The black-box explanation strategy aimed to localize the code segments that are sensitive to program mutations, while the white-box explanation strategy focused on inspecting the attention score of each code token. Jiang et al. [39] proposed CCLink, a model-independent framework which aimed to find the code segments that contribute to the generation of key information in the auto-generated comments. CCLink generated a series of code mutants from key phrases in the auto-generated comment, and tailored data mining algorithms to construct the links between code and its auto-generated comment. This in turn allowed CCLink to visualize the links as comment explanations to developers.
Code Search. Code search is the text-to-code task of retrieving source code that meets users' natural language queries from a large codebase. Deep learning-based approaches, which map queries/code to feature embeddings and measure their similarity, have become the leading paradigm for code search. However, due to the lack of relevant explanatory information, it is time-consuming and labor-intensive for developers to understand the ranked list of candidate code snippets. Wang et al. [100] proposed an explainable code search tool, namely XCoS, to bridge the knowledge gap between the query and candidate code snippets. Based on the background knowledge graph extracted from Wikidata and Wikipedia, XCoS provided conceptual association paths, relevant descriptions, and additional suggestions as explanations. Furthermore, it designed an interactive User Interface (UI) which organized explanatory information in the form of trees to help developers intuitively understand the rationale behind the search results.

4.2.2 SE Tasks in Software Testing. The applications of XAI in software testing encompass tasks such as test case-related tasks, debugging, vulnerability detection, and bug/fault localization.
Test Case-Related. The design and implementation of test cases occupy most of the software testing cycle because their quality directly affects the quality of software testing. In recent years, the combination of intelligent technology and test scenarios, such as test case generation [124], test case recommendation [41], and test case diversity analysis [1], has received widespread attention. Yu et al. [124] improved the DL-based automated assertion generation approach by integrating the Information Retrieval (IR)-based assertion retrieval technique and the retrieved-assertion adaptation technique. The assertion retrieval using IR also yields the corresponding focal-test. In this context, focal-test refers to a pair consisting of a test method without an assertion and its focal method. Developers can then use this focal-test as a valuable reference during assertion inspection. To make full use of historical test cases to improve test efficiency, Ke et al. [41] proposed an explainable test case recommendation approach based on the knowledge graph. Once a historical test case is predicted to be prone to revealing defects by the classifier, the corresponding knowledge chain in the software test knowledge graph will be returned as auxiliary information to help testers understand the reason for the recommendation.
Debugging. ML/DL-based models for SE tasks, similar to traditional software, suffer from errors that result in unexpected behavior or incorrect functionality. Due to the black-box nature of complex AI models, the debugging process designed for traditional software, which involves reviewing the code, tracing abnormal execution flows, and isolating the root cause of the problem, may not be applicable to ML/DL-based SE models. Motivated by the idea that diagnostic features could be potentially useful, Cito et al. [8] presented a model-agnostic rule induction technique to explain when a code model performs poorly. The misprediction diagnosis model aimed to learn a set of rules that collectively cover a large portion of the model's mispredictions, each of which correlates strongly with model mispredictions. The evaluation results showed that these learned rules are both accurate and simple.

Vulnerability Detection. Software vulnerabilities, sometimes called security bugs, are weaknesses in an information system, security procedures, internal controls, or implementations that could be exploited by a threat actor for a variety of malicious ends. Such weaknesses are unavoidable during the design and implementation of software, so detecting vulnerabilities in the early stages of the software lifecycle is critically important. Benefiting from the great success of DL in code-centric software engineering tasks, an increasing number of learning-based vulnerability detection approaches have been proposed. To reveal the decision logic behind the binary detection results (vulnerable or not), most efforts focus on searching for important code tokens that positively contribute to the model's prediction. For example, Li et al. [47] leveraged GNNExplainer [123] to simplify the target instance to a minimal PDG sub-graph consisting of a set of crucial statements along with program dependencies while retaining the initial model prediction. Additionally, several approaches turn to providing explanatory descriptions to help security analysts understand the key aspects of vulnerabilities, including vulnerability types [26], root causes [90], similar vulnerability reports [68], and so on. For instance, Zhou et al. [135] proposed a novel contrastive learning framework based on a combination of unsupervised and supervised data augmentation strategies to train a function change encoder, and further fine-tuned three downstream tasks to identify not only silent vulnerability fixes, but also the corresponding vulnerability types and exploitability ratings.

Bug/Fault Localization. Bug localization refers to the process of pinpointing the exact location in the codebase where a bug originates based on bug reports or issue descriptions provided by users or testers. From the perspective of enhancing the effectiveness of bug localization, Widyasari et al. [110] formulated the localization task as a binary classification problem, i.e., predicting whether a test case will fail or pass. They applied TreeSHAP [54], a local model-agnostic explanation technique, to identify which parts of code are important in each failed test case. From the perspective of providing explanations for localized bugs, Li et al. [48] respectively designed global and local explanation strategies to explain the model predictions. The global explanation strategy leveraged decision trees as surrogate models to infer the decision paths leading to only faulty or normal failure units, while the local explanation strategy compared the incoming failure with each historical failure to explain how the model diagnosed a given failure.
4.2.3 SE Tasks in Software Maintenance. Software maintenance is the process of changing, modifying, and updating software to keep up with customer needs. The applications of XAI in software maintenance are diverse, including malware/anomaly detection, bug/defect prediction, code smell detection, and bug report-related automation.

Malware/Anomaly Detection. As the emerging collaborative software development modes of open source have become increasingly popular, the overall security risk trend, such as malicious applications (malware) and commits, in complex and intertwined software supply relationships increases significantly. Wu et al. [113] proposed an explainable ML-based approach, named XMal, to not only predict whether an app is malware, but also unveil the rationale behind its prediction. For this purpose, XMal built a semantic database based on malware key features and functional descriptions in the Android developer documentation (https://developer.android.google.cn/docs), and leveraged the mapping relation between the malicious behaviors and their corresponding semantics to generate descriptions for malware.

Defect Prediction. In the past few years, defect prediction has been the most extensive and active research topic in software maintenance. According to different granularities, these studies can be further classified into two categories: file-level and commit-level (also known as Just-In-Time (JIT)) defect prediction.

File-level defect prediction techniques often employ a set of hand-crafted feature metrics extracted from a software product to construct the classification model. These software metrics are easy for researchers and participants to understand and use, and give a vivid description of the software's running state. Although non-linear algorithms like neural networks have achieved promising prediction performance compared to traditional tree- or rule-based models, it is hard for them to provide both accurate and explainable prediction results. Yang et al. [116] proposed a weighted association rule based on the contribution degree of features to optimize the process of rule generation, ranking, pruning, and prediction. In terms of explainability, such a weighted ensemble model incorporating multiple rules helped to improve the quality and multiplicity of the rule set. In addition, some studies directly adopt source code (e.g., code tokens) as meaningful semantic units for defect prediction. Wattanakriengkrai et al. [107] formalized defect explanation as a line-level prediction task, and used the model-agnostic technique LIME [78] to identify risky code tokens in predicted defective files. A comparative evaluation across 32 releases of nine software systems showed that leveraging a model-agnostic technique can effectively identify and rank defective lines that contain common defects while requiring a smaller amount of inspection effort and a manageable computation cost.

By contrast, the JIT defect prediction task aims to help developers prioritize their limited energy and resources on the most risky commits that are most likely to introduce defects. Zheng et al. [133] trained a random forest classifier based on six open-sourced projects as a JIT defect prediction model, and adopted LIME to identify crucial features. The evaluation experiments showed that the classifier trained on the five most important features of each project could achieve 96% of the original prediction accuracy. Similarly, Pornprasit et al. [74] leveraged crossover and mutation operations to randomly generate synthetic neighbors around the to-be-explained instance based on the actual characteristics of defect-introducing commits, and built a local rule-based regression model to explain the interactions between defect-introducing features.

Code Smell Detection. Due to delivery date pressure or developer oversight, code smells may be introduced during software development and evolution, thereby reducing the understandability and maintainability of source code. One of the common code smells is Technical Debt (TD), which reflects the trade-offs software engineers make between short-term benefits and long-term stability. TD is accumulated when developers make wrong or unhelpful technical decisions, either intentionally or unintentionally, during software development. Despite the effectiveness of DL-based code smell detection approaches in automatically building the complex mapping between code features and predictions, they cannot provide the rationale for the prediction results. To understand the basis of the CNN-based detection model's decisions, Ren et al. [77] proposed a backtracking-based approach, which exploited the computational structure of CNNs to derive key phrases and patterns from input comments.

Bug Report-Related. A bug report is one of the crucial SE documents for software maintenance. High-quality bug reports can effectively reduce the cost of fixing buggy programs. For example, instead of using other black-box models, Shihab et al. [84] employed an explainable ML model (i.e., a decision tree) to understand which attributes affected whether or not a bug would be re-opened. To analyze the impact of test-related factors on DL-based bug report prediction models, Ding et al. [14] applied SHAP to compute the importance of test smell features.
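The Shapley value that SHAP approximates can be computed exactly for toy-sized inputs by enumerating every feature coalition and averaging each feature's marginal contribution. A minimal sketch, assuming a hypothetical linear "risk" scorer over three commit features and an all-zeros baseline (real tooling such as TreeSHAP avoids this exponential enumeration):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all coalitions (exponential; toy-sized only)."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for coalition in combinations(others, size):
                s = set(coalition)
                # Coalition members (plus optionally feature i) take their real
                # values; every other feature stays at the baseline.
                with_i = [x[j] if j in s or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in s else baseline[j] for j in range(n)]
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical defect-risk scorer (stand-in for a trained model).
risk = lambda v: 2 * v[0] + 3 * v[1] - v[2]
phi = shapley_values(risk, x=[1, 1, 1], baseline=[0, 0, 0])
print([round(p, 2) for p in phi])  # [2.0, 3.0, -1.0]
# Efficiency property: attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (risk([1, 1, 1]) - risk([0, 0, 0]))) < 1e-9
```

For the linear scorer the attributions reduce to each weight times the feature change, which makes the result easy to check by hand; the same routine works for arbitrary non-linear `f`, only the cost grows as 2^n.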
Others. Apart from the above tasks, some studies leveraged XAI techniques on certain special SE tasks. For example, the model-agnostic explanation technique LIME [78] was used to explain the prediction results of XGBoost-based SE tasks such as OSS sustainability prediction [114] and incident prediction [131]. In order to mine the formatting style of each analyzed Git repository, Markovtsev et al. [58] proposed a decision tree-based explainable model to express the found format patterns with compact human-readable rules.

4.2.4 SE Tasks in Software Management. There are only four papers involving the utilization of XAI in software management, covering the following main SE tasks, i.e., information search on Q&A sites, configuration extrapolation, effort/cost estimation, and developer recommendation, leaving ample space for further exploration.

Information Search on Q&A Sites. As one of the most popular and widely-used technical Q&A sites in SE communities, Stack Overflow (SO) plays an increasingly important role in software development. When facing software programming problems such as implementing specific functionalities or handling errors in code, developers often turn to SO for help. To provide both accurate and explainable retrieval results, Liu et al. [50] proposed KGXQR, a knowledge graph-based question retrieval approach for programming tasks. KGXQR constructed a software development-related concept knowledge graph to find the desired questions, and further generated explanations based on the association paths between the concepts involved in the query and the SO questions to bridge the knowledge gap.

Configuration Extrapolation. Configuration management is a process in systems engineering for establishing consistency of a product's attributes throughout its life. The increasing configurability of modern software also puts a burden on users to tune these configurations for their target hardware and workloads. To configure software applications efficiently, ML models have been applied to model the complex relationships between configuration parameters and performance. To understand the underlying factors that cause low performance, Ding et al. [16] used an inherently interpretable linear regression model [23] to find the configuration with the best predicted performance. They provided interactive visualization charts (e.g., radar charts, bar charts) to explain the relationships between the application-level configuration parameters and the ultimate performance.

Effort/Cost Estimation. Effort/cost estimation is the process of predicting how much effort is required to complete a particular task or project. It is a crucial aspect of project management, playing a significant role in setting realistic timelines and allocating resources efficiently. A representative effort estimation activity is story point estimation, which is a regression task to measure the overall effort required to fully implement a product backlog item. Fu et al. [27] presented GPT2SP, a Transformer-based approach that captures the relationships among words while considering the context surrounding a given word and its position in the sequence. It is designed to be transferable to other projects while remaining explainable. They leveraged two concepts of XAI (i.e., feature-based explanations and example-based explanations) to 1) help practitioners better understand what are the most important words that contributed to the story point estimation of the given issue; and 2) search for the best supporting examples that had the same word and story point from the same project.
Developer Recommendation. Collaboration efficiency is of paramount importance for software development. Although a lot of effort in recommending suitable developers has been made in both research and practice in recent years, such approaches often suffer from low performance due to the difficulty of learning a developer's expertise, willingness, and relevance, as well as the sparsity of explicit developer-task interactions. Xie et al. [115] proposed a multi-relationship embedded approach named DevRec, in which they explicitly encoded the collaboration relationship, interaction relationship, and similarity relationship into the representation learning of developers and tasks. DevRec also visualized the high-order connectivity and attentive embedding propagation in the recommendation subgraphs to explain why a task was recommended (or assigned) to the developer.

✍▶ RQ1-Summary ◀
• We summarized a total of 63 primary studies into 21 SE tasks, which cover four major activities within the SDLC, including software development, software testing, software maintenance, and software management. Subsequently, we delved into the progress of the various SE tasks that applied XAI techniques.
• In the span of the 12-year period, research on the explainability of AI4SE models has experienced a notable shift in preference. Early XAI4SE studies predominantly concentrated on bug/defect prediction, program repair, etc. It was not until 2021 that the diversity of target SE tasks witnessed a significant increase, including code understanding, debugging, and malware/anomaly detection.
• We found that the explainability of learning-based vulnerability detection and defect prediction models is the most widely explored (with 11 and 12 papers, respectively). The least mentioned SE activity, software management, was involved in only four papers.

5 RQ2: HOW ARE XAI TECHNIQUES USED TO SUPPORT SE TASKS?

In Section 4, we analyzed which AI-assisted SE tasks have been explored for explainability to date. In this part, we turn our attention to two key components of XAI: explanation approaches and explanation formats. Establishing the association between explanation approaches and target SE tasks helps to empirically determine whether certain XAI techniques are particularly suitable for specific SE tasks. Meanwhile, the explanation formats adopted across different SE tasks reveal key aspects that the stakeholders seek to understand from the decision of a given black-box model. Specifically, we aimed to create a taxonomy of XAI techniques for AI4SE studies and determine if there was a correlation between the explanation approaches and explanation formats.
5.1 RQ2a: What types of XAI techniques are employed to generate explanations?

We first discuss the various explanation approaches employed by existing XAI4SE studies. One classical practice is building taxonomies of the XAI techniques used in our surveyed literature. However, we note that the XAI community lacks a formal consensus on the taxonomy, as the landscape of explainability is too broad, involving substantial theories related to philosophy, social science, and cognitive science [86]. In addition, these taxonomies are mostly developed for general purposes or specific downstream applications such as healthcare [59] and finance [121], and may not be applicable to the SE field. As a countermeasure, we summarize the XAI techniques used in the primary studies, and propose a novel taxonomy applicable to the field of SE. In particular, according to their methodologies of implementation, most XAI techniques studied in this review can be categorized into five groups: Interpretable Model (IM) (≈25%), Feature Attribution (FA) (≈33%), Attention Mechanism (AM) (≈13%), Domain Knowledge (DK) (≈21%), and Example Subset (ES) (≈6%). Fig. 7 illustrates the various types of XAI techniques that we extracted from our selected studies.
Interpretable Model (IM). The easiest way to achieve explainability is to construct interpretable models, such as Decision Tree (DT), Logistic Regression (LR), Rule-based Models (RM), and Naive Bayes (NB). These models have built-in explainability by nature. For instance, in DT and RM, the association between features and target is modelled linearly or monotonically, i.e., an increase in the feature value either always leads to an increase or always to a decrease in the target outcome. Such linearity or monotonicity is useful for the interpretation of a model because it makes it easier to understand a relationship. On the other hand, NB and Bayesian networks formulate acyclic graphs to provide conditional probabilistic outcomes presenting the relationships of variables. As the most representative model, a decision tree predicts the value of a target variable by learning simple if-then-else rules inferred from the data features.

Fig. 7. XAI techniques taxonomy & distribution.

The tree structure is ideal for capturing interactions between features in the data, and also has a natural visualization of a decision making process. Taking Fig. 8 as an example, each node in a decision tree may refer to an explanation, e.g., when the time_days variable (i.e., the number of days to fix the bug) is greater than 13.9 and the last status is Resolved Fixed, then the bug will be re-opened [84]. In addition, interpretable models can serve as post-hoc surrogates to explain individual predictions of black-box models. The goal behind this insight is to leverage a relatively simpler and transparent model to approximate the predictions of the complicated model as well as possible, and at the same time, provide explainability. Surrogate models have shown effectiveness in explaining AI4SE approaches built upon more complex ML/DL models, e.g., deep neural networks, SVMs, or ensemble models. Examples include vulnerability detection [137], defect prediction [74], and program repair [58].

Feature Attribution (FA). Feature attribution-based explanations aim to measure the relevance (i.e., attribution score) of each input feature to a model's prediction by ranking the explanatory score. The score could range from a positive value that shows its contribution to the model's prediction, to a zero that would mean the feature has no contribution, to a negative value which means that removing that feature would increase the probability of the predicted class. They can be broadly divided into (❶) perturbation-based approaches that make feature perturbations while analyzing prediction change, e.g., LIME [78], GNNExplainer [123], and SHAP [55], (❷) gradient-based approaches that propagate importance signals backward through all neurons of the network, e.g., Integrated Gradients (IG) [92] and Grad-CAM [83], and (❸)
search ta u m e Export Research strings la v lp m 4477 ppaappeerrss FFoorrwwaarrdd BBaacckkwwaarrdd time_days Question Identify Export E o C relevant venues Manual Search Study selection Export 3355 ppaappeerrss 1122 ppaappeerrss >13.9 <=13.9 last_state num_fix_files cc11 oo77 nn SS ffeeee rrll eeee nncc ccttee eedd ss 88 jjooSS uuee rrll nnee aacctt llsseedd 7700,,332255 AAdddd 44 Total 65 papers ppaappeerrss ppaappeerrss As Fs ii xg en ded Re Fs io xl ev ded V Fe ir xif ei ded >4 <=4 AA cI4 tivS iE ty 8866 ppaappeerrss 55 ppaappeerrss Reopened Reopened time_days Reopened num_fix_files Study Selection >21.3 <=21.3 >2 <=2 Filter by page Remove Check the venue, Scan full-text to select Conduct quality limit (pages > 6) duplicate studies title, and abstract primary studies assessment time_days Reopened Not Reopened time_days >65.1 <=65.1 >7.25 <=7.25 28,524 papers 21,817 papers 445 papers 128 papers 61 papers Reopened Not Reopened Not Reopened Fig.8. Sampledecisiontreeusedforexplainablere-openedbugprediction. decomposition-basedapproachesthatbreakdowntherelevancescoreintolinearcontributionsfromtheinput,e.g. Layer-wiseRelevancePropagation(LRP)[4],DeepTaylorDecomposition(DTD)[62],andDeepLIFT[85]. Amongthem,featureperturbationsareperhapsthemostpopularapproachesinXAI4SEstudies.Theseperturbations Casual Graph Tensor Convolution canbeperformedinhigh-dimensionallatentspacebymasking[47,130]ormodifying[74,114]certainfeaturesof Structural Inference Neural Network Analysis Specification inputinstances.Thebasicideaistoremovetheminimumsetofinputstochangethemodelprediction.Anearly Mining Neural Machine XAI Approaches Translation representativeperturbation-basedapproachisLocalInterpretableModel-agnosticExplanations(LIME)[78]. 
Specifically, LIME first perturbs the to-be-explained instance in the high-dimensional feature space to randomly generate synthetic neighbors. Then, based on their prediction results derived from the global black-box model, LIME trains a local surrogate model (e.g., a decision tree, linear regression) to produce an explanation. LIME has been successfully applied to various SE tasks, such as defect prediction [44, 107, 133], OSS sustainability prediction [114], and test case generation [1]. SHapley Additive exPlanations (SHAP) [55], which stems from game theory, is another popular XAI technique based on perturbation-based feature attribution. It assigns each feature a fair value, otherwise known as the Shapley value, to measure its contribution to the model's output. Features with positive SHAP values positively impact the prediction, and vice versa. Similar to LIME, SHAP is also model-agnostic, thus it can be used to explain any ML model. For example, Widyasari et al. [110] applied a tree ensemble model-specific variant, TreeSHAP [54], to identify which code statements are important in each failed test case.

Attention Mechanism (AM). Attention mechanism is often viewed as a way to attend to the most relevant part of inputs. Intuitively, attention may capture meaningful correlations between intermediate states of input that can explain the model's predictions. The most significant advantage of attention mechanism lies in that it can be easily integrated into any deep neural network (e.g., CNN [71], Bi-LSTM [102], GRU [30], GNN [46], Transformer [26]), and improve the performance of the original task as well as providing explainability. Many approaches try to explain models based on the attention weights or by analyzing the knowledge encoded in the attention. For example, Li et al. [46] employed an attention-based GNN model, named GAT [97], to weigh the importance of neighboring code graph nodes in runtime exception detection. These computed weights were then used to visualize the importance of various edges to the detected runtime exception. Similarly, Wang et al. [102] visualized the feature weight of each word with the help of single- and multi-head attention mechanisms to offer word- and phrase-level explainability for why a comment is classified as SATD.
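Reading attention as explanation boils down to averaging the softmax(QKᵀ/√d) maps over heads and query positions, then ranking tokens by the attention they receive. The projections below are random placeholders, not weights from any of the cited models.

```python
import numpy as np

# Attention-as-explanation: average the attention maps over heads and
# query positions; each token's received attention acts as its importance.
rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_importance(Q, K):
    # Q, K: (heads, tokens, d) query/key matrices of one layer.
    d = Q.shape[-1]
    attn = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d))  # (heads, n, n)
    # Mean over heads, then over query positions: total attention received.
    return attn.mean(axis=0).mean(axis=0)

heads, n, d = 4, 6, 8
Q = rng.normal(size=(heads, n, d))
K = rng.normal(size=(heads, n, d))
imp = attention_importance(Q, K)
print(np.round(imp, 3))  # one importance score per token; scores sum to 1
```

In a real model the scores would come from a trained layer's Q/K projections; heatmaps like Fig. 11(b) simply visualize the full `attn` matrix before averaging.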
A Systematic Literature Review on Explainability for ML/DL-based SE Research
Domain Knowledge (DK). So far, we have encountered several XAI techniques to explain black-box models. However, there are some limitations regarding them in practice. On the one hand, due to the weak capability of processing complex data, intrinsically interpretable models are prone to getting trapped into the trade-off dilemma between performance and explainability, i.e., sacrificing predictive accuracy in exchange for explainability. On the other hand, important regions highlighted by feature attribution or attention mechanism are not necessarily user-friendly in terms of explainability. For example, the importance of a single token in a code snippet may not convey a sufficiently meaningful explanation. To address these challenges, some works take special steps to incorporate additional knowledge from experts into their explanations. Concretely, we identify two incipient trends in the application of domain knowledge: (❶) designing one or more auxiliary tasks related to the main task to provide additional insights regarding the input data, and (❷) the use of an external knowledge database curated by experts. For example, to explain why a program/commit was predicted as vulnerable, recent works proposed to predict vulnerability types [26], identify key aspects [90], generate vulnerability descriptions [57, 129], search for similar issues [68], etc. To assist developers in understanding the return results of neural code search tools, XCoS [100] constructed a background knowledge graph, and regarded it as an external knowledge base to provide conceptual association paths, relevant descriptions, and additional suggestions, as explanations.

Example Subset (ES). Apart from the above XAI techniques, another form of explanation exists by explaining model behavior from the perspective of sample subsets, which aims to offer insights into the factors that influenced the model's decision from individual instances or scenarios where the model's prediction changes. One representative work in this area is the counterfactual explanation. It treats the input as the cause of the prediction under the Granger causality, i.e., given an observed input 𝑥 and a perturbed 𝑥ˆ with certain features changed, the prediction 𝑦 would change to 𝑦ˆ.
Counterfactual explanation reveals what would have happened based on certain observed input changes. They are often generated to meet certain needs such as misprediction diagnosis by selecting specific counterfactuals. Such examples can be generated by humans or perturbation techniques like word replacement. Cito et al. [9] proposed a Masked Language Modeling (MLM)-based perturbation algorithm, which replaces each token with a blank "mask" and uses MLM to come up with a plausible replacement for the original token, for generating counterfactual explanations. Counterfactuals-based explanation not only reveals which region of the input program is used by the code model for prediction, but also conveys critical information on how to modify the code so that the model will change its prediction. In addition, some attempts borrowed certain classical techniques, such as program mutation [88] and Delta Debugging (DD) [127], from the field of software testing to search for important code snippets positively contributing to the model predictions. Rabin et al. [75] leveraged DD to simplify a piece of code into the minimal fragment without reversing its original prediction label. The reduction process continues until the input data is either fully reduced (to its minimal components, depending on the task) or any further reduction would corrupt the prediction. By integrating with four state-of-the-art neural architectures across two popular tasks (code summarization and variable misuse detection), they showed that model inputs could often be reduced tremendously (even by 90% or more), and these reductions in no way required the inputs to remain natural, or, depending on the task, in any way valid. Suneja et al. [93] similarly implemented a prediction-preserving input minimization (P2IM) approach to quantify how well the black-box AI model learned the correct vulnerability-related signals.

5.1.1 Exploratory Data Analysis. Fig. 9 delineates the application status of different types of explanation approaches in 21 previously summarized SE tasks. Overall, feature attribution is the most prevalent explainability technique in XAI4SE research, followed by interpretable model, domain knowledge, and attention mechanism. Given the flexibility
of feature attribution in model adaptation, the popularity of such types of XAI techniques is not surprising. Meanwhile, the prevalence of interpretable models is also expected, as they are inherently more understandable to humans and can serve as surrogate models to explain individual predictions of complex deep neural networks, which are more common in AI4SE approaches. Due to the diversity of SE data, domain knowledge-based explanations are also popular. The explanations are intended to give control back to the user by helping them understand the model and offering additional insights into the input data.

Fig. 9. XAI techniques by the task. (The chart, tallying the use of IM, FA, AM, DK, ES, and other techniques across the 21 SE tasks, is omitted here.)
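The DD-style reduction used by Rabin et al. [75] and P2IM [93] can be sketched as a ddmin-like loop: try removing chunks of tokens, keep any removal that preserves the model's label, and halve the chunk size when stuck. `predict` is a toy stand-in, not a model from those studies.

```python
# Delta-Debugging-style input minimization (ddmin sketch): repeatedly try
# removing chunks of tokens; keep a removal if the prediction is preserved.

def predict(tokens):
    # Hypothetical detector: flags code containing both "strcpy" and "buf".
    return "vulnerable" if "strcpy" in tokens and "buf" in tokens else "safe"

def ddmin(tokens, predict, target):
    chunk = max(1, len(tokens) // 2)
    while chunk >= 1:
        i, reduced = 0, False
        while i < len(tokens):
            candidate = tokens[:i] + tokens[i + chunk:]
            if candidate and predict(candidate) == target:
                tokens, reduced = candidate, True  # removal kept the label
            else:
                i += chunk
        if not reduced:
            chunk //= 2  # refine granularity, as in classic ddmin
    return tokens

code = ["void", "f", "(", ")", "{", "strcpy", "(", "buf", ",", "s", ")", ";", "}"]
print(ddmin(code, predict, predict(code)))  # -> ['strcpy', 'buf']
```

On this toy example the 13-token snippet shrinks to the two tokens the stand-in model keys on, mirroring the 90%+ reductions reported above, with no requirement that the result stay syntactically valid.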
In addition to the prevalence, selecting the most suitable explanation approach for a specific task is also a crucial aspect that needs to be carefully considered. We extracted the selection rationales for XAI techniques from selected primary studies, and classified them into the following three categories:

Task Fitness. Given the inherent differences in feature engineering and functional requirements among various AI4SE workflows, some studies selected XAI techniques based on their characteristics and fitness with target SE tasks. For instance, the explanations for most of the metric-based AI4SE pipelines (e.g., defect prediction [74], OSS sustainability prediction [114], and incident prediction [131]) were derived by using interpretable models, such as decision tree or RuleFit [25], thanks to their strong ability to analyze and extract human-understandable rules from hand-crafted feature metrics.

Model Compatibility. Since certain crafted XAI techniques have strict application scenarios, e.g., requiring the internal architecture or parameter information of to-be-explained models, some researchers determined the most suitable XAI techniques from those compatible with their employed models [36, 47, 130]. For instance, due to the great performance of deconvolution [126] in providing visual explanations for CNN-based applications, Ren et al. [77] leveraged a targeted backtracking technique to extract, as explanations, the prominent phrases from the input comment that contribute most to the decision whether the comment is a SATD or not.
Stakeholder Preference. In addition to the task fitness and model compatibility, stakeholder preference is also one of the important factors affecting the selection of XAI techniques. Generally speaking, model users aim to utilize XAI techniques to better understand the output of a deployed model and make an informed decision, while model designers focus on using XAI techniques during model training and validation to verify that the model works as intended. In addition, the explanations would be generated for distinct purposes at different levels of expertise even when considering a single stakeholder. For instance, to assist software developers in understanding a defective commit, some approaches simply highlight the lines of code that the model thinks are defective [107], while others extract human-understandable rules [74], or even natural language descriptions [57] from the defective code that can serve as actionable and reusable patterns or knowledge.

✍▶ RQ2𝑎 - Summary ◀
• Our exploratory data analysis revealed five commonly used XAI techniques, including Interpretable Model (IM) (≈25%), Feature Attribution (FA) (≈33%), Attention Mechanism (AM) (≈13%), Domain Knowledge (DK) (≈21%), and Example Subset (ES) (≈6%), that have been used in research on XAI4SE.
• Compared with other XAI techniques, feature attribution-based approaches were by far the most popular option in our surveyed studies.
• We summarized the selection strategies for XAI techniques in SE tasks into three main categories: Task Fitness, Model Compatibility, and Stakeholder Preference.

5.2 RQ2𝑏: What format of explanation is provided for various SE tasks?

Next, in order to gain insights from various explanation formats outputted by XAI4SE approaches, we provide a taxonomy, along with descriptive examples and statistics, as to why some types of explanation formats were used for particular SE tasks. In particular, these explanations can be broadly classified into five major types: Numeric, Text, Visualization, Source Code, and Rule.
Numeric. Numerical explanations, which are capable of conveying information in a compact format, focus on quantifying the positive or negative contribution of an input variable to the model prediction. Such importance scores can either be directly used as explanations [40, 133] or serve as indicators to guide key feature selection [91]. Popular examples of numerical explanations are LIME [78] and SHAP [55]. Esteves et al. [17] used SHAP values to understand the CK metrics [6] that influenced the defectiveness of the classes. In the same context, Lee et al. [44] employed three widely-used model-agnostic techniques, including LIME, SHAP, and BreakDown [89], to calculate the contribution of each feature for defect models' predictions. Xiao et al. [114] leveraged the local explanations generated by LIME from the XGBoost model to analyze the contribution of variables to sustained activity in different project contexts. To reflect the global behavior of a complex JIT defect prediction model, Zheng et al. [133] employed SP-LIME, a variant of LIME, to analyze the relationship between the features in the model and the final prediction results. SP-LIME explicitly selects representative and diverse (but not repetitive) samples, presenting a global view within the allocated budget of maximal features. Sun et al. [91] used SHAP to guide the search for the feature that has the largest malicious magnitude, i.e., having the potential to be manipulated by the adversary, to test the robustness of malware detectors.

Text. In contrast to numerical explanations, textual natural language descriptions are easier to be comprehended by non-experts, offering clarity in understanding the behavior of intelligent SE models. Such textual explanations can either be derived from scratch by using generative AI models (e.g., NMT [7, 94]) or retrieved from external knowledge
bases (e.g., Stack Overflow⁹, Wikipedia¹⁰). For example, Mahbub et al. [57] first performed discriminatory pre-training with bug-patch code pairs to equip the pre-trained CodeT5 [104] with the capability of understanding buggy code patterns and generating commit messages accordingly, and then fine-tuned the model to generate explanations from buggy code. Similarly, based on the vulnerable code localized by attention mechanism, Zhang et al. [129] leveraged a GRU-based decoder to generate vulnerability symptom- or reason-related descriptive sentences step by step. Such summarization-styled explanations can effectively bridge the cognitive gap between structured programming language and flattened natural language. Fig. 10 shows a code snippet of an example (CVE-2016-6494¹¹) demonstrated in [129]; the proposed approach can capture the semantic information behind the input code snippet and accurately point out the location of the vulnerability, while revealing the potential risk of information leakage as writing sensitive data to the specified file via fopen() without checking permissions (at line 2,765).

Fig. 10. Vulnerability explanation generated by VulTeller [129], which transfers detected vulnerable code to natural language descriptions conveying the reason for vulnerabilities. (The accompanying code diff and CVE details are omitted here.)

Another way to obtain textual explanations is by retrieving external knowledge bases. As an example, Wu et al.
[113] built a semantic database based on malware key features and functional descriptions in developer documentation, and leveraged the mapping relation between the malicious behaviors and their corresponding semantics to generate reasonable descriptions that are easier for users to understand. To bridge the knowledge gap between the query and candidate code snippets, Wang et al. [100] linked both the query and the candidate code snippets to the concepts in the background knowledge graph and generated explanations based on the association paths between these two parts of concepts together with relevant descriptions. Sun et al. [90] extracted vulnerability key aspects (including vulnerability types, root causes, attack vectors, and impacts) from security patches based on pre-trained BERT-based Named Entity Recognition (NER) and Question Answering (QA) models [13], and assembled them into pre-defined templates as explanations. Inspired by similar/homogeneous vulnerabilities that have similar root causes or lead to similar impacts, Ni et al. [68] first retrieved the most semantically similar problematic posts from SO and prioritized the most useful response based on a quality-first sorting strategy. Then, they employed the BERT-QA model to extract the root cause, impact, and solution from the answers to the given questions as useful and understandable natural language explanations. Compared to generative models pre-trained on the human-labeled corpus, external knowledge bases such as SO and Wikipedia offer structured knowledge models that explicitly store rich factual knowledge. Thus, they

⁹ https://stackoverflow.com/
¹⁰ https://www.wikipedia.org/
¹¹ https://nvd.nist.gov/vuln/detail/CVE-2016-6494
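The retrieval flavor of textual explanation can be sketched as nearest-neighbor search over an external knowledge base; the tiny base and query below are invented for illustration, not taken from the cited systems.

```python
import math
from collections import Counter

# Retrieval-style textual explanation: rank knowledge-base entries by
# cosine similarity to the query and return the best-matching description.
# This knowledge base is a made-up stand-in for SO posts / CWE notes.
KNOWLEDGE_BASE = {
    "CWE-200": "exposure of sensitive information caused by world readable file permissions",
    "CWE-89":  "sql injection caused by unsanitized user input in query strings",
    "CWE-120": "buffer overflow caused by copying input without bounds checking",
}

def bow(text):
    # Bag-of-words vector as a token-count dictionary.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def explain_by_retrieval(query):
    q = bow(query)
    best = max(KNOWLEDGE_BASE, key=lambda k: cosine(q, bow(KNOWLEDGE_BASE[k])))
    return best, KNOWLEDGE_BASE[best]

print(explain_by_retrieval("history file written with world readable permissions"))
```

Production systems replace the bag-of-words vectors with learned embeddings (e.g., from BERT-style encoders), but the rank-and-return structure stays the same.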
(cid:6)(cid:5)(cid:1)(cid:24)(cid:25)(cid:26)(cid:1)(cid:25)(cid:2)(cid:31)(cid:25)(cid:36)(cid:36)(cid:21)(cid:27)(cid:25)(cid:4)(cid:25)(cid:39)(cid:28)(cid:37)(cid:20)(cid:23)(cid:33)(cid:24)(cid:25)(cid:11)(cid:16)(cid:33)(cid:32)(cid:25)(cid:3)(cid:10) (cid:7)(cid:5)(cid:1) (cid:34)(cid:35)(cid:28)(cid:32)(cid:37)(cid:20)(cid:30)(cid:33)(cid:27)(cid:2)(cid:31)(cid:25)(cid:36)(cid:36)(cid:21)(cid:27)(cid:25)(cid:4)(cid:19)(cid:14)(cid:15)(cid:15)(cid:17)(cid:18)(cid:4)(cid:12)(cid:17)(cid:15)(cid:13)(cid:3) (cid:8)(cid:5)(cid:1) (cid:28)(cid:26)(cid:1)(cid:25)(cid:39)(cid:28)(cid:37)(cid:20)(cid:23)(cid:33)(cid:24)(cid:25)(cid:1)(cid:28)(cid:36)(cid:1)(cid:32)(cid:33)(cid:37)(cid:1)(cid:16)(cid:33)(cid:32)(cid:25)(cid:10) |
(cid:9)(cid:5)(cid:1) (cid:36)(cid:40)(cid:36)(cid:5)(cid:25)(cid:39)(cid:28)(cid:37)(cid:2)(cid:25)(cid:39)(cid:28)(cid:37)(cid:20)(cid:23)(cid:33)(cid:24)(cid:25)(cid:3)(cid:6)(cid:5)(cid:1)(cid:24)(cid:25)(cid:26)(cid:1)(cid:25)(cid:2)(cid:31)(cid:25)(cid:36)(cid:36)(cid:21)(cid:27)(cid:25)(cid:4)(cid:25)(cid:39)(cid:28)(cid:37)(cid:20)(cid:23)(cid:33)(cid:24)(cid:25)(cid:11)(cid:16)(cid:33)(cid:32)(cid:25)(cid:3)(cid:10) (cid:7)(cid:5)(cid:1) (cid:34)(cid:35)(cid:28)(cid:32)(cid:37)(cid:20)(cid:30)(cid:33)(cid:27)(cid:2)(cid:31)(cid:25)(cid:36)(cid:36)(cid:21)(cid:27)(cid:25)(cid:4)(cid:19)(cid:14)(cid:15)(cid:15)(cid:17)(cid:18)(cid:4)(cid:12)(cid:17)(cid:15)(cid:13)(cid:3) (cid:26)(cid:38)(cid:32)(cid:23)(cid:20)(cid:24)(cid:25)(cid:26) (cid:8)(cid:5)(cid:1) (cid:28)(cid:26)(cid:1)(cid:25)(cid:39)(cid:28)(cid:37)(cid:20)(cid:23)(cid:33)(cid:24)(cid:25)(cid:1)(cid:28)(cid:36)(cid:1)(cid:32)(cid:33)(cid:37)(cid:1)(cid:16)(cid:33)(cid:32)(cid:25)(cid:10) (cid:9)(cid:5)(cid:1) (cid:36)(cid:40)(cid:36)(cid:5)(cid:25)(cid:39)(cid:28)(cid:37)(cid:2)(cid:25)(cid:39)(cid:28)(cid:37)(cid:20)(cid:23)(cid:33)(cid:24)(cid:25)(cid:3) (cid:24)(cid:25)(cid:26) (cid:25) (cid:34)(cid:21)(cid:35)(cid:21)(cid:31)(cid:36) (cid:10) (cid:26)(cid:22)(cid:38)(cid:30)(cid:32)(cid:33)(cid:23)(cid:23)(cid:20)(cid:29)(cid:24)(cid:25)(cid:26) (cid:24)(cid:25)(cid:26) (cid:25) (cid:34)(cid:21)(cid:35)(cid:21)(cid:31)(cid:36) (cid:10) (cid:22)(cid:30)(cid:33)(cid:23)(cid:29) (cid:2) (cid:31)(cid:25)(cid:36)(cid:36)(cid:21)(cid:27)(cid:25) (cid:4) (cid:34)(cid:21)(cid:35)(cid:21)(cid:31)(cid:36) (cid:3) (cid:41) (cid:28)(cid:26)(cid:20)(cid:36)(cid:37)(cid:21)(cid:37)(cid:25)(cid:31)(cid:25)(cid:32)(cid:37) (cid:2) (cid:31)(cid:25)(cid:36)(cid:36)(cid:21)(cid:27)(cid:25) (cid:4) (cid:34)(cid:21)(cid:35)(cid:21)(cid:31)(cid:36) (cid:3) (cid:41) (cid:28)(cid:26)(cid:20)(cid:36)(cid:37)(cid:21)(cid:37)(cid:25)(cid:31)(cid:25)(cid:32)(cid:37) 
(cid:25)(cid:39)(cid:28)(cid:37)(cid:20)(cid:23)(cid:33)(cid:24)(cid:25) (cid:11) (cid:16)(cid:33)(cid:32)(cid:25) (cid:28)(cid:28)(cid:26)(cid:26) (cid:33)(cid:33)(cid:34)(cid:34)(cid:25)(cid:25)(cid:35)(cid:35)(cid:21)(cid:21)(cid:37)(cid:37)(cid:33)(cid:33)(cid:35)(cid:35) (cid:10)(cid:10) (cid:22)(cid:30)(cid:33)(cid:23)(cid:29) (cid:25)(cid:39)(cid:28)(cid:37)(cid:20)(cid:23)(cid:33)(cid:24)(cid:25) (cid:11) (cid:16)(cid:33)(cid:32)(cid:25) (cid:28)(cid:28)(cid:26)(cid:26) (cid:33)(cid:33)(cid:34)(cid:34)(cid:25)(cid:25)(cid:35)(cid:35)(cid:21)(cid:21)(cid:37)(cid:37)(cid:33)(cid:33)(cid:35)(cid:35) (cid:10)(cid:10) (cid:22)(cid:30)(cid:33)(cid:23)(cid:29) |
(cid:25)(cid:25)(cid:39)(cid:39)(cid:28)(cid:28)(cid:37)(cid:37)(cid:20)(cid:20)(cid:23)(cid:23)(cid:33)(cid:33)(cid:24)(cid:24)(cid:25)(cid:25) (cid:28)(cid:28)(cid:36)(cid:36) (cid:32)(cid:32)(cid:33)(cid:33)(cid:37)(cid:37) (cid:16)(cid:16)(cid:33)(cid:33)(cid:32)(cid:32)(cid:25)(cid:25) (cid:23)(cid:21)(cid:30)(cid:30) (cid:25)(cid:25)(cid:39)(cid:39)(cid:28)(cid:28)(cid:37)(cid:37)(cid:20)(cid:20)(cid:23)(cid:23)(cid:33)(cid:33)(cid:24)(cid:24)(cid:25)(cid:25) (cid:28)(cid:28)(cid:36)(cid:36) (cid:32)(cid:32)(cid:33)(cid:33)(cid:37)(cid:37) (cid:16)(cid:16)(cid:33)(cid:33)(cid:32)(cid:32)(cid:25)(cid:25) (cid:23)(cid:21)(cid:30)(cid:30) (cid:5)(cid:6)(cid:9)(cid:4)(cid:3)(cid:8)(cid:9)(cid:7)(cid:10)(cid:1)(cid:9)(cid:10)(cid:7)(cid:2) (cid:21)(cid:37)(cid:37)(cid:35)(cid:28)(cid:22)(cid:38)(cid:37)(cid:25) (cid:21)(cid:35)(cid:27)(cid:38)(cid:31)(cid:25)(cid:32)(cid:37) (cid:5)(cid:6)(cid:9)(cid:4)(cid:3)(cid:8)(cid:9)(cid:7)(cid:10)(cid:1)(cid:9)(cid:10)(cid:7)(cid:2) (cid:21)(cid:37)(cid:37)(cid:35)(cid:28)(cid:22)(cid:38)(cid:37)(cid:25) (cid:21)(cid:35)(cid:27)(cid:38)(cid:31)(cid:25)(cid:32)(cid:37) (cid:36)(cid:40)(cid:36) (cid:25)(cid:39)(cid:28)(cid:37) (cid:2) (cid:25)(cid:39)(cid:28)(cid:37)(cid:20)(cid:30)(cid:33)(cid:21)(cid:24) (cid:3) (cid:36)(cid:40)(cid:36) (cid:25)(cid:39)(cid:28)(cid:37) (cid:2) (cid:25)(cid:39)(cid:28)(cid:37)(cid:20)(cid:30)(cid:33)(cid:21)(cid:24) (cid:3) (a)APythoncodesnippetwithitsAST (b)Atten(ati)oHneahpemaatpmapinLayer5 (c)Attentiondistribu(bt)ioBnipainrtiLteaGyerarp5h,Head12 (a)APythoncodesnippetwithitsAST (b)AttentionheatmapinLayer5 (c)AttentiondistributioninLayer5,Head12 Fig.11. Visualexplanationsprovidedbydifferenttechniques.Adaptedfrom[99]. Figure 2: Visualization of self-attention distribution for a code snippet in CodeBERT. (a) A Python code snippet with its Figure 2: Visualization of self-attention distribution for a code snippet in CodeBERT. 
(a) A Python code snippet with its correspondincgorArSesTp.o(bn)dHinegaAtmSTa.p(bo)fHtehaetmavaeproafgtehdeaatvteernatgioednawtteeingthiotsniwneLigahytesrin5.L(ca)yeSrel5f.-(act)tSeenltfi-oatntednitsitornibduitsitorinbuintioLnayinerL5ay,eHre5a,dH1e2a.d12. ThebrightneTssheofbrliignhetsniensdsiocfaltiensetshiendaitctaetnetsitohnewatetiegnhtitosninwaeisgphetsciifincahspeaecdi.fiIcfhtheaedc.oIfntnheectceodnnneocdteedsnapodpeesaarpipnetahreinmtohteifmsottriufcsttururecture ofthecorrespofotnhdeincogrAreSsTpo,wndeinmgaArkSTat,rh ewe wel emi ln l-ae krs nkoin wthnree fld oirn. tehseiinrsryemd.bolicreasoningability,whichgeneratesexplainableresults,andavoidshallucinations where 𝛼𝑖,𝑗 𝑐 wishtehree𝛼a𝑖t,t𝑗e𝑐ntiiosnthtehaattt𝑤en 𝑖tipooarni ygtsihn taa ottin𝑤𝑤g 𝑖𝑗f .proaTym hsegteoan t𝑤e ter 𝑗a n.t teTid ohnsetaattetmenettnirot asnit nh ia nt gar ste aramfianpcit lnu ega wlslyailmi lnpbcleoer rwr ee piclt rl. ebseernetperdesaesnftoedlloawsfso:llows: ( ) ( ) weights are cowmeipguhttesdarferocmomtphuetesdcaflreodmVditoshute-apslrcizoaaldetudiocdnto.otBf-petsrhiodedeuqscuetxeorpyflatihneinqgutehrryoughnumericalimportancescoresandtextualnaturallanguagedescriptions, vector of𝑤𝑖,avnedctothreofk𝑤ey𝑖,vaencdtotrheofke𝑤y 𝑗v,eufcostelolrroswocfae𝑤nd𝑗ub,nyfdoaellrosswotafentdmdbathyxe.aIbsneohfatmviaoxr.oIfnth𝑠 e= un[ dC eL rlS y𝑠] in,=𝑤 g[ 1mC,L𝑤 oS d2] e,, l.𝑤 t. h1. r,, o𝑤𝑤 u2𝑛g,, h.[. tS. h,E𝑤 eP𝑛 f] o,,[ r𝑢 mS 1E, o𝑢P f] 2,, v𝑢. i1s. u,.𝑢, a𝑢2ls, 𝑚,.. w,.[ h,E𝑢 icO𝑚 hS,][ mE. aO kS e] s. theuseof thevectorizedt fohc reo mmv uepc lauto tt eri dnizg ae, sda tc hgo emen wpe eur ita ginl hga te,t dta aet sgnt ueetn mniote ino ora n fml taa het p ect aeh vrna att i lnci uo ui esn lam vrm elyc ce tac a onh t rta rb Vn aeci ,s t uim v se ic noa gpn t ti hb o ee n.Humans,ingeneral,𝑐c 1anpro𝑐c1essvisualinfor𝑐m 2ation𝑐2fasterandmucheasieras |
f qo ur em ryul va et ce td oras Qqt uh aee nr dyw v te hei c eg th o kt r ee yQd vas eu n cdm toth ro ef Kkt :h eye c svv eo e na m c tl etu p o nae r cr eev Kde i: nc tot ao o pr t ahV re a, r lu leins li f fn o ag r sm ht iah ot ne io an nd[6 i4 s] a.A pT rtt h ime en ait rnio yppn rcu eowT t -mth ia rs pes at oiii h nnn n(cid:5)e ipi en(cid:2)t n (cid:2)nu (cid:2)i (cid:2)g(cid:2)a t(cid:2)t f (cid:2),l (cid:2)e (cid:2)i il (cid:2)t(cid:2)s ny d (cid:2)w(cid:6)t ti ih o(cid:7)n hn(cid:5)e (cid:2)t eo(cid:2)t(cid:2)n (cid:2)r (cid:2) (cid:2)(cid:2)o b(cid:2)(cid:2)o T(cid:2)(cid:2) (cid:2)(cid:2)f (cid:2) (cid:2)jd (cid:2) (cid:2)a re (cid:2) e(cid:2)(cid:2) (cid:2)a(cid:2) (cid:2)ud (cid:2) c(cid:8)(cid:6) T ntci i(cid:7) r sen va fd (cid:2)(cid:2)t oe(cid:2)n(cid:2)o (cid:2) sr(cid:2)t (cid:2)s(cid:2) mo (cid:2)a a(cid:2)f(cid:2)(cid:2)o r(cid:8) ecT er ror mna da(cid:5)n rs ee (cid:2)c(cid:2)i ss (cid:2)rd (cid:2)h(cid:2)if (cid:2)g(cid:2)eo e i(cid:2)(cid:2)t(cid:2)r nr n (cid:2)e(cid:6)m ec c c(cid:5) (cid:7) do toe (cid:2)(cid:2) u(cid:2)r (cid:2)r d (cid:2)(cid:2) f(cid:2) (cid:2)r r(cid:2) (cid:2)o(cid:2) (cid:2)ee (cid:2)e e(cid:2)(cid:2) (cid:2)r(cid:2) (cid:2)n r(cid:2)l (cid:2)(cid:6) (cid:2)[a . (cid:2)sc(cid:8) 9t(cid:7) D eoi 6(cid:2) lod (cid:2)u ]f(cid:2)(cid:2)n .(cid:2) -e (cid:2)r(cid:2) s(cid:2) Vrs (cid:2)i(cid:2) u. (cid:2)n(cid:8) ibD psge ueut B rawr vli E in ie zsReg aen T tdB io’wE s lneRo asTr ryd n’ sss -tein ma s pre-training,twoobjectivesaredesignedforself-supervisedlearn- Att Q,K,V =d Qi sff oKe ft𝑇r min axtheirQaKbi𝑇lityto Vsh ,owrelatioinn (s 2gh ),iip.es.a,t mi dmn ia cg u ts, il k oti i. e ne p d. l (, e Nlm as Sc na Pas g )lke .ues Iad n, gbl teay hn m erge moup dar ae eg ss le kie n enm dgtoi ln(d aMge nla gi Lnt uMtg ae gn)(M etaio mnL ndM oi dn n) eea v lx in a ntd r gi so ,n eu te n hsx t et efo ns ter ocmn ket ese p nnf r soc ere - od fpi arffe ne-rent Att (Q,K,V ) =so(ftmax) (cid:2) m h𝑑 eo mad doe sdls ef. 
where d_model represents the dimension of each hidden representation. For self-attention, Q, K, and V are computed from the hidden representation of the previous layer by different linear functions, i.e., Q = H^{l-1} W_Q^l, K = H^{l-1} W_K^l, and V = H^{l-1} W_V^l, respectively. The last encoder layer produces the contextual representation of the input, H^L = [h_1^L, ..., h_n^L].
In pre-training, two objectives are designed for self-supervised learning, i.e., masked language modeling (MLM) and next sentence prediction (NSP). In MLM, BERT randomly selects 15% of the input tokens, of which 80% are replaced with the special token [MASK], 10% remain unchanged, and 10% are replaced with random tokens, and the model is trained to recover the original tokens from the masked input. In NSP, the model is trained to predict whether two segments are consecutive: (1) consecutive segments in a document are treated as positive examples; (2) segments drawn from different documents are considered as negative examples.
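To make the scaled dot-product attention of Eq. (2) concrete, here is a minimal NumPy sketch. It is illustrative only (the token count, d_model, and random weights are assumptions, not values from any model discussed here):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Eq. (2): Att(Q, K, V) = softmax(Q K^T / sqrt(d_model)) V.
    d_model = Q.shape[-1]
    alpha = softmax(Q @ K.T / np.sqrt(d_model))  # alpha[i, j]: attention w_i pays to w_j
    return alpha @ V, alpha

# Toy example: 4 tokens with d_model = 8.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))                      # hidden states of the previous layer
W_Q, W_K, W_V = (rng.normal(size=(8, 8)) for _ in range(3))
out, alpha = attention(H @ W_Q, H @ W_K, H @ W_V)
# Each row of alpha is a probability distribution over the input tokens,
# which is exactly what the attention heatmaps in the figure visualize.
```

Inspecting `alpha` row by row reproduces the kind of per-head attention distribution shown in the heatmap figure.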
In order to utilize the order of the sequential tokens, "positional encodings" are injected into the input embeddings:

w_i = e(w_i) + pos(w_i),    (3)

where e denotes the word embedding layer, and pos denotes the positional embedding layer. Typically, the positional encoding implies the position of a code token based on sine and cosine functions.
2.2 Pre-Training Language Models
Recently, self-supervised learning using masked language modeling has become a popular technique for natural language understanding and generation [5,8,9,24,34,36]. In the context of software engineering, several representative pre-trained models have been proposed for program understanding: (1) CodeBERT [11], which takes the concatenation of natural language descriptions and code as inputs and pre-trains a language model by masking the inputs; and (2) GraphCodeBERT [13], which improves CodeBERT by incorporating the data-flow information among variables into model pre-training.
Visualization. Besides explaining through numerical importance scores and textual natural language descriptions, practitioners can also understand the behavior of the underlying model through the form of visuals. Humans, in general, can process visual information faster and much more easily. In addition, some approaches developed interactive User Interfaces (UIs) to provide visual explanations. For example, Jiang et al. [39] designed several document mining algorithms to infer the key elements in code that contribute to the auto-generated comments, and presented them alongside the corresponding parts in the source code. In this way, the developer can check whether the auto-generated comments correctly describe the intention of the code.
Manuscript submitted to ACM
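As an aside on Eq. (3), the sine/cosine positional embeddings it refers to can be sketched as follows. This uses the common sinusoidal formulation; the exact variant used by the models described here may differ, and the sizes and random word embeddings are illustrative assumptions:

```python
import numpy as np

def positional_encoding(n_positions, d_model):
    # Sinusoidal positional embeddings: sin on even dimensions, cos on odd ones.
    pos = np.arange(n_positions)[:, None]                      # (n, 1)
    i = np.arange(d_model)[None, :]                            # (1, d)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

# Eq. (3): the input vector of token w_i is its word embedding e(w_i)
# plus its positional embedding pos(w_i).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(4, 8))   # e(w_i) for 4 tokens, d_model = 8 (illustrative)
inputs = embeddings + positional_encoding(4, 8)
```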
Given a corpus, each sentence (or code snippet) is first tokenized into a series of tokens (e.g., by Byte Pair Encoding, BPE [32]). For BERT's pre-training, it takes the concatenation of two segments as the input, defined as c_1 = {w_1, w_2, ..., w_n} and c_2 = {u_1, u_2, ..., u_m}, where n and m denote the lengths of the two segments, respectively. The two segments are always connected by a special separator token [SEP]. The first and last tokens of each sequence are always padded with a special classification token [CLS] and an ending token [EOS], respectively. Finally, the input of each sequence takes the form s = [CLS], w_1, w_2, ..., w_n, [SEP], u_1, u_2, ..., u_m, [EOS].
3 MOTIVATION
Prior work in NLP has pointed out that the self-attention mechanism in Transformer has the capability of capturing certain syntax information in natural languages. Inspired by this, we visualize and …
S. Cao et al.
Table 8. The Various Formats of Explanations in Prior Studies
Format | Examples | #Studies | References
Numeric | Importance Score, Similarity | 14 | [1,14,15,17,40,43,44,51,63,91,114,122,131,133]
Text | Descriptions, Key Phrases/Words | 12 | [26,27,41,50,57,68,77,90,113,118,129,135]
Visualization | Heatmap, Bipartite Graph | 9 | [16,39,46,52,99,100,102,115,132]
Source Code | Tokens, Statements | 15 | [9,28,30,36,47,71,75,79,93,103,107,110,117,124,130]
Rule | Association Rules | 13 | [8,12,21,29,31,48,58,67,74,76,84,116,137]
To give control back to the user, XCoS [100] leveraged the identified conceptual association paths. Users can interact by selecting the anchor concept relevant to the query, which will update the results of code snippets related to the selected concepts.
Source Code. As source code is one of the primary data types that AI4SE models attempt to learn from, it serves as a natural format of explanation. Geng et al. [30] identified important code tokens contributing to the generation of a specific part of the summarization by checking which meaningful words disappeared in any of the summarizations newly generated from mutants. Wang et al. [103] further classified an entire input into two types of features: defining features (i.e., wheat) representing the reasons models predict a specific label, and the rest (i.e., chaff). They proposed a coarse-to-fine approach, named "Reduce and Mutate", to identify wheat that code models use for prediction. In contrast
to simplification-based techniques that can be applied to any DL architecture, Li et al. [47] employed a GNN-specific explanation framework, GNNExplainer [123], to simplify the target code instance to a minimal subset consisting of crucial statements while retaining the initial model prediction. Along the lines of architecture-specific simplification, Hu et al. [36] investigated six famous GNN explainers to evaluate their explanation performance on vulnerability detectors from three different perspectives, which are effectiveness, stability, and robustness. Their empirical study showed that the explanation results provided by different explainers for vulnerability detection vary significantly, and the performance of all explainers was still not satisfactory.
Rule. Rule, which can be organized in the form of IF-THEN-ELSE statements with AND/OR operators, is a schematic and logic format. Despite its complexity compared to visualization and natural language description, rule-based explanation is still intuitive for humans and useful for expressing combinations of input features and their activation values. Generally, these rules approximate a black-box model but have higher interpretability. Zou et al. [137] identified important code tokens whose perturbations lead to the variant examples having a significant impact on the prediction of the target model via heuristic searching, and trained a decision tree-based regression model to extract human-understandable rules for explaining why a particular example is predicted into a particular label. To explain the individual predictions of the black-box global model, Pornprasit et al. [74] built a RuleFit-based local surrogate model, which combined the strengths of decision tree and linear model, to understand the logical reasons learned from the rule features. Then, they derived a positively-correlated rule set having the highest importance score and satisfying the actual feature values of the instance as explanations. Cito et al. [8] proposed a rule induction technique which produced decision lists based on features and mispredicted instances to explain the reasons for mispredictions. Based on the prior knowledge that data
often exhibit highly-skewed feature distributions and trained models in many cases perform poorly on the subset with under-represented features, Gesi et al. [31] deduced the misprediction explanation rules only on biased features instead of blindly trying on all features, hence deducing better rules within the same computation time.
A Systematic Literature Review on Explainability for ML/DL-based SE Research
Fig. 12. Explanation formats with SE tasks.
5.2.1 Exploratory Data Analysis.
Table 8 describes the proportion of various formats of explanations we summarized from 63 primary studies in detail and gives examples. We found that the distribution of different explanation formats was relatively even, and the most common format being employed was source code (≈24%). The prevalence of source code-based explanation is not surprising: given the rise in the number of publicly available code repositories, a series of AI models [33,34] have achieved notable advancements in various downstream code-centric SE tasks, such as code generation, understanding, and analysis [98], making it a natural way to obtain human-understandable explanations. In addition, we noticed that visual explanation is less commonly used in the SE community, only accounting for a total of 9 primary studies. We suspect that the main reason contributing to its relative lack of adoption lies in that, although visualization, such as heatmap [52] and bipartite graph [99], can provide a fast and straightforward explanation for practitioners (e.g., a developer, domain expert, or end-user) who are inexperienced in ML/DL [87], it can only convey limited information and requires post-processing for further use.
Fig. 12 provides an overview of the distribution of explanation formats adopted by different SE tasks. We observed that the types of explanation formats varied even in a single SE task. As an example, the explanations generated for binary vulnerability detectors in our surveyed studies can be text (e.g., vulnerability descriptions [68,129], types [90,135]), visualization (e.g., heatmap [52]), source code (e.g., code statements [47,93]), or rules [137]. This diversity helps to satisfy the personalized needs of the stakeholders who have different intents and expertise. We anticipate that this trend will persist, with the continued proposal and application of new XAI techniques to various SE tasks.
✍▶RQ2b-Summary◀
• A variety of explanation formats have been explored in our surveyed XAI4SE research, with the main formats utilized being numeric (≈22%), text (≈19%), visualization (≈14%), source code (≈24%), and rule (≈21%).
Fig. 13. Statistical information of baseline techniques and XAI4SE approaches. (a) Distribution of baseline usage. (b) Approach availability.
• Source code is the most essential explanation format in XAI4SE studies. Few studies employed visualization as their preferred format of explanation due to the limited information it conveyed.
• The formats of explanations varied even in a single SE task. This diversity helps to satisfy the personalized needs of the stakeholders who have different intents and expertise.
6 RQ3: HOW WELL DO XAI TECHNIQUES PERFORM IN SUPPORTING VARIOUS SE TASKS?
Evaluating the effectiveness of proposed solutions against existing datasets and employing baseline comparisons is a standard practice in AI4SE research. In this RQ, we endeavor to investigate the influence of XAI4SE research by scrutinizing the effectiveness of the techniques proposed in the studies under consideration. Our analysis primarily focuses on evaluating metrics on a task-specific basis, aiming to encapsulate the prevailing benchmarks and baselines within the field of XAI4SE research.
6.1 RQ3a: What baseline techniques are used to evaluate XAI4SE approaches?
For RQ3a, we scrutinize the baseline techniques employed for comparative analysis in our selected primary studies. Although common baselines for particular software engineering tasks were identified, it was noted that a substantial portion of the literature autonomously developed unique baselines. The extensive variety and volume of these baselines precluded their detailed inclusion within the body of this manuscript. Therefore, we included the listing of baselines that each paper compared against on our interactive website at https://riss-vul.github.io/xai4se-paper/. Fig. 13a briefly analyzes the distribution of baseline usage in XAI4SE studies. Approximately half of the papers reviewed do not engage in comparisons with any baseline (≈49.5%), whereas a minority contrasts their findings with more than four distinct methods (≈9.5%). It was observed that numerous baseline techniques consist of established white-box models with transparent algorithms or conventional expert systems. This trend may be attributed, in part, to the nascent stage of XAI4SE research, which has resulted in a limited range of existing XAI-centric comparatives. As XAI4SE
progresses towards maturity, an evolution towards evaluations incorporating benchmarks against established XAI-centric methodologies is anticipated. Furthermore, it was noted that the selection of baseline techniques exhibits a high degree of specificity, varying significantly even among studies addressing identical software engineering tasks. For example, to evaluate the effectiveness of their proposed explainable vulnerability detection approaches, six out of 11 primary studies employed at least two baselines for evaluation, while only two papers [26,129] overlap slightly in terms of the baselines.
Fig. 14. Construction strategies of benchmarks.
A concerning trend identified in our review is the lack of publicly accessible implementations for many XAI4SE approaches. We conducted a manual inspection of all links provided within each study. In instances where a link to a replication package was available, we assessed its contents for source code and relevant documentation. Absent any direct links, we also endeavored to locate either the original replication package or an equivalent reproduction package on GitHub using the title of the paper as a search query. As depicted in the pie chart in Fig. 13b, approximately only 44% of the primary studies offer accessible replication packages. Among the remaining studies with replication packages, a considerable portion of them propose XAI techniques for specific SE tasks for the first time, such as information search on Q&A sites [50] and OSS sustainability prediction [114]. This, in part, explains the proliferation of highly individualized baseline approaches. Researchers often lack access to common baselines for comparison, compelling them to implement their own versions. The robustness of results in such papers may be compromised, as many do not provide information about the baselines used. Moreover, distinct implementations of the same baselines could lead to confounding results when assessing purported improvements. While we anticipate that the set of existing publicly available baselines will improve over time, we also recognize the necessity for well-documented and publicly available
baselines, accompanied by guidelines that dictate their proper dissemination.
6.2 RQ3b: What benchmarks are used for these comparisons?
For RQ3b, we investigated the collection strategies of benchmarks in XAI4SE studies. As can be seen from the data in Fig. 14, 31 (≈45%) studies used previously curated benchmarks for evaluating XAI4SE approaches. The selection of open-source benchmarks is often motivated by their compelling nature in assessing the performance of AI-driven methodologies, which facilitates the reproducibility and replication by subsequent studies. Given the nascent emergence
of XAI4SE research, there is an observed scarcity of appropriate benchmark datasets. This also explains why there is a considerable amount (38.2%) of self-generated benchmarks. In our online repository, we recorded the accessible benchmark links provided by primary studies. Our aim is to assist researchers by offering insights into the available benchmarks for evaluating methodologies within distinct software engineering tasks. Furthermore, we advocate for future scholars to share their self-created benchmarks publicly, thereby furnishing a valuable resource that facilitates not only comparative analyses among different methodologies but also broadens the dataset accessible for Explainable AI techniques.
Fig. 15. Collection strategies of benchmarks by the SE task.
While the adoption of pre-existing benchmarks was infrequent across our surveyed studies, we did observe a subset of benchmarks that recurred within our primary studies. A comprehensive delineation of the types of the benchmarks employed in these primary studies is depicted in Figure 15. For vulnerability detection, we found that the Big-Vul [22] dataset was used frequently, including evaluating the accuracy of the key aspects extracted from the detected vulnerabilities (e.g., vulnerable statements [36,47], vulnerability types [26]). Additionally, in the context of defect prediction, the dataset released by Yatish et al. [120] was employed as a benchmark for comparing prior XAI techniques targeting software defect prediction [40,44].
6.3 RQ3c: What evaluation metrics are employed to measure XAI4SE approaches?
With an increasing number of XAI techniques, the demand grows for suitable evaluation metrics [3,5,32]. This need is not only recognized by the AI community, but also by the SE community, as evaluating the performance of XAI techniques is a crucial aspect of their development and deployment [79]. Fig. 16 describes the distribution of evaluation strategies found in this SLR. In our analysis of utilized evaluation strategies within work on XAI4SE, we observed that nearly half (≈44%) of primary studies adopted anecdotal evidence instead of quantitative metrics to evaluate their proposed approaches. That's to be expected because traditional performance metrics exist to evaluate prediction accuracy and computational complexity, while auxiliary criteria such as explainability may not be easily quantified [18]. Among 45 papers that performed quantitative evaluation, we found that the SE community also has yet to agree upon standardized evaluation metrics due to the absence of ground truths. Furthermore, due to the wide range of objectives associated with explainability in SE tasks, relying solely on a single evaluation metric may not adequately reflect the full spectrum of an XAI tool's performance. Consequently, researchers frequently utilize a variety of evaluation metrics, each designed to measure specific aspects of explainability. The evaluation metrics derived from an analysis of 63 papers have been organized into five distinct categories, i.e., metrics for numerical explanations, textual explanations, visual explanations, code explanations, and rule explanations, based on the various formats of explanations described above in Section 5.2. The detailed information is displayed in Table 9. For each explanation format, we listed the main evaluation metrics researched by no less than two studies.
Fig. 16. Evaluation strategies of XAI techniques: Quantitative Evaluation 45 (52.3%), User Study 21 (24.4%), Case Study 17 (19.8%), Not Applicable 3 (3.5%).
Numerical Explanation Metrics. Commonly used metrics for numerical explanations are distance-based metrics and #Key Features (i.e., the number of key features). Both of them aim to quantify to what extent truly meaningful features are used by AI4SE models for decision. For instance, Sun et al. [91] employed Cosine Similarity to evaluate the transferability of adversarial cases with malicious features extracted by SHAP. Additionally, Liu et al. [51] leveraged Top-K features that positively contribute to the predictions of state-of-the-art malware detectors to validate whether they learned actual malicious behaviors for classification.
Textual Explanation Metrics. Metrics in this category originate from NLP tasks, measuring general quality aspects of the generated text. In the studies reviewed, the most widely used textual explanation metrics are Accuracy [26,118], Precision [118,135], Recall [118,135], F1-score [26,113,118,135], AUC [118,135], BLEU [57,129], and ROUGE [90,129]. These metrics correspond to two ways of achieving textual explanations: retrieving candidate answers from external knowledge bases (i.e., classification task) and generating from scratch by using generative models (i.e., generation task). For example, Fu et al. [26] employed Accuracy and Weighted F1-score to evaluate the performance of vulnerability classification. Similarly, Yang et al. [118] adopted Accuracy, Precision, Recall, and F1-score to assess the performance of each text classifier from the attribute perspective. To measure the accuracy of explainable silent dependency alert generation, Sun et al. [90] used ROUGE-1, ROUGE-2, and ROUGE-L as evaluation metrics.
Visual Explanation Metrics. For visual explanations, attention weight is the most frequent metric, used in code understanding [99], vulnerability detection [52], and code smells [102]. In addition, statistic-based metrics, such as Pearson Correlation [39] and Spearman Correlation [99], are also utilized to measure the quality of generated explanations. Intuitively, if an explanation approach is more reasonable, its generated explanations should be more semantically related to the sample to be explained.
Table 9. Metrics Used for Evaluation
Format | Metric | #Studies | Reference
Numeric | Distance-Based | 2 | [1,91]
Numeric | #Key Features | 3 | [14,44,51]
Text | Accuracy | 2 | [26,118]
Text | Precision | 2 | [118,135]
Text | Recall | 2 | [118,135]
Text | F1-Score | 4 | [26,113,118,135]
Text | AUC | 2 | [118,135]
Text | BLEU | 2 | [57,129]
Text | ROUGE | 2 | [90,129]
Visualization | Attention Weights | 3 | [52,99,102]
Visualization | Statistic-Based | 2 | [39,99]
Source Code | Accuracy | 3 | [36,47,117]
Source Code | Recall | 2 | [93,107]
Source Code | F1-Score | 2 | [71,117]
Source Code | BLEU | 2 | [30,79]
Source Code | TopK-Based | 6 | [75,103,107,110,117,130]
Source Code | Statistic-Based | 2 | [36,79]
Source Code | Mean First Rank (MFR) | 2 | [47,130]
Source Code | Mean Average Rank (MAR) | 2 | [47,130]
Rule | Accuracy | 3 | [8,48,137]
Rule | Precision | 3 | [8,31,58]
Rule | Recall | 2 | [8,31]
Rule | F1-Score | 5 | [29,31,74,84,137]
Rule | AUC | 2 | [29,74]
Rule | Statistic-Based | 2 | [74,76]
Code Explanation Metrics. For code explanations, traditional classification metrics (e.g., Accuracy [36,47,117], Recall [93,107], and F1-score [29,31,74,84,137]), and ranking-based metrics like Top-k Accuracy [75,103,107,110,117,130] are the most commonly used. For example, Yang et al. [117] combined multiple metrics, including Top-1 Recall, Top-5 Recall, Accuracy, and F1-score, to measure the contributions of interpretable tensor propagation to improving the performance of variable misuse detection.
Rule Explanation Metrics. In terms of rule explanations, the metrics need to measure to what extent the generated rules can cover and correctly explain the target data. There are five frequently used performance metrics, including Accuracy [8,48,137], Precision [8,31,58], Recall [8,31], F1-score [29,31,74,84,137], and AUC [29,74], for evaluating the quality of rule explanations. For example, Zou et al. [137] employed Accuracy and F1-score to evaluate the fidelity of different explanation approaches. Cito et al. [8] and Gesi et al. [31] used Precision, Recall, and F1-score to measure the coverage of generated rules for mispredictions.
✍▶RQ3-Summary◀
• The analysis indicated a notable scarcity of well-documented and reusable baselines or benchmarks for work on XAI4SE. Approximately 49% of the baseline techniques employed in the evaluations of our studied XAI4SE approaches were self-generated, with a significant portion not being publicly accessible or reusable.
• We noticed that there is no consensus on evaluation strategies for XAI4SE studies, and in many cases, the evaluation is only based on innovative and task-specific metrics or researchers' subjective intuition of what constitutes a good explanation.
• The evaluation metrics predominantly utilized across the studies were categorized based on five types of explanation formats, i.e., numeric, text, visualization, source code, and rule. A significant portion of the reviewed papers employed conventional classification metrics, such as Accuracy and F1-score, to evaluate the performance of their proposed approaches.
7 LIMITATIONS
Study Collection Omission. Our review has some potential limitations, and one of them is the risk of inadvertently excluding relevant studies during the literature search and selection phase. The incomplete summarization of keywords related to SE tasks and the varied use of terminology for explainability across studies may have led to our search criteria overlooking relevant research that ought to have been incorporated into our SLR. To address this concern, we first manually selected 26 top-tier SE & AI venues suggested by previous surveys on AI4SE research [101,106,119], and extracted relatively comprehensive and standard keywords for SE tasks and XAI techniques. With these search strings, we further augmented our search results by combining automated search with forward-backward snowballing.
Data Extraction Bias. Another potential limitation is data extraction bias. Certain discrepancies arose inevitably when extracting related content and classifying the data items in Table 6. To mitigate the bias introduced in the data extraction phase to the validity of our findings, we invited two practitioners, the fourth and 11th authors, to conduct a secondary review of controversial data items on which consensus could not be reached. Both of them have more than 10 years of experience in the fields of SE and XAI.
By applying these countermeasures, we strive to guarantee the comprehensiveness of the selected papers and the accuracy of the data items, thereby enhancing the reliability of our findings.
8 CHALLENGES AND OPPORTUNITIES
In this section, we discuss current challenges (Section 8.1) and highlight promising opportunities (Section 8.2) for applying XAI techniques in AI4SE research.
8.1 Challenges
Challenge 1: Lack of Consensus on Explainability in SE. One of the major challenges in developing explainable approaches for AI4SE models is the lack of formal consensus on explainability within the field of SE. As shown in the earlier sections, numerous points of view are proposed when trying to articulate explainability for a specific SE
task. For instance, to assist software developers in understanding a defective commit, some approaches simply highlight the lines of code that the model thinks are defective [107], while others extract human-understandable rules [74], or even natural language descriptions [57], from the defective code that can serve as actionable and reusable patterns or knowledge. Although this diversity can meet the distinct requirements of audiences with different levels of expertise, it greatly increases the difficulty of establishing a unified framework which provides common ground for researchers to contribute toward the properly defined needs and challenges of the field.
Challenge 2: Trade-off between Performance and Explainability. For some real-world tasks, a model with higher accuracy usually offers less explainability (and vice versa). Such a Performance-Explainability Trade-off (PET) dilemma often results in user hesitation when choosing between black-box models and inherently transparent models. While certain studies [80,81] indicate that black-box models performing complex operations do not necessarily result in better performance than simpler ones, it is often the case for advanced SE models built upon immense amounts of structured and unstructured data. This challenge highlights the necessity of flexibly selecting XAI techniques according to the characteristics (e.g., input/output format, model architecture) of different SE tasks: for example, giving precedence to transparent models when their performance could meet the users' requirements, and transitioning to more complex models when needed.
Challenge 3: Highly Individualized Baselines and Benchmarks. Evaluating the newly proposed approach over baseline techniques on a benchmark dataset is developing into a standard practice in SE research.
Nevertheless, ground truth information scarcity is still a major issue since the process of data annotation is expertise-intensive and time-consuming for large-scale datasets. This is even more critical in the XAI field, where additional (and commonly multi-modal) annotations are required (e.g., textual descriptions, structured decision-making rules). A potential solution involves promoting cooperation and collaboration between the industry and academia. We have started to see efforts in constructing high-quality benchmarks with annotated explanatory information in other domains, such as the FFA-IR dataset [45] for evaluating the explainability on medical report generation. Additionally, only a small subset of primary studies attempt to perform a comparison over baseline techniques. This is possibly due to the lack of publicly available and replicable baselines, as well as the difference between each approach's explanation format. For example, as is evident from Table 12, the explanation formats vary even in a single vulnerability detection task, including text (e.g., vulnerability descriptions [68,129], types [90,135]), visualization (e.g., heatmap [52]), source code (e.g., code statements [47,93]), and rules [137]. The lack of standardized baselines can lead to confusing or conflicting results, i.e., the performance score of the same baseline technique varies in different papers. To exclude manual efforts and subjective bias, we highlight the importance of replicability and reproducibility for XAI4SE studies. The researchers should share replication packages and provide sufficient details when describing the processes of data preparation, approach implementation, and performance evaluation.
Challenge 4: Limited Performance Measures. The last open challenge is the limited evaluation of the explanations provided by XAI techniques. The most adopted approach is to resort to practitioners' expertise, i.e., relying on the researchers' intuition of what constitutes a good explanation [60]. As previously discussed in Section 6.3, nearly half (≈44%) of XAI4SE studies adopted anecdotal evidence instead of quantitative metrics to evaluate their proposed approaches. However, considering the variability in experts' opinions, this strategy is particularly biased and subjective [66]. Given that such human feedback is immensely valuable in understanding the strengths and weaknesses of XAI techniques, a clear avenue for improvement is to standardize a protocol for human evaluation of these systems by SE practitioners.

In addition, while various metrics have been employed for quantitative evaluation, some special concerns within the XAI field are not yet fully considered:

• Efficiency. Most SE tasks, such as code search and code completion, have a high demand for the response efficiency of AI models, i.e., preferring tools/approaches providing actionable results within acceptable time cost, and explanations are no exception. Given that explainability mostly serves as a by-product of model outputs, approaches requiring higher computation overhead are unlikely to be adopted by the audience, even if they achieve promising performance in terms of other aspects.

• Other Important Metrics. Explainability is a multi-faceted concept. Thus, instead of evaluating only one property of explanation quality (e.g., Explanation Accuracy, which measures how consistent the explanation results are with the researchers' intuition [47, 117]), it is essential to get insight into, and preferably quantify, all properties such that an informed trade-off can be made [66]. For example, apart from effectiveness, Hu et al. [36] also evaluated the robustness, stability, and consistency of state-of-the-art explanation approaches for GNN-based vulnerability
detectors. Such a multi-dimensional overview could be implemented as a radar chart that comprehensively and concisely conveys the strengths and weaknesses of the explanation approach, as the Large Language Model (LLM) community has recently done [72].

8.2 Opportunities

Opportunity 1: Applying XAI to Other SE Tasks. We observed that XAI techniques have been widely used in certain SE tasks to support users in decision-making or improve the transparency of AI models, such as defect prediction, vulnerability detection, code understanding, and so on. However, the current application of XAI techniques in some SE activities remains relatively sparse. As shown in Fig. 6, not many studies focus on software development and management. This unveils a substantial opportunity: broadening the application of XAI techniques to these under-explored research topics. Such an extension could lead to more practical and actionable AI-driven solutions across the entire SDLC.

Opportunity 2: Developing XAI Suitable for SE Tasks. We noted that most approaches used to provide explanations for AI-driven SE models are directly inherited from off-the-shelf XAI techniques without any customization. Unfortunately, existing XAI techniques, originally not designed for SE tasks, likely generate suboptimal or even misleading explanations. Given these reasons, we recognize a research opportunity to customize explanation approaches more suitable for SE tasks. In this regard, the integration with domain knowledge appears to be the most promising direction to explore.

Opportunity 3: Human-in-the-Loop Interaction.
To effectively support human decision-making, there is an escalating need for interactive XAI tools that empower users to actively engage with and explore black-box SE models, thereby facilitating a profound comprehension of the models' mechanisms and their explainability. However, most reviewed works leave aside important aspects pertaining to the XAI tool's interaction with SE practitioners as an AI assistant. Several studies have highlighted that users had more trust when presented with interactive explanations [108]. Among all papers reviewed, Wang et al. [100] leveraged a structured conceptual tree to display the results of query scoping, i.e., the identified conceptual association paths. Users can interact by selecting the anchor concept relevant to the query, which will update the results of code snippets related to the selected concepts.

Opportunity 4: Deployment in Practice. The deployment of XAI techniques in SE practice, particularly in security-critical tasks like vulnerability detection, necessitates rigorous validation to satisfy not only effectiveness, but also robustness, controllability, and other special concerns. This proves challenging given the complex and dynamic nature of the SDLC. In addition, due to the different intents and expertise of audiences, a single explanation format may not be applicable to everyone. For example, a visual explanation can be attractive for the layperson since it is natural and intuitive. Nonetheless, complex visualization tools could potentially overwhelm domain experts with superfluous information, thereby increasing their cognitive load and rendering the tool counterproductive [121]. Therefore, a potential opportunity is to work closely with relevant SE practitioners to continuously iterate the XAI tool based on their feedback.
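Challenge 4 above suggests summarizing several explanation-quality properties at once, e.g., as a radar chart. As a minimal sketch of that idea (the dimension names and scores below are purely illustrative assumptions, not measurements from any reviewed study), the radar polygon's vertices and a crude single-number aggregate of its enclosed area can be computed as follows:

```python
import math

# Hypothetical quality profile of one explanation approach across the
# dimensions discussed under Challenge 4 (values are illustrative only).
dimensions = ["effectiveness", "robustness", "stability", "consistency", "efficiency"]
scores = [0.82, 0.55, 0.67, 0.73, 0.40]

# Place each dimension at an equally spaced angle around the origin, as a
# radar chart would, and scale each spoke by its score in [0, 1].
angles = [2 * math.pi * i / len(dimensions) for i in range(len(dimensions))]
vertices = [(s * math.cos(a), s * math.sin(a)) for s, a in zip(scores, angles)]

# The polygon's enclosed area (shoelace formula) gives a crude aggregate:
# a larger area means a stronger overall multi-dimensional profile.
area = 0.5 * abs(sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1])))
print(f"aggregate radar area: {area:.3f}")
```

Plotting libraries such as matplotlib can render the same vertices as an actual radar chart; the point here is only that a multi-dimensional profile, unlike a single accuracy score, exposes where an explanation approach is weak.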
9 CONCLUSION

Explainability remains a pivotal area of interest within the SE community, particularly as increasingly advanced AI models rapidly advance the field. This paper conducts a systematic literature review of 63 primary studies on XAI4SE research from top-tier SE and AI conferences and journals. Initially, we formulated a series of research questions aimed at exploring the application of XAI techniques in SE. Our analysis began by highlighting SE tasks that have significantly benefited from XAI, illustrating the tangible contributions of XAI (RQ1). Subsequently, we delved into the variety of XAI techniques applied to SE tasks, examining their unique characteristics and output formats (RQ2). Following this, we investigated the existing benchmarks, including available baselines, prevalent benchmarks, and commonly employed evaluation metrics, to determine their validity and trustworthiness (RQ3).

Despite the significant contributions made to date, this review also uncovers certain limitations and challenges inherent in existing XAI4SE research, offering a research roadmap that delineates promising avenues for future exploration. It is our aspiration that this SLR equips future SE researchers with the essential knowledge and insights required for innovative applications of XAI.

REFERENCES

[1] Jubril Gbolahan Adigun, Tom Philip Huck, Matteo Camilli, and Michael Felderer. 2023. Risk-driven Online Testing and Test Case Diversity Analysis for ML-enabled Critical Systems. In Proceedings of the 34th IEEE International Symposium on Software Reliability Engineering (ISSRE). IEEE, 344–354.
[2] Saleema Amershi, Andrew Begel, Christian Bird, Robert DeLine, Harald C. Gall, Ece Kamar, Nachiappan Nagappan, Besmira Nushi, and Thomas Zimmermann. 2019. Software engineering for machine learning: a case study. In Proceedings of the 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). IEEE/ACM, 291–300.
[3] Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, and Francisco Herrera. 2020. Explainable Artificial Intelligence (XAI): Concepts,
taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58 (2020), 82–115.
[4] Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one 10, 7 (2015), e0130140.
[5] Nadia Burkart and Marco F. Huber. 2021. A Survey on the Explainability of Supervised Machine Learning. J. Artif. Intell. Res. 70 (2021), 245–317.
[6] Shyam R. Chidamber and Chris F. Kemerer. 1994. A Metrics Suite for Object Oriented Design. IEEE Trans. Software Eng. 20, 6 (1994), 476–493.
[7] Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of the 19th Conference on Empirical Methods in Natural Language Processing (EMNLP). ACL, 1724–1734.
[8] Jürgen Cito, Isil Dillig, Seohyun Kim, Vijayaraghavan Murali, and Satish Chandra. 2021. Explaining mispredictions of machine learning models using rule induction. In Proceedings of the 29th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). ACM, 716–727.
[9] Jürgen Cito, Isil Dillig, Vijayaraghavan Murali, and Satish Chandra. 2022. Counterfactual Explanations for Models of Code. In Proceedings of the 44th IEEE/ACM International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). IEEE, 125–134.
[10] Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement 20, 1 (1960), 37–46.
[11] Hoa Khanh Dam, Truyen Tran, and Aditya Ghose. 2018. Explainable software analytics. In Proceedings of the 40th International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER). ACM, 53–56.
[12] Raphaël Dang-Nhu. 2020. PLANS: Neuro-Symbolic Program Learning from Videos. In Proceedings of the 34th Annual Conference on Neural Information Processing Systems (NeurIPS).
[13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Association for Computational Linguistics, 4171–4186.
[14] Jianshu Ding, Guisheng Fan, Huiqun Yu, and Zijie Huang. 2021. Automatic Identification of High Impact Bug Report by Test Smells of Textual Similar Bug Reports. In Proceedings of the 21st IEEE International Conference on Software Quality, Reliability and Security (QRS). IEEE, 446–457.
[15] Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Xiaomin Wu, Meng Zhang, Qingjun Chen, Xin Gao, Xuedong Gao, Hao Fan, Saravan Rajmohan, Qingwei Lin, and Dongmei Zhang. 2023. TraceDiag: Adaptive, Interpretable, and Efficient Root Cause Analysis on Large-Scale Microservice Systems. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). ACM, 1762–1773.
[16] Yi Ding, Ahsan Pervaiz, Michael Carbin, and Henry Hoffmann. 2021. Generalizable and interpretable learning for configuration extrapolation. In Proceedings of the 29th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). ACM, 728–740.
[17] Geanderson Esteves dos Santos, Eduardo Figueiredo, Adriano Veloso, Markos Viggiato, and Nivio Ziviani. 2020. Understanding machine learning software defect predictions. Autom. Softw. Eng. 27, 3 (2020), 369–392.
[18] Finale Doshi-Velez and Been Kim. 2018. Considerations for evaluation and generalization in interpretable machine learning. Explainable and interpretable models in computer vision and machine learning (2018), 3–17.
[19] Finale Doshi-Velez and Been Kim. 2018. Towards A Rigorous Science of Interpretable Machine Learning. arXiv preprint arXiv:1702.08608 (2018).
[20] Rudresh Dwivedi, Devam Dave, Het Naik, Smiti Singhal, Omer F. Rana, Pankesh Patel, Bin Qian, Zhenyu Wen, Tejal Shah, Graham Morgan, and Rajiv Ranjan. 2023. Explainable AI (XAI): Core Ideas, Techniques, and Solutions. ACM Comput. Surv. 55, 9 (2023), 194:1–194:33.
[21] Kevin Ellis, Catherine Wong, Maxwell I. Nye, Mathias Sablé-Meyer, Lucas Morales, Luke B. Hewitt, Luc Cary, Armando Solar-Lezama, and Joshua B. Tenenbaum. 2021. DreamCoder: bootstrapping inductive program synthesis with wake-sleep library learning. In Proceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation (PLDI). ACM, 835–850.
[22] Jiahao Fan, Yi Li, Shaohua Wang, and Tien N. Nguyen. 2020. A C/C++ Code Vulnerability Dataset with Code Changes and CVE Summaries. In Proceedings of the 17th International Conference on Mining Software Repositories (MSR). ACM, 508–512.
[23] Julian J. Faraway. 2004. Linear models with R. Chapman and Hall/CRC.
[24] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. arXiv preprint arXiv:2002.08155 (2020).
[25] Jerome H. Friedman and Bogdan E. Popescu. 2008. Predictive learning via rule ensembles. (2008).
[26] Michael Fu, Van Nguyen, Chakkrit Tantithamthavorn, Trung Le, and Dinh Phung. 2023. VulExplainer: A Transformer-Based Hierarchical Distillation for Explaining Vulnerability Types. IEEE Trans. Software Eng. 49, 10 (2023), 4550–4565.
[27] Michael Fu and Chakkrit Tantithamthavorn. 2023. GPT2SP: A Transformer-Based Agile Story Point Estimation Approach. IEEE Trans. Software Eng. 49, 2 (2023), 611–625.
[28] Yuxiang Gao, Yi Zhu, and Qiao Yu. 2022. Evaluating the effectiveness of local explanation methods on source code-based defect prediction models. In Proceedings of the 19th IEEE/ACM International Conference on Mining Software Repositories (MSR). ACM, 640–645.
[29] Yuxiang Gao, Yi Zhu, and Yu Zhao. 2022. Dealing with imbalanced data for interpretable defect prediction. Inf. Softw. Technol. 151 (2022), 107016.
[30] Mingyang Geng, Shangwen Wang, Dezun Dong, Haotian Wang, Shaomeng Cao, Kechi Zhang, and Zhi Jin. 2023. Interpretation-based Code Summarization. In Proceedings of the 31st IEEE/ACM International Conference on Program Comprehension (ICPC). IEEE, 113–124.
[31] Jiri Gesi, Xinyun Shen, Yunfan Geng, Qihong Chen, and Iftekhar Ahmed. 2023. Leveraging Feature Bias for Scalable Misprediction Explanation of Machine Learning Models. In Proceedings of the 45th IEEE/ACM International Conference on Software Engineering (ICSE). IEEE, 1559–1570.
[32] Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2019. A Survey of Methods for Explaining Black Box Models. ACM Comput. Surv. 51, 5 (2019), 93:1–93:42.
[33] Daya Guo, Shuai Lu, Nan Duan, Yanlin Wang, Ming Zhou, and Jian Yin. 2022. UniXcoder: Unified Cross-Modal Pre-training for Code Representation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL). Association for Computational Linguistics, 7212–7225.
[34] Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin B. Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, and Ming Zhou. 2021. GraphCodeBERT: Pre-training Code Representations with Data Flow. In Proceedings of the 9th International Conference on Learning Representations (ICLR).
[35] Md Imran Hossain, Ghada Zamzmi, Peter R. Mouton, Md Sirajus Salekin, Yu Sun, and Dmitry Goldgof. 2023. Explainable AI for Medical Data: Current Methods, Limitations, and Future Directions. ACM Comput. Surv. (2023).
[36] Yutao Hu, Suyuan Wang, Wenke Li, Junru Peng, Yueming Wu, Deqing Zou, and Hai Jin. 2023. Interpreters for GNN-Based Vulnerability Detection: Are We There Yet?. In Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA). ACM, 1407–1419.
[37] Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. CodeSearchNet challenge: Evaluating the state of semantic code search. arXiv preprint arXiv:1909.09436 (2019).
[38] Nan Jiang, Thibaud Lutellier, and Lin Tan. 2021. CURE: Code-Aware Neural Machine Translation for Automatic Program Repair. In Proceedings of the 43rd IEEE/ACM International Conference on Software Engineering (ICSE). IEEE, 1161–1173.
[39] Shuyao Jiang, Jiacheng Shen, Shengnan Wu, Yu Cai, Yue Yu, and Yangfan Zhou. 2023. Towards Usable Neural Comment Generation via Code-Comment Linkage Interpretation: Method and Empirical Study. IEEE Trans. Software Eng. 49, 4 (2023), 2239–2254.
[40] Jirayus Jiarpakdee, Chakkrit Kla Tantithamthavorn, Hoa Khanh Dam, and John C. Grundy. 2022. An Empirical Study of Model-Agnostic Techniques for Defect Prediction Models. IEEE Trans. Software Eng. 48, 2 (2022), 166–185.
[41] Wenjun Ke, Chao Wu, Xiufeng Fu, Chen Gao, and Yinyi Song. 2020. Interpretable Test Case Recommendation based on Knowledge Graph. In Proceedings of the 20th IEEE International Conference on Software Quality, Reliability and Security (QRS). IEEE, 489–496.
[42] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NeurIPS). 1106–1114.
[43] Johannes Lampel, Sascha Just, Sven Apel, and Andreas Zeller. 2021.
When life gives you oranges: detecting and diagnosing intermittent job failures at Mozilla. In Proceedings of the 29th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). ACM, 1381–1392.
[44] Gichan Lee and Scott Uk-Jin Lee. 2023. An Empirical Comparison of Model-Agnostic Techniques for Defect Prediction Models. In Proceedings of the 30th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER). IEEE, 179–189.
[45] Mingjie Li, Wenjia Cai, Rui Liu, Yuetian Weng, Xiaoyun Zhao, Cong Wang, Xin Chen, Zhong Liu, Caineng Pan, Mengke Li, Yingfeng Zheng, Yizhi Liu, Flora D. Salim, Karin Verspoor, Xiaodan Liang, and Xiaojun Chang. 2021. FFA-IR: Towards an Explainable and Reliable Medical Report Generation Benchmark. In Proceedings of the 1st Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks).
[46] Rongfan Li, Bihuan Chen, Fengyi Zhang, Chao Sun, and Xin Peng. 2022. Detecting Runtime Exceptions by Deep Code Representation Learning with Attention-Based Graph Neural Networks. In Proceedings of the 29th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER). IEEE, 373–384.
[47] Yi Li, Shaohua Wang, and Tien N. Nguyen. 2021. Vulnerability detection with fine-grained interpretations. In Proceedings of the 29th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). ACM, 292–303.
[48] Zeyan Li, Nengwen Zhao, Mingjie Li, Xianglin Lu, Lixin Wang, Dongdong Chang, Xiaohui Nie, Li Cao, Wenchi Zhang, Kaixin Sui, Yanhua Wang, Xu Du, Guoqiang Duan, and Dan Pei. 2022. Actionable and interpretable fault localization for recurring failures in online service systems. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). ACM, 996–1008.
[49] Hui Liu, Mingzhu Shen, Jiaqi Zhu, Nan Niu, Ge Li, and Lu Zhang. 2022. Deep Learning Based Program Generation From Requirements Text: Are We There Yet? IEEE Trans. Software Eng. 48, 4 (2022), 1268–1289.
[50] Mingwei Liu, Simin Yu, Xin Peng, Xueyin Du, Tianyong Yang, Huanjun Xu, and Gaoyang Zhang. 2023. Knowledge Graph based Explainable Question Retrieval for Programming Tasks. In Proceedings of the 39th IEEE International Conference on Software Maintenance and Evolution (ICSME). IEEE, 123–135.
[51] Yue Liu, Chakkrit Tantithamthavorn, Li Li, and Yepang Liu. 2022. Explainable AI for Android Malware Detection: Towards Understanding Why the Models Perform So Well?. In Proceedings of the IEEE 33rd International Symposium on Software Reliability Engineering (ISSRE). IEEE, 169–180.
[52] Zhenguang Liu, Peng Qian, Xiang Wang, Lei Zhu, Qinming He, and Shouling Ji. 2021. Smart Contract Vulnerability Detection: From Pure Neural Network to Interpretable Graph Feature and Expert Pattern Fusion. In Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI). 2751–2759.
[53] Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin B. Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, Ming Gong, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, and Shujie Liu. 2021. CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1 (NeurIPS Datasets and Benchmarks 2021).
[54] Scott M. Lundberg, Gabriel G. Erion, Hugh Chen, Alex J. DeGrave, Jordan M. Prutkin, Bala Nair, Ronit Katz, Jonathan Himmelfarb, Nisha Bansal, and Su-In Lee. 2020. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2, 1 (2020), 56–67.
[55] Scott M. Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NeurIPS). 4765–4774.
[56] Thibaud Lutellier, Hung Viet Pham, Lawrence Pang, Yitong Li, Moshi Wei, and Lin Tan. 2020. CoCoNuT: combining context-aware neural translation models using ensemble for program repair. In Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA). ACM, 101–114.
[57] Parvez Mahbub, Ohiduzzaman Shuvo, and Mohammad Masudur Rahman. 2023. Explaining Software Bugs Leveraging Code Structures in Neural Machine Translation. In Proceedings of the 45th IEEE/ACM International Conference on Software Engineering (ICSE). IEEE, 640–652.
[58] Vadim Markovtsev, Waren Long, Hugo Mougard, Konstantin Slavnov, and Egor Bulychev. 2019. STYLE-ANALYZER: fixing code style inconsistencies with interpretable unsupervised algorithms. In Proceedings of the 16th International Conference on Mining Software Repositories (MSR). IEEE/ACM, 468–478.
[59] Pablo Messina, Pablo Pino, Denis Parra, Alvaro Soto, Cecilia Besa, Sergio Uribe, Marcelo E. Andia, Cristian Tejos, Claudia Prieto, and Daniel Capurro. 2022. A Survey on Deep Learning and Explainability for Automatic Report Generation from Medical Images. ACM Comput. Surv. 54, 10s (2022), 203:1–203:40.
[60] Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 267 (2019), 1–38.
[61] Ahmad Haji Mohammadkhani, Nitin Sai Bommi, Mariem Daboussi, Onkar Sabnis, Chakkrit Tantithamthavorn, and Hadi Hemmati. 2023. A Systematic Literature Review of Explainable AI for Software Engineering. arXiv preprint arXiv:2302.06065 (2023).
[62] Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. 2017. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognit. 65 (2017), 211–222.
[63] Toshiki Mori and Naoshi Uchihira. 2019. Balancing the trade-off between accuracy and interpretability in software defect prediction. Empir. Softw. Eng. 24, 2 (2019), 779–825.
[64] Tamara Munzner. 2014. Visualization analysis and design. CRC Press.
[65] Azqa Nadeem, Daniël Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, and Sicco Verwer. 2023. SoK: Explainable Machine Learning for Computer Security Applications. In Proceedings of the 8th IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 221–240.
[66] Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jörg Schlötterer, Maurice van Keulen, and Christin Seifert. 2023. From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI. ACM Comput. Surv. 55, 13s (2023), 295:1–295:42.
[67] Amirmohammad Nazari, Yifei Huang, Roopsha Samanta, Arjun Radhakrishna, and Mukund Raghothaman. 2023. Explainable Program Synthesis by Localizing Specifications. In Proceedings of the 38th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA). 2171–2195.
[68] Chao Ni, Xin Yin, Kaiwen Yang, Dehai Zhao, Zhenchang Xing, and Xin Xia. 2023. Distinguishing Look-Alike Innocent and Vulnerable Code by Subtle Semantic Representation Learning and Explanation. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). ACM, 1611–1622.
[69] OpenAI. 2022. ChatGPT: Optimizing language models for dialogue. https://chat.openai.com.
[70] Ipek Ozkaya and James Ivers. 2019. AI for Software Engineering. In Proceedings of the SEI Educator's Workshop.
[71] Matteo Paltenghi and Michael Pradel. 2021. Thinking Like a Developer? Comparing the Attention of Humans with Neural Models of Code. In Proceedings of the 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 867–879.
[72] Daniel Park. 2023. Open-LLM-Leaderboard-Report. https://github.com/dsdanielpark/Open-LLM-Leaderboard-Report
[73] Cristiano Patrício, João C. Neves, and Luís F. Teixeira. 2023. Explainable Deep Learning Methods in Medical Image Classification: A Survey. ACM Comput. Surv. 56, 4 (2023), 85:1–85:41.
[74] Chanathip Pornprasit, Chakkrit Tantithamthavorn, Jirayus Jiarpakdee, Michael Fu, and Patanamon Thongtanunam. 2021. PyExplainer: Explaining the Predictions of Just-In-Time Defect Models. In Proceedings of the 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 407–418.
[75] Md. Rafiqul Islam Rabin, Vincent J. Hellendoorn, and Mohammad Amin Alipour. 2021. Understanding neural code intelligence through program simplification. In Proceedings of the 29th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). ACM, 441–452.
[76] Dilini Rajapaksha, Chakkrit Tantithamthavorn, Jirayus Jiarpakdee, Christoph Bergmeir, John Grundy, and Wray L. Buntine. 2022. SQAPlanner: Generating Data-Informed Software Quality Improvement Plans. IEEE Trans. Software Eng. 48, 8 (2022), 2814–2835.
[77] Xiaoxue Ren, Zhenchang Xing, Xin Xia, David Lo, Xinyu Wang, and John Grundy. 2019. Neural Network-based Detection of Self-Admitted Technical Debt: From Performance to Explainability. ACM Trans. Softw. Eng. Methodol. 28, 3 (2019), 15.
[78] Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD). ACM, 1135–1144.
[79] Daniel Rodríguez-Cárdenas, David N. Palacio, Dipin Khati, Henry Burke, and Denys Poshyvanyk. 2023. Benchmarking Causal Study to Interpret Large Language Models for Source Code. In Proceedings of the 39th IEEE International Conference on Software Maintenance and Evolution (ICSME). IEEE, 329–334.
[80] Cynthia Rudin. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence 1, 5 (2019), 206–215.
[81] Cynthia Rudin and Joanna Radin. 2019. Why are we using black box models in AI when we don't need to? A lesson from an explainable AI competition. Harvard Data Science Review 1, 2 (2019), 1–9.
[82] Nayan B. Ruparelia. 2010. Software development lifecycle models. ACM SIGSOFT Softw. Eng. Notes 35, 3 (2010), 8–13.
[83] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2020. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 128, 2 (2020), 336–359.
[84] Emad Shihab, Akinori Ihara, Yasutaka Kamei, Walid M. Ibrahim, Masao Ohira, Bram Adams, Ahmed E. Hassan, and Ken-ichi Matsumoto. 2013. Studying re-opened bugs in open source software. Empir. Softw. Eng. 18, 5 (2013), 1005–1042.
[85] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning Important Features Through Propagating Activation Differences. In Proceedings of the 34th International Conference on Machine Learning (ICML), Vol. 70. PMLR, 3145–3153.
[86] Timo Speith. 2022. A review of taxonomies of explainable artificial intelligence (XAI) methods. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT). 2239–2250.
[87] Thilo Spinner, Udo Schlegel, Hanna Schäfer, and Mennatallah El-Assady. 2020. explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning. IEEE Trans. Vis. Comput. Graph. 26, 1 (2020), 1064–1074.
[88] M. Srinivas and Lalit M. Patnaik. 1994. Adaptive probabilities of crossover and mutation in genetic algorithms. IEEE Trans. Syst. Man Cybern. 24, 4 (1994), 656–667.
[89] Mateusz Staniak and Przemyslaw Biecek. 2018. Explanations of Model Predictions with live and breakDown Packages. R J. 10, 2 (2018), 395.
[90] Jiamou Sun, Zhenchang Xing, Qinghua Lu, Xiwei Xu, Liming Zhu, Thong Hoang, and Dehai Zhao. 2023. Silent Vulnerable Dependency Alert Prediction with Vulnerability Key Aspect Explanation. In Proceedings of the 45th IEEE/ACM International Conference on Software Engineering (ICSE). IEEE, 970–982.
[91] Ruoxi Sun, Minhui Xue, Gareth Tyson, Tian Dong, Shaofeng Li, Shuo Wang, Haojin Zhu, Seyit Camtepe, and Surya Nepal. 2023. Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). ACM, 1573–1585.
[92] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic Attribution for Deep Networks. In Proceedings of the 34th International Conference on Machine Learning (ICML), Vol. 70. PMLR, 3319–3328.
[93] Sahil Suneja, Yufan Zhuang, Yunhui Zheng, Jim Laredo, Alessandro Morari, and Udayan Khurana. 2023. Incorporating Signal Awareness in Source Code Modeling: an Application to Vulnerability Detection. ACM Trans. Softw. Eng. Methodol. 32, 6 (2023), 145:1–145:40.
[94] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Proceedings of the 28th Annual Conference on Neural Information Processing Systems (NeurIPS). 3104–3112.
[95] Michael van Lent, William Fisher, and Michael Mancuso. 2004. An Explainable Artificial Intelligence System for Small-unit Tactical Behavior. In Proceedings of the 19th National Conference on Artificial Intelligence, 16th Conference on Innovative Applications of Artificial Intelligence (AAAI/IAAI).
AAAI Press / The MIT Press, 900–907.
[96] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NeurIPS). 5998–6008.
[97] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph Attention Networks. In Proceedings of the 6th International Conference on Learning Representations (ICLR).
[98] Yao Wan, Yang He, Zhangqian Bi, Jianguo Zhang, Hongyu Zhang, Yulei Sui, Guandong Xu, Hai Jin, and Philip S. Yu. 2023. Deep Learning for Code Intelligence: Survey, Benchmark and Toolkit. arXiv preprint arXiv:2401.00288 (2023).
[99] Yao Wan, Wei Zhao, Hongyu Zhang, Yulei Sui, Guandong Xu, and Hai Jin. 2022. What Do They Capture? - A Structural Analysis of Pre-Trained Language Models for Source Code. In Proceedings of the 44th IEEE/ACM International Conference on Software Engineering (ICSE). ACM, 2377–2388.
[100] Chong Wang, Xin Peng, Zhenchang Xing, Yue Zhang, Mingwei Liu, Rong Luo, and Xiujie Meng. 2023. XCoS: Explainable Code Search Based on Query Scoping and Knowledge Graph. ACM Trans. Softw. Eng. Methodol. 32, 6 (2023), 140:1–140:28.
[101] Simin Wang, Liguo Huang, Amiao Gao, Jidong Ge, Tengfei Zhang, Haitao Feng, Ishna Satyarth, Ming Li, He Zhang, and Vincent Ng. 2023. Machine/Deep Learning for Software Engineering: A Systematic Literature Review. IEEE Trans. Software Eng. 49, 3 (2023), 1188–1231.
[102] Xin Wang, Jin Liu, Li Li, Xiao Chen, Xiao Liu, and Hao Wu. 2020. Detecting and Explaining Self-Admitted Technical Debts with Attention-based Neural Networks. In Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 871–882.
[103] Yu Wang, Ke Wang, and Linzhang Wang. 2023. An Explanation Method for Models of Code. In Proceedings of the 38th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA). 801–827.
[104] Yue Wang, Weishi Wang, Shafiq R. Joty, and Steven C. H. Hoi. 2021.
CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation. In Proceedings of the 26th Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 8696–8708.
[105] Yuekun Wang, Yuhang Ye, Yueming Wu, Weiwei Zhang, Yinxing Xue, and Yang Liu. 2023. Comparison and Evaluation of Clone Detection Techniques with Different Code Representations. In Proceedings of the 45th IEEE/ACM International Conference on Software Engineering (ICSE). IEEE, 332–344.
[106] Cody Watson, Nathan Cooper, David Nader-Palacio, Kevin Moran, and Denys Poshyvanyk. 2022. A Systematic Literature Review on the Use of Deep Learning in Software Engineering Research. ACM Trans. Softw. Eng. Methodol. 31, 2 (2022), 32:1–32:58.
[107] Supatsara Wattanakriengkrai, Patanamon Thongtanunam, Chakkrit Tantithamthavorn, Hideaki Hata, and Kenichi Matsumoto. 2022. Predicting Defective Lines Using a Model-Agnostic Technique. IEEE Trans. Software Eng. 48, 5 (2022), 1480–1496.
[108] Katharina Weitz, Dominik Schiller, Ruben Schlagowski, Tobias Huber, and Elisabeth André. 2019. "Do you trust me?": Increasing User-Trust by Integrating Virtual Agents in Explainable AI Interaction Design. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents (IVA). ACM, 7–9.
[109] Martin White, Michele Tufano, Christopher Vendome, and Denys Poshyvanyk. 2016. Deep learning code fragments for code clone detection. In Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering (ASE). ACM, 87–98.
[110] Ratnadira Widyasari, Gede Artha Azriadi Prana, Stefanus A. Haryono, Yuan Tian, Hafil Noer Zachiary, and David Lo. 2022. XAI4FL: enhancing spectrum-based fault localization with explainable artificial intelligence. In Proceedings of the 30th IEEE/ACM International Conference on Program Comprehension (ICPC). ACM, 499–510.
[111] Jan De Winter. 2010. Explanations in Software Engineering: The Pragmatic Point of View. Minds Mach. 20, 2 (2010), 277–289.
[112] Claes Wohlin. 2014. Guidelines for snowballing in systematic literature studies and a replication in software engineering. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering (EASE). ACM, 38:1–38:10.
[113] Bozhi Wu, Sen Chen, Cuiyun Gao, Lingling Fan, Yang Liu, Weiping Wen, and Michael R. Lyu. 2021. Why an Android App Is Classified as Malware: Toward Malware Classification Interpretation. ACM Trans. Softw. Eng. Methodol. 30, 2 (2021), 21:1–21:29.
[114] Wenxin Xiao, Hao He, Weiwei Xu, Yuxia Zhang, and Minghui Zhou. 2023. How Early Participation Determines Long-Term Sustained Activity in GitHub Projects?. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). ACM, 29–41.
[115] Xinqiang Xie, Xiaochun Yang, Bin Wang, and Qiang He. 2022. DevRec: Multi-Relationship Embedded Software Developer Recommendation. IEEE Trans. Software Eng. 48, 11 (2022), 4357–4379.
[116] Fengyu Yang, Guangdong Zeng, Fa Zhong, Wei Zheng, and Peng Xiao. 2023. Interpretable Software Defect Prediction Incorporating Multiple Rules. In Proceedings of the 30th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER). IEEE, 940–947.
[117] Jia Yang, Cai Fu, Fengyang Deng, Ming Wen, Xiaowei Guo, and Chuanhao Wan. 2023. Toward Interpretable Graph Tensor Convolution Neural Network for Code Semantics Embedding. ACM Trans. Softw. Eng. Methodol. 32, 5 (2023), 115:1–115:40.
[118] Lanxin Yang, Jinwei Xu, Yifan Zhang, He Zhang, and Alberto Bacchelli. 2023. EvaCRC: Evaluating Code Review Comments. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). ACM, 275–287.
[119] Yanming Yang, Xin Xia, David Lo, and John C. Grundy. 2022. A Survey on Deep Learning for Software Engineering. ACM Comput. Surv. 54, 10s (2022), 206:1–206:73.
[120] Suraj Yatish, Jirayus Jiarpakdee, Patanamon Thongtanunam, and Chakkrit Tantithamthavorn. 2019. Mining software defects: should we consider affected releases?. In Proceedings of the 41st International Conference on Software Engineering (ICSE). IEEE/ACM, 654–665.
[121] Wei Jie Yeo, Wihan van der Heever, Rui Mao, Erik Cambria, Ranjan Satapathy, and Gianmarco Mengaldo. 2023. A Comprehensive Review on Financial Explainable AI. arXiv preprint arXiv:2309.11960 (2023).
[122] Xin Yin, Chongyang Shi, and Shuxin Zhao. 2021. Local and Global Feature Based Explainable Feature Envy Detection. In Proceedings of the IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC). IEEE, 942–951.
[123] Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. 2019. GNNExplainer: Generating Explanations for Graph Neural Networks. In Proceedings of the 33rd Annual Conference on Neural Information Processing Systems (NeurIPS). 9240–9251.
[124] HaoYu,YilingLou,KeSun,DezhiRan,TaoXie,DanHao,YingLi,GeLi,andQianxiangWang.2022. AutomatedAssertionGenerationvia InformationRetrievalandItsIntegrationwithDeeplearning.InProceedingsofthe44thIEEE/ACM44thInternationalConferenceonSoftware Engineering(ICSE).ACM,163–174. [125] ÉloiZablocki,HédiBen-Younes,PatrickPérez,andMatthieuCord.2022. ExplainabilityofDeepVision-BasedAutonomousDrivingSystems: ReviewandChallenges.Int.J.Comput.Vis.130,10(2022),2425–2452. [126] MatthewD.ZeilerandRobFergus.2014.VisualizingandUnderstandingConvolutionalNetworks.InProceedinsofthe13thEuropeanConferenceon ComputerVision(ECCV)(LectureNotesinComputerScience,Vol.8689).Springer,818–833. [127] AndreasZellerandRalfHildebrandt.2002.SimplifyingandIsolatingFailure-InducingInput.IEEETrans.SoftwareEng.28,2(2002),183–200. [128] HeZhang,MuhammadAliBabar,andPaoloTell.2011. Identifyingrelevantstudiesinsoftwareengineering. Inf.Softw.Technol.53,6(2011), 625–637. |
[129] JianZhang,ShangqingLiu,XuWang,TianlinLi,andYangLiu.2023.LearningtoLocateandDescribeVulnerabilities.InProceedingsofthe38th IEEE/ACMInternationalConferenceonAutomatedSoftwareEngineering(ASE).IEEE,332–344. [130] ZhuoZhang,YanLei,MengYan,YueYu,JiachiChen,ShangwenWang,andXiaoguangMao.2022. ReentrancyVulnerabilityDetectionand Localization:ADeepLearningBasedTwo-phaseApproach.InProceedingsofthe37thIEEE/ACMInternationalConferenceonAutomatedSoftware Engineering(ASE).ACM,83:1–83:13. [131] NengwenZhao,JunjieChen,ZhouWang,XiaoPeng,GangWang,YongWu,FangZhou,ZhenFeng,XiaohuiNie,WenchiZhang,KaixinSui, andDanPei.2020.Real-timeincidentpredictionforonlineservicesystems.InProceedingsofthe28thACMJointEuropeanSoftwareEngineering ConferenceandSymposiumontheFoundationsofSoftwareEngineering(ESEC/FSE).ACM,315–326. [132] NengwenZhao,HonglinWang,ZeyanLi,XiaoPeng,GangWang,ZhuPan,YongWu,ZhenFeng,XidaoWen,WenchiZhang,KaixinSui,andDan Pei.2021.Anempiricalinvestigationofpracticalloganomalydetectionforonlineservicesystems.InProceedingsofthe29thACMJointEuropean SoftwareEngineeringConferenceandSymposiumontheFoundationsofSoftwareEngineering(ESEC/FSE).ACM,1404–1415. [133] WeiZheng,TianrenShen,XiangChen,andPeiranDeng.2022.InterpretabilityapplicationoftheJust-in-Timesoftwaredefectpredictionmodel.J. Syst.Softw.188(2022),111245. [134] BoleiZhou,AdityaKhosla,ÀgataLapedriza,AudeOliva,andAntonioTorralba.2016.LearningDeepFeaturesforDiscriminativeLocalization.In Proceedingsofthe26IEEEConferenceonComputerVisionandPatternRecognition(CVPR).IEEEComputerSociety,2921–2929. [135] JiayuanZhou,MichaelPacheco,JinfuChen,XingHu,XinXia,DavidLo,andAhmedE.Hassan.2023.CoLeFunDa:ExplainableSilentVulnerability FixIdentification.InProceedingsofthe45thIEEE/ACMInternationalConferenceonSoftwareEngineering(ICSE).IEEE,2565–2577. [136] JuliaElZiniandMarietteAwad.2023. OntheExplainabilityofNaturalLanguageProcessingDeepModels. ACMComput.Surv.55,5(2023), 103:1–103:31. 
[137] DeqingZou,YaweiZhu,ShouhuaiXu,ZhenLi,HaiJin,andHengkaiYe.2021.InterpretingDeepLearning-basedVulnerabilityDetectorPredictions BasedonHeuristicSearching.ACMTrans.Softw.Eng.Methodol.30,2(2021),23:1–23:31. Received20February2007;revised12March2009;accepted5June2009 ManuscriptsubmittedtoACM |
arXiv:2401.14886v1 [cs.CR] 26 Jan 2024

Coca: Improving and Explaining Graph Neural Network-Based Vulnerability Detection Systems

Sicong Cao, Xiaobing Sun*, Xiaoxue Wu
Yangzhou University, Yangzhou, China
DX120210088@yzu.edu.cn, xbsun@yzu.edu.cn, xiaoxuewu@yzu.edu.cn

David Lo
Singapore Management University, Singapore
davidlo@smu.edu.sg

Lili Bo
Yangzhou University, Yangzhou, China; Yunnan Key Laboratory of Software Engineering, Yunnan, China
lilibo@yzu.edu.cn

Bin Li
Yangzhou University, Yangzhou, China
lb@yzu.edu.cn

Wei Liu
Yangzhou University, Yangzhou, China
weiliu@yzu.edu.cn

* Xiaobing Sun is the corresponding author.

ABSTRACT
Recently, Graph Neural Network (GNN)-based vulnerability detection systems have achieved remarkable success. However, the lack of explainability poses a critical challenge to deploying black-box models in security-related domains. For this reason, several approaches have been proposed to explain the decision logic of the detection model by providing a set of crucial statements positively contributing to its predictions. Unfortunately, due to weakly-robust detection models and suboptimal explanation strategies, they run the danger of revealing spurious correlations and suffer from a redundancy issue.

In this paper, we propose Coca, a general framework aiming to 1) enhance the robustness of existing GNN-based vulnerability detection models to avoid spurious explanations; and 2) provide both concise and effective explanations to reason about the detected vulnerabilities. Coca consists of two core parts referred to as Trainer and Explainer. The former aims to train a detection model which is robust to random perturbation based on combinatorial contrastive learning, while the latter builds an explainer to derive the crucial code statements that are most decisive to the detected vulnerability via dual-view causal inference as explanations. We apply Coca over three typical GNN-based vulnerability detectors. Experimental results show that Coca can effectively mitigate the spurious correlation issue and provide more useful, high-quality explanations.

CCS CONCEPTS
• Security and privacy → Software security engineering; • Software and its engineering → Software maintenance tools.

KEYWORDS
Contrastive Learning, Causal Inference, Explainability

ACM Reference Format:
Sicong Cao, Xiaobing Sun, Xiaoxue Wu, David Lo, Lili Bo, Bin Li, and Wei Liu. 2024. Coca: Improving and Explaining Graph Neural Network-Based Vulnerability Detection Systems. In 2024 IEEE/ACM 46th International Conference on Software Engineering (ICSE '24), April 14-20, 2024, Lisbon, Portugal. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3597503.3639168

1 INTRODUCTION
Software vulnerabilities, sometimes called security bugs, are weaknesses in an information system, security procedures, internal controls, or implementations that could be exploited by a threat actor for a variety of malicious ends [36]. As such weaknesses are unavoidable during the design and implementation of software, detecting vulnerabilities in the early stages of the software life cycle is critically important [60, 70].
ICSE '24, April 14-20, 2024, Lisbon, Portugal. © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 979-8-4007-0217-4/24/04. https://doi.org/10.1145/3597503.3639168

Benefiting from the great success of Deep Learning (DL) in code-centric software engineering tasks, an increasing number of learning-based vulnerability detection approaches [6, 21, 42, 43] have been proposed. Compared to conventional approaches [5, 9, 12, 26] that heavily rely on hand-crafted vulnerability specifications, DL-based approaches focus on constructing complex Neural Network (NN) models to automatically learn implicit vulnerability patterns from source code without human intervention. Recently, inspired by their ability to effectively capture structured semantic information (e.g., control- and data-flow) of source code, Graph Neural Networks (GNN) have been widely adopted by state-of-the-art neural vulnerability detectors [11, 40, 67, 80].

While they have demonstrated superior performance, due to the black-box nature of NN models, GNN-based approaches fall short in the capability to explain why a given code is predicted as vulnerable [11, 59]. Such a lack of explainability could hinder their adoption when applied to real-world usage as substitutes for traditional source code analyzers [20]. To reveal the decision logic behind the binary detection results (vulnerable or not), several approaches have been proposed to provide additional explanatory information. These efforts can be broadly cast into two categories, namely Global Explainability and Local Explainability. Global explanation approaches leverage the explainability built into specific architectures to understand what features influence the predictions of the models. A common self-explaining approach is the attention mechanism [66], which uses the weights of attention layers inside the network to determine the importance of each input token. For example, LineVul [28] leverages the self-attention mechanism inside the Transformer architecture to locate vulnerable statements for explanation. However, the global explanation is derived from the training data and thus may not be accurate for a particular decision on an instance [55]. A more popular approach is local explanation [31, 83], which adopts perturbation-based mechanisms such as LEMNA [30] to provide justifications for individual predictions. The high-level idea behind this approach is to search for important features positively contributing to the model's prediction by removing or replacing a subset of the features in the input space. IVDetect [40] leverages GNNExplainer [75] to simplify the target instance to a minimal PDG sub-graph consisting of a set of crucial statements along with program dependencies while retaining the initial model prediction.

However, these approaches face two challenges that limit their potential. Firstly, perturbation-based explanation techniques assume that the detection model is quite robust, i.e., these removed/preserved statements are consistent with the ground truth. Unfortunately, as reported in recent works [56, 74], simple code edits (e.g., variable renaming) can easily mislead NN models to alter their predictions. As a result, the weak robustness of detection models could lead to spurious explanations even if the vulnerable code is correctly identified. Secondly, most prior methods focus on generating explanations from the perspective of factual reasoning [40, 56, 61], i.e., providing a subset of the input program for which models make the same prediction as they do for the original one. However, such extracted explanations may not be concise enough, covering many redundant statements which are benign but highly relevant to the model's prediction. Therefore, it still requires extensive manpower to analyze and inspect numerous explanation results.

Our Work. To tackle these challenges, we propose Coca, a novel framework to improve and explain GNN-based vulnerability detection systems via combinatorial Contrastive learning and dual-view Causal inference. The key insights underlying our approach include (❶) enhancing the robustness of existing neural vulnerability detection models to avoid spurious explanations, as well as (❷) providing both concise (preserving a small fragment of code for manual review) and effective (covering as many truly vulnerable statements as possible) explanations. To this end, we develop two core parts in Coca referred to as Trainer (abbreviated as Coca_Tra) and Explainer (Coca_Exp for short).

Coca Design. In the model construction phase, Coca_Tra first applies six kinds of semantic-preserving transformations as data augmentation operators to generate diverse functionally equivalent variants for each code sample in the dataset. Then, given an off-the-shelf GNN-based vulnerability detection model, Coca_Tra combines self-supervised with supervised contrastive learning to learn robust feature representations by grouping similar samples while pushing away dissimilar ones. These robust feature representations will be fed into the classifier to train a robustness-enhanced vulnerability detection model. In the vulnerability explanation phase, we propose a model-agnostic extension based on dual-view causal inference, called Coca_Exp, which integrates factual with counterfactual reasoning to derive the crucial code statements that are most decisive to the detected vulnerability as explanations.

Implementation and Evaluations. We provide the prototype implementation of Coca over three state-of-the-art GNN-based vulnerability detection approaches (Devign [80], ReVeal [11], and DeepWuKong [16]). We extensively evaluate our approach against representative baselines on a large-scale vulnerability dataset comprising well-labeled programs extracted from real-world mainstream projects. Experimental results show that Coca can effectively improve the vulnerability detection performance of existing NN models and provide high-quality explanations.

Contributions. This paper makes the following contributions:
• We uncover the spurious correlation and redundancy problems in existing GNN-based explainable vulnerability detectors, and point out that these two issues need to be treated together.
• We propose Coca¹, a novel framework for improving and explaining GNN-based vulnerability detection systems, in which Coca_Tra improves the robustness of detection models based on combinatorial contrastive learning to avoid spurious explanations, while Coca_Exp derives both concise and effective code statements as explanations via dual-view causal inference.
• We provide prototype implementations of Coca over three state-of-the-art GNN-based vulnerability detection approaches. The extensive experiments show the substantial improvements Coca brings in terms of detection capacity and explainability.

¹ https://github.com/CocaVul/Coca

2 BACKGROUND
2.1 Problem Formulation
Instead of exploring new models for more effective vulnerability detection, we focus on a more practical scenario, i.e., explaining the decision logic of off-the-shelf GNN-based vulnerability detection models in a post-hoc manner as an input code snippet is predicted as vulnerable. In particular, following the definition in recent works [33, 40], our problem is formalized as:

Definition 1 (Vulnerability Explanation). Given an input program $P = \{s_1, \cdots, s_m\}$ which is detected as vulnerable, the explanation is a set of crucial statements $\{s_i, \cdots, s_j\}$ ($1 \le i \le j \le m$) that are most relevant to the decision of the model, where $s_u$ ($i \le u \le j$) denotes the $u$-th statement in program $P$.

In other words, our goal turns to developing an explanation framework applicable to any GNN-based vulnerability detector, to provide not only binary results but also a few lines of code (i.e., a subset of the input program) as explanatory information, to help security practitioners understand why it is detected as vulnerable.

2.2 Contrastive Learning for Code
Contrastive learning (CL) has emerged as a promising method for learning better feature representations of code without supervision from labels [22, 34, 46, 79]. The goal of CL is to maximize the agreement between the original data and its positive data (an augmented version of the same one) and to minimize the agreement between the original data and its negative data in the vector space. Figure 1 presents the typical code-oriented CL pipeline. Unlabeled programs are first transformed into functionally equivalent (FE) variants via data augmentation. In this work, we apply the following six token- or statement-level augmentation operators introduced by prior works [10, 34, 46] to construct FE program variants:
• Function/Variable Renaming (FR/VR) substitutes the function/variable name with a random token extracted from the vocabulary set constructed on the pre-training dataset.
• Operand Swap (OS) swaps the operands of binary logical operations. In particular, the operator will also be changed to make sure the modified expression is the logical equivalent of the one before modification when we swap the operands of a logical operator.
• Statement Permutation (SP) randomly swaps two lines of statements that have no dependency (e.g., two consecutive declaration statements) on each other in a basic block in a function body.
• Loop Exchange (LX) replaces for loops with while loops or vice versa.
• Block Swap (BS) swaps the then block of a chosen if statement with the corresponding else block. We negate the original branching condition to preserve semantic equivalence.
• Switch to If (SI) replaces a switch statement in a function body with its equivalent if statement.

Then, these augmented variants are fed into the feature encoder $f_q$ (or $f_k$) with a projection head to produce better global program embeddings via minimizing the contrastive loss. A widely adopted contrastive loss in SE tasks is the Noise Contrastive Estimate (NCE) [13], which is computed as:

$\mathcal{L}_{NCE} = \frac{1}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} -\log \frac{\exp(z_i \cdot z_{j(i)}/\tau)}{\sum_{a \in \mathcal{A}(i)} \exp(z_i \cdot z_a/\tau)}$  (1)

where $z_i = g(f(\tilde{x}_i))$ represents the low-dimensional embedding of an arbitrary sample $\tilde{x}_i$ among the augmented variants, $j(i)$ is the index of the other view originating from the same source, $\tau \in \mathbb{R}^+$ is the temperature parameter to scale the loss, and $\mathcal{A}(i) \equiv \mathcal{B} \setminus \{i\}$.

Figure 1: Contrastive code representation learning pipeline.

2.3 Explanation for GNN-based Models
Although Graph Neural Network (GNN)-based code models have achieved remarkable success in a variety of SE tasks (e.g., code retrieval [58] and automated program repair [14]), the lack of explainability creates key barriers to their adoption in practice. Recently, several studies have attempted to explain the decisions of GNNs via factual reasoning [49, 75] or counterfactual reasoning [45, 47].

Factual Reasoning. Factual reasoning-based approaches focus on seeking a sub-graph with a sufficient set of edges/features that produces the same prediction as using the whole graph. Formally, given an input graph $\mathcal{G}_k = \{\mathcal{V}_k, \mathcal{E}_k\}$ with its label $\hat{y}_k$ predicted by the trained GNN model, the condition for factual reasoning can be produced as follows:

$\operatorname{argmax}_{\ell \in \mathcal{L}} P(\ell \mid A_k \odot M_k, X_k \odot F_k) = \hat{y}_k$  (2)

where $\mathcal{L}$ is the set of graph labels and $\odot$ denotes element-wise multiplication; $M_k \in \{0,1\}^{\mathcal{V}_k \times \mathcal{V}_k}$ represents the edge mask of $\mathcal{G}_k$'s adjacency matrix $A_k \in \{0,1\}^{\mathcal{V}_k \times \mathcal{V}_k}$, while $F_k \in \{0,1\}^{\mathcal{V}_k \times d}$ is the feature mask of $\mathcal{G}_k$'s node feature matrix $X_k \in \mathbb{R}^{\mathcal{V}_k \times d}$. $\mathcal{V}_k$ is the number of nodes in the $k$-th graph and $d$ is the dimension of node features.

Counterfactual Reasoning. Counterfactual reasoning-based approaches seek a necessary set of edges/features that lead to different predictions once they are removed. Similarly, the condition for counterfactual reasoning can be formulated as:

$\operatorname{argmax}_{\ell \in \mathcal{L}} P(\ell \mid A_k - A_k \odot M_k, X_k - X_k \odot F_k) \ne \hat{y}_k$  (3)

After optimization, the sub-graph $\mathcal{G}'_k$ will be $A_k \odot M_k$ with the sub-features $X_k \odot F_k$, which is the generated explanation for the prediction of $\mathcal{G}_k$. In our scenario, the extracted sub-graph $\mathcal{G}'_k$ will be further mapped to its corresponding code snippet as the explanation for GNN-based vulnerability detectors.

3 MOTIVATION
3.1 Special Concerns for DL-based Security Applications
In contrast to other domains, explanations for security systems should satisfy certain special requirements [25, 69]. In this work, we primarily focus on two aspects, i.e., effectiveness and conciseness.

Effectiveness. The main goal of an explanation approach is to uncover the decision logic of black-box models. Thus, the vulnerability explainer should be able to capture the most relevant features employed by detection models for prediction. For example, given a set of detected vulnerable code, it would be perfect if the provided explanations only covered vulnerability-related context without additional program statements [15].

Definition 2 (Effective Explanations). An explanation result is effective if the statements which describe the offending execution trace/context of the detected vulnerability are covered.

Conciseness. Picking up features/statements highly relevant to the model's prediction is a necessary prerequisite for good explanations. However, it may be difficult and time-consuming for security practitioners to understand and analyze numerous explanation results. Thus, narrowing down the scope of manual review is also important in practice.

Definition 3 (Concise Explanations). An explanation result is concise when it only contains a small number of crucial statements sufficient for security experts to understand the root cause of the detected vulnerability.

3.2 Why Not Fine-Grained Detectors?
Since vulnerability explanations are a set of crucial statements derived from the predictions of DL models, an intuitive solution is to construct a fine-grained model to locate vulnerability-related statements, as prior works do [7, 8, 32, 41]. However, the lack of large-scale, human-labeled datasets creates key barriers to the adoption of these statement-level approaches in practice. By contrast, we aim to seek a model-independent (or post-hoc) way to provide explanations, instead of replacing the models.

3.3 Why Not Existing Explainers?
Although the explainability of DL models has been extensively studied in non-security domains [27, 77], we argue that existing explanation approaches face two critical challenges when directly applied to GNN-based vulnerability detection systems.

Weak Robustness. As reported in [11, 33, 59], existing neural vulnerability detectors focus on picking up dataset nuances for prediction, as opposed to real vulnerability features. Unfortunately, the robustness of most explanation approaches (e.g., LIME [57], SHAP [48]) is weak, and their explanations for the same sample are easily altered by small perturbations, or even random noise [25, 69]. As a result, explanations built on top of the detection results from such weakly-robust models merely reveal spurious correlations, which are hard to tolerate in security applications.

Hard to Balance Effectiveness and Conciseness. Post-hoc approaches mostly explain the predictions made by DL models from the perspective of factual reasoning [29, 40], which favors a sufficient subset that contains enough information to make the same prediction as they do for the original program. However, such extracted explanations may produce a large number of false alarms, posing a barrier to adoption. What's worse, since the existing post-hoc explanation approaches mainly leverage perturbation-based mechanisms (e.g., LEMNA [30]) to track input features that are highly relevant to the model's prediction, the explanation performance will deteriorate further due to the weak robustness of detection models to random perturbations. On the contrary, counterfactual explanations [18] contain the most crucial information, constituting the minimal changes to the input under which the model changes its mind. However, just because of this, they may only cover a small subset of the ground truth.

Figure 2: The workflow of Coca.

3.4 Key Insights Behind Our Design
In this study, we primarily focus on providing both effective and concise explanations for security practitioners to gain insights into why a given program was detected as vulnerable. The key insight of Coca is that the effectiveness and conciseness of explanations can be improved in a two-stage process. This is inspired by the observation that the robustness of detection models is a necessary prerequisite for effective explanations, while the trade-off between effectiveness and conciseness mainly depends on the adopted explanation strategy. Therefore, by employing the two-stage process, the special concerns for effectiveness and conciseness of explanations in GNN-based vulnerability detection systems can be well satisfied.
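The semantics-preserving edits discussed above (the "simple code edits" that can flip weakly-robust models, and two of the Section 2.2 augmentation operators) can be illustrated with a minimal sketch. This is not part of Coca's implementation; `operand_swap` and `rename_variable` are hypothetical helpers operating on C-like code held as plain strings:

```python
import re

# Operand Swap (OS): swap the operands of a simple binary comparison and
# flip the operator so the expression stays logically equivalent.
FLIP = {"<": ">", ">": "<", "<=": ">=", ">=": "<=", "==": "==", "!=": "!="}

def operand_swap(expr: str) -> str:
    m = re.fullmatch(r"\s*(\w+)\s*(<=|>=|==|!=|<|>)\s*(\w+)\s*", expr)
    if not m:
        return expr  # not a simple binary comparison; leave untouched
    lhs, op, rhs = m.groups()
    return f"{rhs} {FLIP[op]} {lhs}"

# Variable Renaming (VR): whole-word substitution so that renaming
# 'key' does not also rewrite 'keys' or 'monkey'.
def rename_variable(code: str, old: str, new: str) -> str:
    return re.sub(rf"\b{re.escape(old)}\b", new, code)
```

For example, `operand_swap("x < y")` yields `"y > x"`, a functionally equivalent variant that should not change a robust detector's prediction. Real implementations (as in Section 4.1) apply such operators on the AST rather than with regular expressions.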
Overview. Figure 2 presents the workflow of Coca, including two core components: Trainer and Explainer. Given a crafted GNN-based vulnerability detection model M, one major difference between our framework and existing approaches lies in the training strategy of the model. Specifically, instead of employing a cross-entropy loss, our Trainer module leverages a combinatorial contrastive loss to train a more robust detector against random perturbations to avoid spurious explanations. Thus, in the vulnerability detection phase, we still transform the input program into graphs and leverage the well-trained model to learn code feature representations for prediction as previous works do. In the explainable detection phase, given a vulnerable code detected by the robustness-enhanced model, we propose a model-agnostic extension, called Explainer, to provide security practitioners with both concise and effective explanations to understand model decisions via dual-view causal inference.

4 ROBUSTNESS ENHANCEMENT
Figure 3 depicts the architecture of our Coca Trainer (Coca_Tra for short). In this stage, we aim to train a neural vulnerability detection model that is robust to random perturbation, which is the core mechanism used in most explainers, to avoid spurious explanations.
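The combinatorial objective this section builds up, the self-supervised NCE loss of Eq. (1), the supervised contrastive (SupCon) loss of Eq. (4), and their weighted combination in Eq. (5), can be sketched in plain numpy. This is a minimal sketch, not the paper's implementation; it assumes each sample's two views occupy adjacent rows of the embedding matrix:

```python
import numpy as np

def nce_loss(z, tau=0.5):
    """Self-supervised NCE loss (Eq. (1)): rows 2k and 2k+1 of z are the
    original and augmented view of sample k."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)       # A(i) = B \ {i}: drop self-similarity
    n = len(z)
    pos = np.arange(n) ^ 1               # j(i): index of the other view (0<->1, 2<->3, ...)
    log_prob = sim[np.arange(n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

def supcon_loss(z, labels, tau=0.5):
    """Supervised contrastive loss (Eq. (4)); here the whole batch is
    treated as labeled, a simplification of the paper's B_l subset."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = np.exp(z @ z.T / tau)
    np.fill_diagonal(sim, 0.0)
    n, total = len(z), 0.0
    for i in range(n):
        positives = [q for q in range(n) if q != i and labels[q] == labels[i]]
        if not positives:
            continue
        total += -np.mean([np.log(sim[i, q] / sim[i].sum()) for q in positives])
    return total / n

def total_loss(z, labels, lam=0.5, tau=0.5):
    """Combinatorial objective of Eq. (5): (1 - lambda) * L_self + lambda * L_sup."""
    return (1 - lam) * nce_loss(z, tau) + lam * supcon_loss(z, labels, tau)
```

As expected from Eq. (1), embeddings whose two views are nearly aligned incur a much lower NCE loss than randomly paired embeddings.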
Specifically, given a crafted detection model M, Coca_Tra (❶) augments the vulnerable (or benign) programs in the dataset into a set of functionally equivalent variants via semantics-preserving transformations; and (❷) leverages combinatorial contrastive learning to force the detection model to focus on critical vulnerability semantics that are consistent between original vulnerable programs and their positive pairs (including perturbed variants and other vulnerable samples), instead of subtle perturbations.

Figure 3: The architecture of Coca_Tra.

Below, we elaborate on each component of our combinatorial contrastive learning with more technical details.

Feature Encoder. To extract representations of source code, we employ three typical GNN-based models (Devign [80], ReVeal [11], and DeepWuKong [16]) as feature encoders $f(\cdot)$. Note that since both data augmentation and combinatorial contrastive learning are architecture-agnostic, our Coca_Tra can be easily extended and integrated into other DL-based SE models for robustness enhancement.

Projection Head. To improve the representation quality of the feature encoder as well as the convergence of contrastive learning, we add a projection head $g(\cdot)$, consisting of a Multi-Layer Perceptron (MLP) [62] with a single hidden layer, to map the embeddings learned by the feature encoder into a low-dimensional latent space to minimize the contrastive loss.

Contrastive Loss. Following existing approaches [34, 46], we first employ the NCE loss defined in Eq. (1) as our self-supervised loss
function $\mathcal{L}_{con}^{self}$. Specifically, given a set of $N$ randomly sampled unlabeled code samples $\{x_k\}_{k=1,\cdots,N}$, data augmentation (Section 4.1) is applied once to obtain their corresponding FE variants. These samples $\{\tilde{x}_i\}$, where $\tilde{x}_{2k-1}$ and $\tilde{x}_{2k}$ are the original and augmented view of $x_k$, respectively, are then arranged in the mini-batch $\mathcal{B} \equiv \{1, \cdots, 2N\}$ to compute $\mathcal{L}_{con}^{self}$.

In addition, inspired by a recent finding [35] that the robustness enhanced in the self-supervised pre-training phase may no longer hold after supervised fine-tuning, we also adopt the Supervised Contrastive (SupCon) loss [37] during the training process, because the use of label information encourages the feature encoder to closely align all samples from the same class in the latent space to learn more robust (in terms of original samples and FE variants) and accurate (in terms of samples with the same label) cluster representations. Formally, the SupCon loss $\mathcal{L}_{con}^{sup}$ is written as:

$\mathcal{L}_{con}^{sup} = \frac{1}{|\mathcal{B}_l|} \sum_{i \in \mathcal{B}_l} \frac{-1}{|\mathcal{Q}(i)|} \sum_{q \in \mathcal{Q}(i)} \log \frac{\exp(z_i \cdot z_q/\tau)}{\sum_{a \in \mathcal{A}(i)} \exp(z_i \cdot z_a/\tau)}$  (4)

where $\mathcal{B}_l$ corresponds to the labeled subset (known vulnerable or benign code) of $\mathcal{B}$, and $\mathcal{Q}(i) \equiv \{q \in \mathcal{A}(i) : \tilde{y}_q = \tilde{y}_i\}$ is the set of indices of all other positives that hold the same label as $\tilde{x}_i$ in $\mathcal{B}$. $1/|\mathcal{Q}(i)|$ is the positive normalization factor, which serves to remove the bias present in multiple positive samples and to preserve the summation over negatives in the denominator to increase performance.

4.1 Data Augmentation
Inspired by recent works which adopt obfuscation-based adversarial code as robustness-promoting views [3, 35], our core insight is that the effectiveness of perturbation-based explanation approaches can also benefit from detection models whose robustness is enhanced via transformed code, because 1) existing perturbation approaches are not suitable for the sparse and high-dimensional feature representations of source code [55]; and 2) semantics-preserving code transformations in the discrete token space that do not change model predictions can be approximately mapped to perturbations in the continuous embedding space.

Specifically, to construct functionally equivalent variants, we first perform static analysis to parse each source code $c_i$ into an AST $T_{c_i}$ and traverse it to search for potential injection locations (i.e., AST nodes to which the aforementioned six program transformations $\Phi = \{\phi_1, \phi_2, \cdots, \phi_6\}$ can be applied). Once an injection location $n_k$ is found, an applicable augmentation operator $\phi_j \in \Phi$ will be randomly selected and applied to get the transformed node $n'_k$. We then adapt the context of $n_k$ accordingly, and translate it to the FE variant $c'_j$. It is noteworthy that, different from the synthetic samples [52, 53] used to mitigate the data scarcity issue in classifier training, our transformed FE variants are regarded as augmented views of the original samples during contrastive learning to train a
robust feature encoder that can capture real vulnerability features. Subsequently, we arrange original code samples along with their perturbed variants (positives) as inputs in a mini-batch. In this way, augmented samples originated from one pair are negatively correlated to any sample from other pairs within a mini-batch during contrastive learning.

4.2 Combinatorial Contrastive Learning

To train a detection model robust to random perturbations, we borrow the contrastive learning technique to learn better feature representations. Despite the similarity in terms of the high-level design idea [4, 23, 34, 46], i.e., pre-training a self-supervised feature-acquisition model over a large unlabeled code database, and performing supervised fine-tuning over a labeled dataset to transfer it to a specific downstream SE task, we employ an additional supervised contrastive loss term to effectively leverage label information. Finally, the total loss used to train a robust feature encoder over the batch is defined as:

    L_total = (1 − λ) L_con^self + λ L_con^sup    (5)

where λ is a weight coefficient to balance the two loss terms.

At the end of combinatorial contrastive learning, the projection head g(·) will be discarded and the well-trained feature encoder f(·) is frozen (i.e., containing exactly the same number of parameters when applied to specific downstream tasks) to produce the vector representation of a program for vulnerability detection.

5 EXPLAINABLE DETECTION

The explainable detection stage aims to (❶) train a classifier on top of the robust feature encoder for vulnerability detection; and (❷) build an explainer to derive crucial statements as explanations.

ICSE'24, April 14–20, 2024, Lisbon, Portugal. Sicong Cao, Xiaobing Sun, Xiaoxue Wu, David Lo, Lili Bo, Bin Li, and Wei Liu

5.1 Vulnerability Detection

The goal of this task is to train a binary classifier able to accurately predict the probability that a given function is vulnerable or not. In particular, given a popular GNN-based vulnerability detection model, we only replace its feature encoder with the more robust one² (sharing the same NN architecture) which is pre-trained by Coca_Tra. Thus, any coarse-grained (function- or slice-level) vulnerability detector, which receives structural graph representations of source code (in which code tokens/statements are nodes while semantic relations between nodes are edges) as inputs and employs an off-the-shelf or crafted GNN as its feature encoder for vulnerability feature learning, can be easily integrated into our framework. For example, when ReVeal [11] is selected as the target detection model, labeled code snippets are parsed into Code Property Graphs (CPGs) [73] and fed into the robust feature encoder f(·) (a vanilla GGNN [39]) pre-trained by Coca_Tra to produce corresponding vector representations. Then, these representations and their labels are used to train a built-in classifier (a convolutional layer with max pooling) with the triplet loss function.

In the inference phase, given an input program, the vulnerability detector first performs static analysis to extract its graph representation and maps it as a single vector representation using the pre-trained feature encoder. Then, the program representation will be fed into the trained classifier for prediction.

² Note that the feature encoder is fixed during the whole training phase.

5.2 Vulnerability Explanation

To derive explanations on why the detection model has decided on the vulnerability, we propose a model-agnostic extension based on the detection results, referred to as Explainer (Coca_Exp for short).

Overview. Similar to the most related work IVDetect [40], Coca_Exp aims to find a sub-graph G'_k, which covers the key nodes (tokens/statements) and edges (program dependencies) that are most decisive to the prediction label, from the graph representation G_k of the detected vulnerable code k. The main difference lies in that we aim to seek both concise and effective explanations. Hence, we build Coca_Exp based on a dual-view causal inference framework [63] which integrates factual with counterfactual reasoning to make a trade-off between conciseness and effectiveness. Formally, the extraction of G'_k can be formulated as:

    minimize C(M_k, F_k)
    s.t. S_f(M_k, F_k) > P(ŷ_{k,s} | A_k ⊙ M_k, X_k ⊙ F_k),    (6)
         S_c(M_k, F_k) > −P(ŷ_{k,s} | A_k − A_k ⊙ M_k, X_k − X_k ⊙ F_k)

where the objective part C(M_k, F_k) measures how concise the explanation is. It can be defined as the number of edges/features used to generate the explanation sub-graph G'_k, and computed by C(M_k, F_k) = ‖M_k‖₀ + ‖F_k‖₀, in which ‖M_k‖₀ (resp. ‖F_k‖₀) represents the number of 1's in the binary edge mask M_k (resp. feature mask F_k) matrices. The constraint part S_f(M_k, F_k) (resp. S_c(M_k, F_k)) reflects whether the factual (resp. counterfactual) explanation is effective enough. Formally, the factual explanation strength S_f(M_k, F_k) is consistent with the condition for factual reasoning, i.e., S_f(M_k, F_k) = P(ŷ_k | A_k ⊙ M_k, X_k ⊙ F_k). Similarly, the counterfactual explanation strength S_c(M_k, F_k) is calculated as S_c(M_k, F_k) = −P(ŷ_k | A_k − A_k ⊙ M_k, X_k − X_k ⊙ F_k). ŷ_{k,s} is the label other than ŷ_k that has the largest probability score predicted by the GNN-based detection model.

To solve such a constrained optimization problem, we follow [63], which optimizes the objective part by relaxing M_k and F_k to real values M*_k ∈ R^{V_k×V_k} and F*_k ∈ R^{V_k×d}, and using the 1-norm to ensure the sparsity of M*_k and F*_k. For the constraint part, we relax it as pairwise contrastive losses L_f and L_c:

    L_f = ReLU(1/2 − S_f(M*_k, F*_k) + P(ŷ_{k,s} | A_k ⊙ M*_k, X_k ⊙ F*_k))    (7)
    L_c = ReLU(1/2 − S_c(M*_k, F*_k) − P(ŷ_{k,s} | A_k − A_k ⊙ M*_k, X_k − X_k ⊙ F*_k))

After that, the explanation sub-graph G'_k = (A_k ⊙ M*_k, X_k ⊙ F*_k) is generated by:

    minimize ‖M*_k‖₁ + ‖F*_k‖₁ + α L_f + (1 − α) L_c    (8)

where α controls the trade-off between the strength of factual and counterfactual reasoning. By increasing/decreasing α, the generated explanations will focus more on the effectiveness/conciseness.

6 EXPERIMENTS

6.1 Research Questions

In this paper, we seek to answer the following RQs:

RQ1 (Detection Performance): How effective are existing GNN-based approaches enhanced via Coca on vulnerability detection? The disconnection between the learned features versus the actual cause of the vulnerabilities has raised concerns regarding the effectiveness of DL-based detection models. Thus, we investigate whether the enhanced GNN-based vulnerability detectors outperform their original ones in terms of detection accuracy and the ability to capture real vulnerability features after robustness enhancement.

RQ2 (Explanation Performance): Is Coca more concise and effective than state-of-the-art baselines when applied to generate explanations for GNN-based vulnerability detectors? We argue that generating corresponding explanations for detection results is just the first step and the quality evaluation of them is also important. With this motivation, we evaluate the performance of Coca in generating concise and effective explanations.

RQ3 (Ablation Study): How do various factors affect the overall performance of Coca? We perform sensitivity analysis to understand the influence of different components of Coca, including the impact of (RQ3a) combinatorial contrastive learning, and (RQ3b) dual-view causal inference.

6.2 Datasets

Since the detection capability of DL-based models benefits from large-scale and high-quality datasets, we built our evaluation benchmark by merging five reliable human-labeled datasets collected from real-world projects, including Devign [80], ReVeal [11], Big-Vul [24], CrossVul [51], and CVEFixes [2]. Detailed statistics for each of the five datasets is shown in Table 1.

Coca: Improving and Explaining Graph Neural Network-Based Vulnerability Detection Systems. ICSE'24, April 14–20, 2024, Lisbon, Portugal
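To make the formulation of Eqs. (6)-(8) in Section 5.2 concrete, the following plain-Python sketch computes the 0-norm conciseness of binary masks and the ReLU-relaxed losses of Eq. (7). This is an illustration only, not the paper's implementation: the strength and probability arguments are arbitrary stand-ins for scores a GNN detection model would produce.

```python
# Illustrative sketch of the dual-view explanation objective.
# edge_mask / feat_mask play the roles of M_k and F_k; the probability
# inputs stand in for P(y_hat_{k,s} | masked graph) from the detector.

def relu(x):
    return max(0.0, x)

def conciseness(edge_mask, feat_mask):
    # Eq. (6) objective: C(M_k, F_k) = ||M_k||_0 + ||F_k||_0,
    # i.e. the number of 1's kept in the binary masks.
    return sum(edge_mask) + sum(feat_mask)

def relaxed_losses(s_f, s_c, p_runner_up_masked, p_runner_up_complement):
    # Eq. (7): constraints relaxed into pairwise contrastive losses
    # with the 1/2 margin from the paper.
    l_f = relu(0.5 - s_f + p_runner_up_masked)
    l_c = relu(0.5 - s_c - p_runner_up_complement)
    return l_f, l_c

def explanation_objective(edge_mask, feat_mask, s_f, s_c,
                          p_runner_up_masked, p_runner_up_complement,
                          alpha=0.5):
    # Eq. (8): sparsity (0-norm relaxed to a 1-norm, identical for
    # binary masks) plus the alpha-weighted factual/counterfactual terms.
    l_f, l_c = relaxed_losses(s_f, s_c, p_runner_up_masked,
                              p_runner_up_complement)
    sparsity = sum(edge_mask) + sum(feat_mask)
    return sparsity + alpha * l_f + (1 - alpha) * l_c

# A sub-graph keeping 2 of 4 edges and 3 of 5 features, whose toy
# factual strength (0.9) clearly beats the runner-up label's score (0.1):
edge_mask = [1, 0, 1, 0]
feat_mask = [1, 1, 1, 0, 0]
print(conciseness(edge_mask, feat_mask))    # 5
print(relaxed_losses(0.9, 0.8, 0.1, 0.2))   # (0.0, 0.0)
```

When both relaxed losses vanish, the objective reduces to the sparsity term alone, so the optimizer is driven purely toward smaller explanations, which matches the conciseness/effectiveness trade-off governed by α in the paper.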
Table 1: The statistics of datasets.

Dataset    #Vul    #Non-vul  #Total    %Ratio
Devign     11,888  14,149    26,037    45.66
ReVeal      1,664  16,505    18,169     9.16
Big-Vul    11,823  253,096   264,919    4.46
CrossVul    6,884  127,242   134,126    5.13
CVEFixes    8,932  159,157   168,089    5.31
Merged     29,844  305,827   335,671    8.89

Column 2 and Column 3 are the numbers of vulnerable and non-vulnerable functions, respectively. Column 4 indicates the total number of functions in each dataset. Column 5 denotes the ratio of vulnerable functions in each dataset.

Table 2: Evaluation results on vulnerability detection in percentage compared with GNN-based baselines.

Config    Loss     Approach    Acc    Pre    Rec    F1
Default   CE       Devign      89.74  32.59  31.40  31.98
                   ReVeal      86.05  31.43  38.45  34.59
                   DeepWuKong  87.21  28.55  26.04  27.24
Coca_Tra  Ours     Devign      88.15  34.68  37.12  35.86
                   ReVeal      87.42  35.96  40.61  38.14
                   DeepWuKong  88.30  30.07  34.79  32.26
          InfoNCE  Devign      86.33  28.38  30.11  29.22
                   ReVeal      84.95  29.64  34.27  31.78
                   DeepWuKong  86.20  25.99  24.83  25.40
          NCE      Devign      83.97  26.15  27.69  26.90
                   ReVeal      81.52  26.73  31.76  29.03
                   DeepWuKong  83.06  22.40  21.46  21.92
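The %Ratio column of Table 1 follows directly from the #Vul and #Total columns; a quick Python sanity check reproduces it, e.g. 29,844 / 335,671 ≈ 8.89% for the merged set:

```python
# Reproduce Table 1's %Ratio column: share of vulnerable functions
# per dataset, rounded to two decimals as in the paper.

def vul_ratio(n_vul, n_total):
    return round(100.0 * n_vul / n_total, 2)

datasets = {
    "Devign":   (11_888, 26_037),
    "ReVeal":   (1_664, 18_169),
    "Big-Vul":  (11_823, 264_919),
    "CrossVul": (6_884, 134_126),
    "CVEFixes": (8_932, 168_089),
    "Merged":   (29_844, 335_671),
}
for name, (vul, total) in datasets.items():
    print(name, vul_ratio(vul, total))
# e.g. Merged -> 8.89, Devign -> 45.66
```

The check also makes the class imbalance explicit: apart from Devign, every dataset keeps fewer than 10% vulnerable functions, which is why the merged benchmark ends up at 8.89%.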
Note that, for two multi-language datasets CrossVul and CVEFixes, we only preserve code samples written in C/C++ in our experiments to unify the whole dataset. In total, our merged dataset contains 335,671 functions, of which 29,844 (8.89%) are vulnerable.

6.3 Coca Implementation

For Coca_Tra, we parsed all the code snippets in our merged dataset into ASTs using tree-sitter³ and performed the transformation based on augmentation operators described in Section 4.1 to generate perturbed variants. We applied all six transformations with an equal probability of 0.5, which leads us to an average of three transformations per program. In contrastive learning, any GNN-based detection model can serve as a feature encoder in our framework and is trained on an Ubuntu 18.04 server with 2 NVIDIA Tesla V100 GPUs. Following standard practice in contrastive code representation learning [34, 79], we set the size of the projection head to 128, and used Adam [38] for optimizing with 256 batch size and 1e-5 learning rate. The temperature parameter τ of the contrastive loss is set to 0.07. For feature encoder training, we randomly sampled a subset (50%) of vulnerable and benign samples from the merged dataset, respectively, to construct B, and the remaining samples are regarded as the unlabeled data B_l. The feature encoder and classifier of each detection model were trained with 100 maximum epochs and early stopping. For Coca_Exp, we set α to 0.5 to balance factual and counterfactual reasoning.

³ https://tree-sitter.github.io/tree-sitter/

7 EXPERIMENTAL RESULTS

7.1 RQ1: Detection Performance

Baselines. We consider three state-of-the-art GNN-based vulnerability detectors: 1) Devign [80] models programs as graphs and adopts GGNN [39] to capture structured vulnerability semantics; 2) ReVeal [11] adopts graph embedding with the triplet loss function to learn class-separation vulnerability features; and 3) DeepWuKong [16] leverages GCN to learn both unstructured and structured vulnerability information at the slice-level.

Evaluation Metrics. We apply four widely used metrics [54], including Accuracy (Acc), Precision (Pre), Recall (Rec), and F1-score (F1), for evaluation.

Experiment Setup. For the open-source approaches (ReVeal and DeepWuKong), we directly use their official implementations. For Devign, which is not publicly available, we re-implemented it by strictly following its methods elaborated in the original paper. In addition, to integrate these approaches into Coca, we also employ tree-sitter to uniformly parse input programs into their expected graph representations (e.g., PDG, CPG). We randomly split the benchmark into 80%-10%-10% for training, validation, and testing. For each approach, we repeated the experiment 10 times to address the impact of randomness [1, 64].

Results. Table 2 summarizes the experimental results of all the studied baselines and their corresponding variants enhanced by Coca_Tra on vulnerability detection. Column "Config" presents the configuration of GNN-based vulnerability detectors, i.e., constructing detection models with default implementations (supervised learning with CE loss) or Coca_Tra (contrastive learning with NCE and SupCon loss). Overall, the average improvements of robustness-enhanced models over their default ones are positive, ranging from 5.32% (DeepWuKong) to 14.41% (ReVeal) on Precision, from 5.62% (ReVeal) to 33.60% (DeepWuKong) on Recall, and from 10.26% (ReVeal) to 18.43% (DeepWuKong) on F1, respectively. In addition, Coca_Tra (ReVeal) achieves the overall best performance, with an Accuracy of 87.42%, a Precision of 35.96%, a Recall of 40.61%, and an F1 of 38.14%.

All these results demonstrate the effectiveness of Coca_Tra in improving the vulnerability detection performance of existing GNN-based code models. It indicates that incorporating structurally perturbed samples (e.g., statement permutation, loop exchange) into contrastive learning is beneficial for the graph-based model to focus on security-critical structural semantics rather than noise information. Taking the most improved model DeepWuKong as an example, as shown in the visualizations in Figure 4, the feature representations learned by Coca_Tra (DeepWuKong) are more class-discriminative compared to the ones learned with default cross-entropy loss. We attribute such improvements to robustness-enhanced models truly capturing discriminative vulnerability patterns from the comparison between vulnerable samples and perturbed/benign variants.

Answer to RQ1: Coca_Tra comprehensively improves the performance of existing GNN-based vulnerability detectors in terms of all evaluation metrics. We attribute the improvements to the robustness-enhanced models truly picking up real vulnerability features for prediction.

Figure 4: Visualizations of feature representations learned by DeepWuKong trained with/without Coca_Tra: (a) DeepWuKong (Default); (b) DeepWuKong (Coca_Tra).

7.2 RQ2: Explanation Performance

Baselines. We adopt three recent vulnerability explanation approaches as baselines: 1) IVDetect [40] leverages GNNExplainer [75] to produce the key program dependence sub-graph (i.e., a list of crucial statements closely related to the detected vulnerability) that affects the decision of the model as explanations; 2) P2IM [61] borrows Delta Debugging [76] to reduce a program sample to a minimal snippet which a model needs to arrive at and stick to its original vulnerable prediction to uncover the model's detection logic; and 3) mVulPreter [82] combines the attention weight with the vulnerability probability outputted by the multi-granularity detector to compute the importance score for each code slice.

Evaluation Metrics. As we described in Section 3, an ideal explanation should cover as many truly vulnerable statements (in terms of effectiveness) as possible within a limited scope (in terms of conciseness). Thus, we use the fine-grained Vulnerability-Triggering Paths (VTP) [15, 17] metrics to evaluate the quality of explanations, which are formally defined as follows:
• Mean Statement Precision (MSP): MSP = (1/N) Σ_{i=1..N} SP_i, where SP_i = |S_e ∩ S_p| / |S_e| stands for the proportion of contextual statements truly related to the detected vulnerability sample i in the explanations.
• Mean Statement Recall (MSR): MSR = (1/N) Σ_{i=1..N} SR_i, where SR_i = |S_e ∩ S_p| / |S_p| denotes how many contextual statements in the triggering path of the detected vulnerability sample i can be covered in explanations.
• Mean Intersection over Union (MIoU): MIoU = (1/N) Σ_{i=1..N} IoU_i, where IoU_i = |S_e ∩ S_p| / |S_e ∪ S_p| reflects the degree of overlap between the explanatory statements and the contextual statements on the vulnerability-triggering path.
Here, S_e denotes the set of explanatory statements provided by explainers, while S_p denotes the set of labeled vulnerability-contexts (ground truth) in the dataset. |·| represents the size of a set.

Experiment Setup. We still employ the three aforementioned GNN-based vulnerability detectors (with default/robustness-enhanced configurations) to provide prediction labels for IVDetect⁴, P2IM, and Coca_Exp. For mVulPreter, we follow the official implementation to produce explanations because its detection and explanation modules are highly coupled. For two baselines (IVDetect and mVulPreter) which require a human-selected k value to decide the size of the explanations, we follow [40, 82] to narrow down the scope of candidate statements to 5, while the sizes of explanations produced by our approach and P2IM are automatically decided by themselves via optimization. Following [17, 61], we evaluate these explanation approaches on another vulnerability dataset, D2A [78], because it is labeled with clearly annotated vulnerability-contexts which are more reliable than other diff-based ground truths [15]. We randomly select 10,000 vulnerable samples which can be correctly detected from the D2A dataset to calculate the VTP metrics.

⁴ Since IVDetect implements its own vulnerability detector based on FA-GCN, we do not use other detection models as alternatives.

Table 3: Evaluation results on vulnerability explanation in percentage compared with explainable vulnerability detection baselines.

Config    Approach               MSP    MSR    MIoU
Default   mVulPreter             25.86  29.01  22.88
          IVDetect               32.54  23.79  17.06
          P2IM (Devign)          27.99  43.85  22.56
          P2IM (ReVeal)          31.04  46.10  28.94
          P2IM (DeepWuKong)      26.57  38.12  23.11
          Coca_Exp (Devign)      33.84  44.06  30.89
          Coca_Exp (ReVeal)      35.61  52.94  34.36
          Coca_Exp (DeepWuKong)  29.77  40.16  25.83
Coca_Tra  IVDetect               39.81  31.64  25.19
          P2IM (Devign)          33.01  48.33  29.27
          P2IM (ReVeal)          40.62  55.73  36.29
          P2IM (DeepWuKong)      32.97  44.85  28.10
          Coca_Exp (Devign)      43.61  52.98  39.64
          Coca_Exp (ReVeal)      49.52  58.39  44.97
          Coca_Exp (DeepWuKong)  40.33  47.61  34.22

Results. Table 3 shows the performance comparison of Coca_Exp with respect to state-of-the-art explanation approaches. As can be seen, based on the predictions of popular graph-based vulnerability detectors (with default implementations), Coca_Exp substantially outperforms all the compared explanation techniques on all metrics. Taking the best comparison baseline P2IM (ReVeal) as an example, Coca_Exp (ReVeal) outperforms it by 14.72% in MSP, 14.84% in MSR, and 18.73% in MIoU, respectively.

In addition, although there is still a certain gap from our best-performing Coca_Exp, we find that the performance of each explanation baseline can be improved to varying degrees when applied to robustness-enhanced detection models. The main reason leading to this result is that the more robust feature representations gained by contrastive learning can better reflect the potential vulnerable behaviour of programs and boost vulnerability semantic comprehension. Among them, Coca_Exp (ReVeal) yields the best explanation performances on all metrics (especially MIoU), demonstrating that our dual-view causal inference makes a great trade-off between the effectiveness (covering as many truly vulnerable statements as possible) and conciseness (limiting the number of candidates for manual review) of explanations. Meanwhile, we notice that the attention-based explainer mVulPreter performs extremely poorly on the vulnerability explanation task. The reason is that attention weights are derived from the training data [81]. Thus, it may not be accurate for a particular decision of an instance. On the contrary, Coca_Exp and the other two vulnerability explainers (IVDetect and P2IM) construct an additional explanation model for an individual instance in a model-independent manner to provide explanatory information, effectively avoiding the decision bias.
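The VTP metrics defined above reduce to simple set arithmetic over statement sets; a minimal sketch (the line numbers below are arbitrary toy values, not taken from the paper):

```python
# Sketch of the VTP metrics: S_e = explanatory statements reported by
# an explainer, S_p = labeled vulnerability-context statements (ground
# truth). MSP/MSR/MIoU average the per-sample scores over N samples.

def statement_precision(s_e, s_p):
    return len(s_e & s_p) / len(s_e)

def statement_recall(s_e, s_p):
    return len(s_e & s_p) / len(s_p)

def statement_iou(s_e, s_p):
    return len(s_e & s_p) / len(s_e | s_p)

def mean_metric(metric, samples):
    # MSP, MSR, and MIoU are the means of SP_i, SR_i, and IoU_i.
    return sum(metric(s_e, s_p) for s_e, s_p in samples) / len(samples)

# One explanation covering 2 of 3 ground-truth statements, plus one
# spurious statement (line 11):
s_e, s_p = {7, 9, 11}, {7, 9, 13}
print(statement_precision(s_e, s_p))  # 2/3
print(statement_recall(s_e, s_p))     # 2/3
print(statement_iou(s_e, s_p))        # 0.5
```

Note how IoU penalizes both the missed statement and the spurious one at once, which is why the paper treats MIoU as the single number balancing effectiveness against conciseness.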
To gain a more intuitive understanding of how effective and concise our generated explanations are, we perform a qualitative study to evaluate the quality of explanations generated by Coca and other explainers. To ensure fairness, the explanations provided by the two explainers (P2IM and Coca) not dependent on specific detectors are generated from the robustness-enhanced model ReVeal. Figure 5 shows a correctly detected vulnerable function in the D2A dataset. Column "GT" denotes the ground truth. It contains a Buffer Overrun vulnerability in rp (at line 13) when calling the function bn_mul_words(). Statements at line 7 and line 9 are its corresponding vulnerability-contexts annotated by D2A. Overall, all three vulnerable statements are covered by Coca, while mVulPreter, IVDetect, and P2IM could only report one, two, and two of them, respectively. Furthermore, in terms of the conciseness, three of five explanatory statements provided by Coca are true positives, a precision of 60%. By contrast, 87.5%, 71.43%, and 71.43% of the statements included in the explanations of mVulPreter, IVDetect, and P2IM are false positives. Therefore, Coca can provide as many truly vulnerable statements as possible within a limited scope to help security practitioners understand the detection results provided by GNN-based vulnerability detection systems.

Figure 5: Qualitative study of our Coca vs. baselines. [The figure lists the vulnerable function bn_sqr_normal (Buffer Overrun) line by line, marking for each statement whether mVulPreter, IVDetect, P2IM, Coca, and the ground truth (GT) flag it.]

Table 4: Comparison with different explanation approaches (in percentage).

            Approach         MSP    MSR    MIoU
Devign      GNNExplainer     21.40  43.28  14.68
            PGExplainer      25.39  47.86  20.17
            CF-GNNExplainer  34.10  29.65  22.79
            Coca_Exp         43.61  52.98  39.64
ReVeal      GNNExplainer     23.06  47.28  17.11
            PGExplainer      26.84  51.34  21.39
            CF-GNNExplainer  39.11  34.72  28.66
            Coca_Exp         49.52  58.39  44.97
DeepWuKong  GNNExplainer     18.40  37.15  16.97
            PGExplainer      25.56  46.81  22.64
            CF-GNNExplainer  36.79  27.09  23.96
            Coca_Exp         40.33  47.61  34.22

Answer to RQ2: Coca_Exp is superior to the state-of-the-art explainers in terms of effectiveness and conciseness. When applied to the best detection model Coca_Tra (ReVeal), Coca_Exp improves MSP, MSR, and MIoU over the best-performing baseline P2IM by 21.91%, 4.77%, and 23.92%, respectively.

7.3 RQ3: Ablation Study

Baselines. For RQ3a, we compare Coca_Tra with three representative loss functions: 1) NCE [13] frames contrastive learning as a self-supervised binary classification problem, which predicts whether a data point came from the noise distribution or the true data distribution; 2) InfoNCE [65] generalizes NCE loss by computing the probability of selecting the positive sample across a batch and a queue of negatives; and 3) Cross-Entropy (CE), the most widely used supervised loss for deep classification models. For RQ3b, we compare Coca_Exp with the following GNN-specific explanation approaches: 1) GNNExplainer [75] selects a discriminative sub-graph that retains important edges/node features via maximizing the mutual information of a prediction; 2) PGExplainer [49] uses an explanation network on a universal embedding of the graph edges to provide explanations for multiple instances; and 3) CF-GNNExplainer extends GNNExplainer by generating explanations based on counterfactual reasoning.

Experiment Setup. To answer RQ3a, we built three variants of Coca_Tra by replacing our combinatorial contrastive learning loss with NCE, InfoNCE, and CE loss, and follow the same training, validation, and testing datasets as in RQ1 for evaluation. To answer RQ3b, we also respectively build three variants of Coca_Exp by replacing our dual-view causal inference with GNNExplainer, PGExplainer, and CF-GNNExplainer, and adopt the same evaluation dataset as in RQ2.

Evaluation Metrics. We use the same metrics as in RQ1 and RQ2.

7.3.1 RQ3a: Impact of Combinatorial Contrastive Learning. Table 2 presents the experimental results of Coca_Tra (Ours) and its variants trained under different loss functions (Column "Loss"). The results demonstrate the contribution of our combinatorial contrastive learning to the overall detection performance of Coca_Tra. In particular, we can observe that detection models which are trained with traditional cross-entropy loss outperform their variants trained with self-supervised contrastive loss (InfoNCE and NCE). It is reasonable because fine-tuning on the labeled vulnerability dataset may significantly alter the distribution of learned feature representations. As a result, the robustness and accuracy of learned deep representations enhanced by self-supervised pre-training may no longer hold after supervised fine-tuning. On the contrary, the supervised contrastive learning allows us to effectively leverage label information, which groups the samples belonging to the same class as well as the semantically equivalent variants while simultaneously pushing away the dissimilar samples. Accordingly, the downstream task-specific generalization and robustness can be retained as much as possible.
7.3.2 RQ3b: Impact of Dual-View Causal Inference. Table 4 shows the performance of Coca_Exp and its three variants. The results demonstrate that our dual-view causal inference positively contributes to vulnerability explanation. Taking the best-performing detection model ReVeal as an example, our Coca_Exp improves GNNExplainer, PGExplainer, and CF-GNNExplainer by 114.74%, 84.50%, and 26.62% respectively, in terms of MSP, by 23.50%, 13.73%, and 68.17% respectively, in terms of MSR, and by 162.83%, 110.24%, and 56.91% respectively, in terms of MIoU.
The results demonstrate the importance of combining factual with counterfactual reasoning for generating both concise and effective explanations. For a more intuitive understanding, we still take the case in the qualitative study (Figure 5) as an example. We employ DeepWuKong, which adopts PDGs as input representations, as the vulnerability detector to derive sub-graph explanations. As shown in Figure 6, the factual-based approach GNNExplainer reveals richer vulnerability-contexts but also covers redundant nodes and edges, while the counterfactual-based approach CF-GNNExplainer has more precise predictions but tends to be conservative and low in coverage. By contrast, Coca presents all potential vulnerable statements within an accepted scope. In addition, we can find that factual reasoning-based approaches (GNNExplainer and PGExplainer) are higher in MSR, while the counterfactual reasoning-based approach (CF-GNNExplainer) is higher in MSP when comparing with each other. This observation further confirms the necessity of combining the strengths of factual and counterfactual reasoning while mitigating each other's weaknesses.

Figure 6: The PDG sub-graphs (shaded areas) induced by (a) Coca_Exp, (b) factual reasoning (GNNExplainer), and (c) counterfactual reasoning (CF-GNNExplainer), respectively. Ground truths (i.e., vulnerable nodes and edges) are highlighted in red.

Answer to RQ3: Combinatorial contrastive learning and dual-view causal inference play different roles in our explanation. Combining them together can produce significant improvements.

8 DISCUSSION

8.1 Preliminary User Study

To elaborate the practical value of Coca, we further perform a small-scale user study to investigate whether effective and concise explanations can provide more insights and information to help following analysis and repair. Considering a practical application pipeline, we integrated Coca into DeepWuKong, the best-performing model in RQ1 & RQ2, to generate explanations.

Participants. We invite three MS students with two to five years of experience in developing medium/large-scale C/C++ projects or interning in some security companies for a period of time as our participants. We also invite two security experts from a prominent IT enterprise with at least five years of experience in software security to participate in our user study.

Experiment Tasks. We randomly selected 50 samples from the testing sets in RQ2, and independently assigned 10 samples to each participant. For each sample, we present its descriptions and vulnerability-contexts annotated by D2A as well as the corresponding explanations provided by Coca. The participants are then asked to answer 1) whether the explanation covers enough information to understand the vulnerability; and 2) whether the explanation is concise enough to make further decisions. We use a 4-point Likert scale [44] (1 - disagree; 2 - somewhat disagree; 3 - somewhat agree; 4 - agree) to measure the difficulties.

[Figure: a sample vulnerable function qrtr_tun_write_iter (torvalds/linux), Vulnerability Type: Missing Release of Memory after Effective Lifetime (CWE-401), with per-statement marks for Run 1 and Run 2.]

Results. Overall, our user study reveals, to some extent, that Coca presents as many truly vulnerable statements as possible within an accepted scope to help security practitioners understand the detected vulnerability. For effectiveness of Coca, 86% of the answers are positive (i.e., score ≥ 3), 12% are 2 (somewhat disagree), and 2% are 1 (disagree). For conciseness of Coca, only three (6%) responses are negative (i.e., score ≤ 2), which means that explanations provided by Coca can help them intuitively understand the vulnerable code without checking numerous irrelevant alarms.

8.2 Threats to Validity

Threats to Internal Validity come from the quality of our experimental datasets. We evaluate the detection and explanation performance of Coca on five widely-used real-world benchmarks, and the annotated D2A dataset, respectively. However, existing vulnerability datasets have been reported to exhibit varying degrees of quality issues such as noisy labels and duplication. To reduce the likelihood of experiment biases, following Croft et al.'s [19] standard practice, we employ two experienced security experts to manually confirm the correctness of vulnerability labels, and leverage a code clone detector to remove duplicate samples.

Threats to External Validity refer to the generalizability of our approach. We only conduct our experiments on C/C++ datasets, and thus our experimental results may not generalize to other programming languages such as Java and Python. To mitigate the threat, we employ tree-sitter, which supports a wide range of languages, to implement Coca and baselines.

Threats to Construct Validity refer to the suitability of evaluation measures used for quantifying the performance of vulnerability explanation. We mainly adopt the same metrics following a recent work regarding DL-based vulnerability detectors assessment [15]. In the future, we plan to employ other metrics, such as Consistency and Stability [33, 69], for more comprehensive evaluation.

9 RELATED WORK

9.1 DL-based Vulnerability Detection

Prior works focus on representing source code as sequences and use LSTM-like models to learn the syntactic and semantic information of vulnerabilities [41–43, 72]. Recently, a large number of works [11, 16, 67, 68, 71, 80] turn to leveraging GNNs to extract rich and well-defined semantics of the program structure from graph representations for downstream vulnerability detection tasks. For example, AMPLE [71] simplifies the input program graph to alleviate the long-term dependency problems and fuses local and global heterogeneous node relations for better representation learning. In contrast to these studies that aim to design novel neural models for effective vulnerability detection, our goal is to explain their decision logic in a model-independent manner. Thus, existing GNN-

Key Laboratory for Novel Software Technology of Nanjing University (No. KFKT2022B17), the Open Foundation of Yunnan Key Laboratory of Software Engineering (No. 2023SE201), the China Scholarship Council Foundation (Nos. 202209300005, 202308320436), and the National Research Foundation, under its Investigatorship Grant (NRF-NRFI08-2022-0002). Any opinions, findings and conclusions
basedapproachesareorthogonaltoourworkandcouldbeadopted orrecommendationsexpressedinthismaterialarethoseoftheau- togetherfordevelopingmorepracticalsecuritysystems. thor(s)anddonotreflecttheviewsofNationalResearchFoundation, Singapore. 9.2 ExplainabilityonModelsofCode REFERENCES The requirement for explainability is more urgent in security- [1] AndreaArcuriandLionelC.Briand.2011.Apracticalguideforusingstatistical relatedapplications[25,50,69]becauseitishardtoestablishtrust teststoassessrandomizedalgorithmsinsoftwareengineering.InProceedingsof onthesystemdecisionfromsimplebinary(vulnerableorbenign) the33rdInternationalConferenceonSoftwareEngineering(ICSE).ACM,1–10. [2] GuruPrasadBhandari,AmaraNaseer,andLeonMoonen.2021.CVEfixes:au- resultswithoutcredibleevidence.Asthemostrepresentativeat- tomatedcollectionofvulnerabilitiesandtheirfixesfromopen-sourcesoftware. tempt,IVDetect[40]buildsanadditionalmodelbasedonbinary InProceedingsofthe17thInternationalConferenceonPredictiveModelsandData detectionresultstoderivecrucialstatementsthataremostrelevant AnalyticsinSoftwareEngineering(PROMISE).ACM,30–39. [3] PavolBielikandMartinT.Vechev.2020. AdversarialRobustnessforCode.In tothedetectedvulnerabilityasexplanations.Chakrabortyetal.[11] Proceedingsofthe37thInternationalConferenceonMachineLearning(ICML), adoptedLEMNA[30]tocomputethecontributionofeachcode Vol.119.896–907. [4] NghiD.Q.Bui,YijunYu,andLingxiaoJiang.2021.Self-SupervisedContrastive tokentowardstheprediction. LearningforCodeRetrievalandSummarizationviaSemantic-PreservingTrans- Ourapproachfallsintothecategoryoflocalexplainability,more formations.InProceedingsofthe44thInternationalACMSIGIRConferenceon specifically,perturbation-basedapproach.Akeydifferenceisexist- ResearchandDevelopmentinInformationRetrieval(SIGIR).ACM,511–521. [5] SicongCao,BiaoHe,XiaobingSun,YuOuyang,ChaoZhang,XiaoxueWu, ingapproachesmostlygenerateexplanationsfromasingleview TingSu,LiliBo,BinLi,ChuanleiMa,JiajiaLi,andTaoWei.2023. 
ODDFuzz: (eitherfactualorcounterfactualreasoning)andcannotsatisfyspe- DiscoveringJavaDeserializationVulnerabilitiesviaStructure-AwareDirected cial concerns in security domains. By contrast, Coca proposes GreyboxFuzzing.InProceedingsofthe44thIEEESymposiumonSecurityand Privacy(SP).IEEE,2726–2743. dual-viewcausalinference,whichcombinesthestrengthsoffac- [6] SicongCao,XiaobingSun,LiliBo,YingWei,andBinLi.2021. BGNN4VD: tualandcounterfactualreasoningwhilemitigatingeachothers’ ConstructingBidirectionalGraphNeural-NetworkforVulnerabilityDetection. Inf.Softw.Technol.136(2021),106576. weaknesses,toprovidebotheffectiveandconciseexplanations. [7] SicongCao,XiaobingSun,LiliBo,RongxinWu,BinLi,andChuanqiTao.2022. MVD:Memory-RelatedVulnerabilityDetectionBasedonFlow-SensitiveGraph NeuralNetworks.InProceedingsofthe44thIEEE/ACMInternationalConference 10 CONCLUSIONANDFUTUREWORK onSoftwareEngineering(ICSE).ACM,1456–1468. [8] SicongCao,XiaobingSun,LiliBo,RongxinWu,BinLi,XiaoxueWu,ChuanqiTao, Inthispaper,weproposeCoca,ageneralframeworktoimprove TaoZhang,andWeiLiu.2024.LearningtoDetectMemory-relatedVulnerabilities. ACMTrans.Softw.Eng.Methodol.33,2(2024),43:1–43:35. andexplainGNN-basedvulnerabilitydetectionsystems.Usinga [9] SicongCao,XiaobingSun,XiaoxueWu,LiliBo,BinLi,RongxinWu,WeiLiu, combinatorialcontrastivelearning-basedtrainingschemeanda BiaoHe,YuOuyang,andJiajiaLi.2023.ImprovingJavaDeserializationGadget ChainMiningviaOverriding-GuidedObjectGeneration.InProceedingsofthe dual-viewcausalinference-basedexplanationapproach,Cocais 45thIEEE/ACMInternationalConferenceonSoftwareEngineering(ICSE).IEEE, designedto1)enhancetherobustnessofexistingneuralvulnerabil- 397–409. itydetectionmodelstoavoidspuriousexplanations,and2)provide [10] SaikatChakraborty,ToufiqueAhmed,YangruiboDing,PremkumarT.Devanbu, andBaishakhiRay.2022. 
NatGen:generativepre-trainingby"naturalizing" bothconciseandeffectiveexplanationstoreasonaboutthedetected sourcecode.InProceedingsofthe30thACMJointEuropeanSoftwareEngineering vulnerabilities.ByapplyingandevaluatingCocaoverthreetypical ConferenceandSymposiumontheFoundationsofSoftwareEngineering(ESEC/FSE). GNN-basedvulnerabilitydetectionmodels,weshowthatCocacan ACM,18–30. [11] SaikatChakraborty,RahulKrishna,YangruiboDing,andBaishakhiRay.2022. effectivelyimprovetheperformanceofexistingGNN-basedvulner- DeepLearningbasedVulnerabilityDetection:AreWeThereYet? IEEETrans. abilitydetectionmodels,andprovidehigh-qualityexplanations. SoftwareEng.48,9(2022),3280–3296. [12] Checkmarx.2023. https://www.checkmarx.com/. Inthefuture,weplantoexploreamoreautomateddataaug- [13] TingChen,SimonKornblith,MohammadNorouzi,andGeoffreyE.Hinton.2020. mentationapproachtofurtherimprovetherobustnessofDL-based ASimpleFrameworkforContrastiveLearningofVisualRepresentations.In |