…detection models. In addition, we aim to work with our industry partners to deploy Coca in their proprietary security systems to test its effectiveness in practice.

ACKNOWLEDGMENTS
This research is supported by the National Natural Science Foundation of China (No. 62202414, No. 61972335, and No. 62002309), the Six Talent Peaks Project in Jiangsu Province (No. RJFW-053), the Jiangsu "333" Project and Yangzhou University Top-level Talents Support Program (2019), the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX22_3502), the Open Funds of State…
arXiv:2401.15468v1 [cs.SE] 27 Jan 2024

Large Language Model for Vulnerability Detection: Emerging Results and Future Directions

Xin Zhou, Ting Zhang, and David Lo
School of Computing and Information Systems, Singapore Management University, Singapore
{xinzhou.2020,tingzhang.2019}@phdcs.smu.edu.sg, davidlo@smu.edu.sg

ABSTRACT
Previous learning-based vulnerability detection methods relied on either medium-sized pre-trained models or smaller neural networks trained from scratch. Recent advancements in Large Pre-Trained Language Models (LLMs) have showcased remarkable few-shot learning capabilities in various tasks. However, the effectiveness of LLMs in detecting software vulnerabilities is largely unexplored. This paper aims to bridge this gap by exploring how LLMs perform with various prompts, particularly focusing on two state-of-the-art LLMs: GPT-3.5 and GPT-4. Our experimental results showed that GPT-3.5 achieves competitive performance with the prior state-of-the-art vulnerability detection approach, and GPT-4 consistently outperformed the state-of-the-art.

ICSE-NIER'24, April 14–20, 2024, Lisbon, Portugal
© 2024 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-0500-7/24/04. https://doi.org/10.1145/3639476.3639762

1 INTRODUCTION
Software vulnerabilities are prevalent issues in software systems, posing various risks such as the compromise of sensitive information [1] and system failures [2]. To address this challenge, researchers have proposed machine learning (ML) and deep learning (DL) approaches for identifying vulnerabilities in source code [3–6]. While previous ML/DL-based vulnerability detection approaches have demonstrated promising results, they have primarily relied on either medium-size pre-trained models such as CodeBERT [4, 7] or on training smaller neural networks (such as Graph Neural Networks [5]) from scratch.

Recent developments in Large Pre-Trained Language Models (LLMs) have demonstrated impressive few-shot learning capabilities across diverse tasks [8–12]. However, the performance of LLMs on security-oriented tasks, particularly vulnerability detection, remains largely unexplored. Moreover, LLMs are gradually starting to be used in software engineering (SE), as seen in automated program repair [8]. However, these studies predominantly focus on using LLMs for generation-based tasks. It remains unclear whether LLMs can be effectively utilized in classification tasks and outperform medium-size pre-trained models such as CodeBERT, specifically in the vulnerability detection task.

To fill these research gaps, this paper investigates the effectiveness of LLMs in identifying vulnerable code, a critical classification task within the security domain. Because the efficacy of LLMs heavily relies on the quality of the prompts (task descriptions and other relevant information) provided to the model, we explore and design diverse prompts to effectively apply LLMs to vulnerability detection. We specifically studied two prominent state-of-the-art LLMs, GPT-3.5 and GPT-4, both serving as the foundational models for ChatGPT. Our experimental results revealed that, with appropriate prompts, GPT-3.5 achieves competitive performance with CodeBERT, and GPT-4 outperformed CodeBERT by 34.8% in terms of Accuracy. In summary, our contributions are as follows:

• We conduct experiments with diverse prompts for LLMs, encompassing task and role descriptions, project information, and examples from the Common Weakness Enumeration (CWE) and the training set. We recognize LLMs as promising models for vulnerability detection.
• We pinpoint several promising future directions for leveraging LLMs in vulnerability detection, and we encourage the community to delve into these possibilities.
2 PROPOSED APPROACH
ChatGPT and In-Context Learning. ChatGPT (Plus) is built upon closed-source large-size LLMs known as GPT-3.5 and GPT-4. Much prior research employs medium-size pre-trained models like CodeBERT and CodeT5. These models are commonly fine-tuned, updating all parameters to align with labeled training data [13, 14]. Though very effective, fine-tuning demands large GPU resources to load and update all parameters of pre-trained models [11, 15]. As large-size LLMs (e.g., ChatGPT) have a large number of parameters, it is very challenging to fine-tune them using the GPU cards widely used in academia. An alternative and widely adopted approach for large-size LLMs, introduced with GPT-3, is in-context learning (ICL) [12, 16]. ICL involves freezing the parameters of the LLM and utilizing suitable prompts to impart task-specific knowledge to the model. Unlike fine-tuning, ICL requires no parameter updates, which significantly reduces the GPU resource requirement. At inference/testing time, ICL makes predictions based on the probability of generating the next token 𝑡 given the unlabeled data instance 𝑥 and the prompt 𝑃. The output token 𝑡 is then mapped into the prediction categories by the verbalizer (introduced below).

Prompt Basics. A prompt is a textual string that has two slots: (1) an input slot [𝑋] for the original input data 𝑥, and (2) an answer slot [𝑍] for the predicted answer 𝑧. The verbalizer, denoted as 𝑉, is a function that maps the predicted answer 𝑧 to a class 𝑦̂ in the target class set 𝑌, formally 𝑉 : 𝑍 → 𝑌. For instance, a straightforward prompt and verbalizer are shown as follows:

𝑓_prompt(𝑥) = "Code is [𝑋]. It is [𝑍]."   and   𝑉(𝑍) = { +, if 𝑍 = "vulnerable"; −, if 𝑍 = "non-vulnerable" },

where 𝑉 is the defined verbalizer in which the token "vulnerable" is mapped into the positive class. ChatGPT may generate responses that differ from our predefined label words. To simplify the process, we manually check the predicted classes of ChatGPT's generated answers when they diverge from our specified label words. For example, we map the answer "it is vulnerable because ..." into the "vulnerable" class.

Prompt Designs. Table 1 shows our designed prompts (the base prompt plus several augmentations). OpenAI allows users to guide ChatGPT through two types of messages/prompts: 1) the system message, which influences ChatGPT's overall behavior, such as adjusting the personality of ChatGPT, and 2) the user message, which contains the requests for ChatGPT to address and respond to. We use an empty system message and user messages that are listed as prompts in Table 1. Initially, we designed a straightforward prompt (P), "Now you need to identify whether a method contains a vulnerability or not.", as the base prompt. This base prompt only briefly describes the task we want LLMs to do. To provide LLMs with more valuable task-specific information, we propose diverse augmentations (A*) to the base prompt, including the following:

Role Description (A1): We explicitly define the role of the LLM in this task: "You are an experienced developer who knows the security vulnerability very well." This strategy aims to remind LLMs to switch their working mode to a security-related one.

Project Information (A2): Recently, Li et al. [17] proposed the state-of-the-art LLM for code, namely StarCoder, and found that adding the file name to the prompt can substantially improve the effectiveness of StarCoder. We followed them to provide LLMs with the project names and file names associated with the target code.

External Source Knowledge (A3): The CWE system offers a wealth of information about software vulnerabilities, such as code examples of typical vulnerable code. Leveraging such resources could possibly enhance the prompt generation process for vulnerability detection tasks. In this study, we collected the vulnerable code examples that represent the top 25 most dangerous Common Weakness Enumeration (CWE) types identified in the year 2022 [18]. These examples showcase the characteristics and patterns of vulnerabilities, equipping LLMs with valuable insights. This allows us to extend the model's knowledge beyond the limitations of the training data by leveraging external sources, specifically the CWE system.

Knowledge in the Training Set (A4): The training data encompasses valuable task-specific knowledge pertinent to a given task. However, we can only accommodate a limited number of input-output samples because ChatGPT has a maximum token limit of 4,096. In this strategy, we randomly select K samples from the training data to leverage the knowledge embedded within the training dataset. Both vulnerable and non-vulnerable samples are used in this strategy.

Selective Knowledge in the Training Set (A5): In contrast to the aforementioned strategy, we adopted a different approach by retrieving the top K most similar methods from the training data instead of sampling randomly. These retrieved methods served as examples to furnish LLMs with pertinent knowledge, aiding their decision-making process when evaluating the test data. To perform the retrieval, we employed CodeBERT [7] to transform the code snippets into semantic vectors. Subsequently, we quantified the similarity between two code snippets by calculating the cosine similarity of their respective semantic vectors. For a given test code, we retrieved the top K similar methods along with their corresponding vulnerability labels from the training data.

Table 1: Our prompt designs.

No. | Prompt Type | Prompt Template | Verbalizer
P | Task Description | "Now you need to identify whether a method contains a vulnerability or not. If it has any potential vulnerability, output: 'this code is vulnerable'. Otherwise, output: 'this code is non-vulnerable'. The code is [X]. Let's start: [Z]" | +: this code is vulnerable; −: this code is non-vulnerable
A1 | Role Description | "You are an experienced developer who knows the security vulnerability very well." | (same as P)
A2 | Project Information | "The code is from [the name of project]. The file name is [the name of file]." | (same as P)
A3 | Dangerous CWE Type Examples | "Here are examples of the most dangerous CWE types. Example 1: int returnChunkSize... Label 1: this code is vulnerable. Example 2: static... ..." | (same as P)
A4 | Randomly Sampled Code | "Here are sampled examples from the training data. Example 1: int... Label 1: this code is vulnerable. Example 2: static... Label 2: this code is non-vulnerable. ..." | (same as P)
A5 | Retrieved Similar Code | "Here are the most similar codes from the training data. Example 1: int... Label 1: this code is non-vulnerable. Example 2: static... Label 2: this code is vulnerable. ..." | (same as P)
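The prompt-filling function 𝑓_prompt and the verbalizer 𝑉 described above can be sketched in a few lines of Python. This is our illustrative reconstruction, not the authors' released code: the template text follows prompt P in Table 1, and the substring fallback stands in for the manual check the paper performs when answers diverge from the label words.

```python
# Sketch of the base prompt P and verbalizer V (illustrative names).

BASE_PROMPT = (
    "Now you need to identify whether a method contains a vulnerability "
    "or not. If it has any potential vulnerability, output: 'this code is "
    "vulnerable'. Otherwise, output: 'this code is non-vulnerable'. "
    "The code is {code}. Let's start: "
)

def fill_prompt(code: str) -> str:
    """f_prompt(x): place the input x into the slot [X]."""
    return BASE_PROMPT.format(code=code)

def verbalize(answer: str) -> str:
    """V: Z -> Y. Map a generated answer z to a class label.

    Answers that diverge from the predefined label words (e.g.
    "it is vulnerable because ...") are resolved here by substring
    matching; the paper resolved such cases manually.
    """
    a = answer.lower()
    if "non-vulnerable" in a:
        return "-"  # negative class
    if "vulnerable" in a:
        return "+"  # positive class
    raise ValueError(f"unmappable answer: {answer!r}")

print(verbalize("this code is vulnerable"))       # maps to +
print(verbalize("it is vulnerable because ..."))  # divergent answer, still +
```

Checking "non-vulnerable" before "vulnerable" matters, since the former contains the latter as a substring.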
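The A5 retrieval step amounts to a nearest-neighbor search over embedding vectors. A minimal sketch, assuming the function-level vectors have already been produced by CodeBERT (stubbed here with toy 2-D arrays); `top_k_similar` is our name, not the paper's.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two semantic vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def top_k_similar(query_vec, train_vecs, train_labels, k=3):
    """Return the k training examples most similar to the query.

    In the paper the vectors are CodeBERT embeddings of code
    snippets; here they are simply given as arrays.
    """
    sims = [cosine(query_vec, v) for v in train_vecs]
    order = np.argsort(sims)[::-1][:k]  # indices of largest similarities
    return [(int(i), train_labels[i], sims[i]) for i in order]

# Toy demo with 2-D stand-in "embeddings".
train_vecs = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
train_labels = ["vulnerable", "non-vulnerable", "vulnerable"]
query = np.array([1.0, 0.05])

for idx, label, sim in top_k_similar(query, train_vecs, train_labels, k=2):
    print(idx, label, round(sim, 3))
```

The retrieved examples and their labels are then pasted into the A5 prompt template as "Example i: ... Label i: ...".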
3 PRELIMINARY EVALUATION
In this work, we aim to answer a single research question: How effective is ChatGPT with different prompt designs in vulnerability detection compared to baselines?

Dataset and Model. We use the vulnerability-fixing commit dataset recently collected by Pan et al. [19]. To get the vulnerable functions from vulnerability-fixing commits, we followed Fan et al. [20]: we first collected the software versions prior to a vulnerability-fixing commit, and then labeled the functions with lines changed in the patch as vulnerable. All remaining functions in a file touched by the commit were regarded as non-vulnerable. As the security-patches dataset covers a large number of software repositories implemented in diverse programming languages, it is challenging to write the corresponding parsers (used to split functions from files) for all languages. Thus, in this preliminary evaluation, we only focus on software repositories implemented in C/C++. To build our test set, we first randomly sampled 20 open-source software repositories implemented in C/C++ from the original test-set split of Pan et al. [19] and used their vulnerability fixes to get the vulnerable functions (positive samples) for our test set. Due to the considerable cost [21] associated with querying ChatGPT on a large test set, we limited our sampling to 20 repositories. Despite the restricted sample size, our preliminary experiment was designed to showcase the potential of ChatGPT. For our training/validation sets, we used all the vulnerability fixes of the C/C++ repositories in the original training/validation splits of Pan et al. and extracted the vulnerable functions (positive samples). To obtain negative samples, i.e., non-vulnerable functions, for the test/training/validation sets, we employed a random sampling technique: for each vulnerable function, we selected one function at random from the non-vulnerable functions extracted from the same file as the vulnerable function. Different from the vulnerable function, those non-vulnerable functions had not been modified by the vulnerability-fixing commit. Finally, our dataset has 7,683/853/368 methods in the training/validation/test sets.
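The labeling heuristic and the balanced negative sampling just described can be sketched as follows; the helper names and toy data are ours, not part of the dataset's tooling.

```python
import random

def label_functions(file_functions, changed_functions):
    """Label the functions of one file touched by a vulnerability-fixing commit.

    Following the heuristic of Fan et al. used in the paper: functions with
    lines changed in the patch are vulnerable; all remaining functions in
    the touched file are non-vulnerable.
    """
    vulnerable = [f for f in file_functions if f in changed_functions]
    non_vulnerable = [f for f in file_functions if f not in changed_functions]
    return vulnerable, non_vulnerable

def sample_negatives(vulnerable, non_vulnerable, rng):
    """Draw one non-vulnerable function per vulnerable one, from the same
    file, yielding the balanced dataset used in the paper."""
    return [rng.choice(non_vulnerable) for _ in vulnerable]

# Toy example: one file with four functions, one touched by the fix.
funcs = ["parse_header", "read_chunk", "free_buf", "checksum"]
changed = {"read_chunk"}
vuln, nonvuln = label_functions(funcs, changed)
negs = sample_negatives(vuln, nonvuln, random.Random(0))
print(vuln, negs)
```

As the Threats to Validity section notes, this heuristic can over-label when a fix commit is tangled with unrelated changes.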
For the studied models, we primarily focused our investigation on ChatGPT (GPT-3.5) with the model name gpt-3.5-turbo, while also doing limited experiments with ChatGPT (GPT-4). For the baseline model, we opted for one of the state-of-the-art approaches, CodeBERT, according to a recent comprehensive empirical study [22].

Evaluation. To measure the models' effectiveness, we adopted widely used evaluation metrics, i.e., Accuracy, Precision, Recall, F1, and F0.5. We incorporated the F0.5 metric, which assigns greater importance to precision than to recall. This choice is motivated by developers' aversion to false positives, as a low success rate may diminish their patience and confidence in the system [23]. As the ICL method exhibits some instability, in this preliminary work we repeated each experiment twice and report the average results.
Table 2: Results of ChatGPT with diverse prompts and the fine-tuned CodeBERT.

Model | Prompt | Prompt/Model Description | Param. | Accuracy | Precision | Recall | F1 | F0.5
ChatGPT (GPT-3.5) | P | provide the task description to the LLM | - | 50.0 | NaN | 0.0 | NaN | NaN
ChatGPT (GPT-3.5) | P+A1 | provide the role description to the LLM | - | 50.0 | NaN | 0.0 | NaN | NaN
ChatGPT (GPT-3.5) | P+A2 | provide the project name | - | 50.0 | NaN | 0.0 | NaN | NaN
ChatGPT (GPT-3.5) | P+A3 | provide vulnerable code examples from the 25 most dangerous CWE types in 2022 | - | 59.1 | 72.2 | 29.5 | 41.9 | 56.0
ChatGPT (GPT-3.5) | P+A4 | randomly sample K input-output pairs | K=1 | 51.8 | 75.2 | 6.0 | 10.7 | 20.5
ChatGPT (GPT-3.5) | P+A4 | | K=3 | 58.8 | 65.8 | 41.2 | 50.2 | 58.4
ChatGPT (GPT-3.5) | P+A4 | | K=5 | 61.4 | 80.0 | 30.3 | 43.9 | 60.1
ChatGPT (GPT-3.5) | P+A5 | retrieve the top K most similar code | K=1 | 55.4 | 67.2 | 21.2 | 32.3 | 46.9
ChatGPT (GPT-3.5) | P+A5 | | K=3 | 56.7 | 60.1 | 39.9 | 48.0 | 54.6
ChatGPT (GPT-3.5) | P+A5 | | K=5 | 59.8 | 63.2 | 47.2 | 54.0 | 59.2
ChatGPT (GPT-3.5) | P+A4+A5 | combine A4 and A5 together (both top 3) | K=3 | 62.7 | 76.3 | 36.8 | 49.7 | 62.8
CodeBERT | - | full-parameter fine-tuned | - | 60.3 | 62.3 | 53.3 | 57.3 | 60.1

Table 3: Results of GPT-4 and baselines on the first half of the test set.

Model | Prompt | Prompt/Model Description | Param. | Accuracy | Precision | Recall | F1 | F0.5
ChatGPT (GPT-3.5) | P+A4+A5 | combine A4 and A5 together | K=3 | 63.6 | 77.8 | 38.0 | 51.1 | 64.3
ChatGPT (GPT-4) | P | provide the task description to the LLM | - | 60.3 | 67.3 | 40.2 | 50.3 | 59.3
ChatGPT (GPT-4) | P+A3 | code examples from CWE types | - | 75.5 | 73.7 | 79.3 | 76.4 | 74.8
ChatGPT (GPT-4) | P+A5 | retrieve the top K most similar code | K=5 | 61.4 | 63.6 | 53.3 | 58.0 | 61.3
ChatGPT (GPT-4) | P+A4+A5 | combine A4 and A5 together | K=3 | 59.2 | 60.2 | 54.3 | 57.1 | 59.0
CodeBERT | - | fine-tuned, tested on the half test set | - | 56.0 | 57.3 | 46.7 | 51.5 | 54.8

Results. The performance of GPT-3.5 in vulnerability detection was evaluated with the different prompts, and the results are summarized in Table 2. Experimental results revealed that the base prompt yielded unsatisfactory outcomes, with GPT-3.5 predicting every target code as non-vulnerable, resulting in an Accuracy of 50% and a Recall of 0%. The inclusion of role descriptions and project information did not contribute to better performance. However, incorporating examples from external source knowledge, specifically the 25 most dangerous CWE types, led to substantial performance improvements (18.2% in Accuracy). Furthermore, randomly sampling code from the training data and retrieving similar code also resulted in significantly better performance (up to 22.8% and 19.6% in Accuracy, respectively) compared to the base prompt. Among all the prompt combinations studied, the P+A5 combination achieved the highest F1 (54.0%) and Recall (47.2%), and the P+A4+A5 combination achieved the best F0.5 (62.8%) and Accuracy (62.7%). The P+A4+A5 combination outperforms the base prompt P by 25.4% in Accuracy. When comparing GPT-3.5 to the state-of-the-art approach, GPT-3.5 (P+A4+A5) outperformed CodeBERT by 4.0%, 22.5%, and 4.5% in terms of Accuracy, Precision, and F0.5, respectively. However, GPT-3.5 underperformed CodeBERT by 44.8% and 15.3% in Recall and F1. These experimental results highlight the distinct strengths of CodeBERT and GPT-3.5: GPT-3.5 demonstrates significantly higher Precision, indicating its proficiency in minimizing false positives, while CodeBERT showcases a much higher Recall, signifying its capability to identify a greater number of vulnerabilities. Overall, GPT-3.5 demonstrates competitive performance when compared to the fine-tuned CodeBERT.

The performance of GPT-4 is presented in Table 3. Notably, GPT-4 is accompanied by a considerably higher cost; due to the high costs, in this preliminary evaluation we only evaluate GPT-4 on the first half of the test set. To assess its performance, we employed four different prompts: the base prompt (P), the external source knowledge prompt P+A3, the prompt P+A5, which obtained the second-best F1 for GPT-3.5, and the prompt P+A4+A5, which obtained the best F1 for GPT-3.5. As illustrated in Table 3, GPT-4 with the prompt P+A3 significantly outperformed the fine-tuned CodeBERT, by 34.8% in terms of Accuracy.
However,incorporatingexamplesfromexternalsourceknowledge, OneconcernarisesduetothepossibilityofdataleakageinChat- specificallythe25mostdangerousCWEtypes,ledtosubstantial GPT.Thedatasetusedinourevaluationmayoverlapwiththedata performanceimprovements(18.2%inAccuracy).Furthermore,the usedfortrainingChatGPT.However,becauseChatGPTisaclosed- utilizationofrandomsamplingcodesfromthetrainingdataand sourcemodel,welackthemeanstovalidatewhethersuchanover- retrievingsimilarcodesalsoresultedinsignificantlybetterperfor- lapexists.Anotherthreattovalidityistheequaldataratiobetween mance(upto22.8%and19.6%inAccuracy)comparedtothebase vulnerable and non-vulnerable functions in the test set. We cre- prompt. Among all the prompt combinations studied, the P+A5 ateabalancedtestsettoalleviatecostslinkedtoChatGPTusage, combinationachievedthehighestF1score(54.0%)andRecall(47.2%), giventhattheexpensesoftheseexperimentsriseinproportionto thenumber oftest samples, particularlywith alarge number of 3ICSE-NIER’24,April14–20,2024,Lisbon,Portugal XinZhou,TingZhang,andDavidLo non-vulnerablefunctions.However,theequaldataratiodoesnot tunedLLMcanalleviatetheconcernsoforganizationsthatpriori- realistically represent theactual problemofvulnerability predic- tizedatasecurityandprivacywhilealsomakinguseoftheabun- tionwherethevulnerablecodeistheminorityinsoftwaresystems. dantopen-sourcevulnerabilitydata. Besides, thisdataratiowilllead toinflated metricsinthisstudy. PrecisionandRobustnessBoostinVulnerabilityDetection. We recognize this issue as a limitation of the preliminary study, Avulnerabilitydetectionsolutionwithahighprecisionisusually andinourfuturework,weaimtodevelopanewtestsetthatac- preferred.Additionally,thesolutionneedstoremainrobustagainst curatelyreflectsthereal-worlddataratiobetweenvulnerableand data perturbationsor adversarial attacks. 
non-vulnerable functions. Lastly, we followed Fan et al. [20] to label functions with lines changed in a vulnerability-fixing commit as vulnerable. This labeling heuristic may overestimate the actual number of vulnerable functions due to commit tangling, where some changed functions in the vulnerability-fixing commit may not be directly relevant to addressing the vulnerability. We recognize the challenge of accurately identifying real vulnerability-related functions in the vulnerability-fixing commit. In our future work, we aim to employ existing techniques (e.g., [24]) to untangle the vulnerability-fixing commits.

With higher precision, developers will have greater confidence in the reliability of detections and consequently perform the requisite actions to address the detected vulnerabilities. With higher robustness, a more secure and stable vulnerability detection model can be produced, and be more immune to adversarial attacks.

In the future, we plan to boost LLMs' precision and robustness in this task: 1) To improve precision, we plan to employ ensemble learning, a promising technique to improve the precision (and possibly recall) by identifying the common high-confidence predictions among different models. 2) To improve robustness, we plan to extend an existing work [33] that employs a single adversarial transformation (renaming of variables) to enhance model robustness. We will delve into various other types of adversarial transformations and assess their effectiveness in enhancing the robustness of LLMs.

5 RELATED WORK

We are unable to find vulnerability detection work using GPT-3.5 and GPT-4 in the academic literature. However, there are several related studies in the gray (non-peer-reviewed) literature. For example, BurpGPT [25] integrates ChatGPT with the Burp Suite [26] to detect vulnerabilities in web applications. In contrast, the software studied in our study is not confined to a specific domain. Also, vuln_GPT, an LLM designed to discover and address software vulnerabilities, was introduced recently [27]. Different from [27], our study focuses on improving prompts for vulnerability detection. A parallel work [28] also explores the use of ChatGPT in vulnerability detection. They mainly enhance prompts through structural and sequential auxiliary information. Three distinctions of our work from [28] are: 1) We also studied GPT-4 while they did not; 2) We integrate knowledge from the CWE system and similar samples from the training set to enhance prompts, a dimension not considered in [28]; 3) We identified promising future study directions which are not thoroughly discussed in [28].

Enhancing Effectiveness in Long-Tailed Distribution. In this study, we formulate the task as a binary classification (vulnerable or non-vulnerable). Moving one step further, developers may require the tool to indicate the specific vulnerability type (e.g., CWE types) associated with the detected vulnerable code. This additional information is crucial for a better understanding and resolution of the vulnerability. However, a recent study [34] revealed that vulnerability data exhibit a long-tailed distribution in terms of CWE types: a small number of CWE types have a substantial number of samples, while numerous CWE types have very few samples. The study also pointed out that LLMs struggle to effectively handle vulnerabilities in these less common types. This long-tailed distribution could pose a challenge for LLM-based vulnerability detection solutions.
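The ensemble idea mentioned above, keeping only predictions on which every model is highly confident, can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the function name, score format, and threshold are assumptions.

```python
# Hypothetical sketch of high-confidence ensemble voting for vulnerability
# detection: a function is flagged only when ALL models score it above a
# confidence threshold, trading recall for precision.

def high_confidence_ensemble(predictions, threshold=0.8):
    """predictions: list of dicts mapping function_id -> vulnerability score.

    Returns the sorted ids flagged by every model with score >= threshold.
    """
    flagged = None
    for model_scores in predictions:
        confident = {f for f, s in model_scores.items() if s >= threshold}
        # Intersect the confident sets across models.
        flagged = confident if flagged is None else flagged & confident
    return sorted(flagged or set())

scores_a = {"f1": 0.95, "f2": 0.40, "f3": 0.85}
scores_b = {"f1": 0.90, "f2": 0.88, "f3": 0.30}
print(high_confidence_ensemble([scores_a, scores_b]))  # -> ['f1']
```

Only "f1" survives because it is the only function both (hypothetical) models score above 0.8, which is exactly the precision-over-recall trade-off the paragraph describes.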
In the future, we plan to 1) explore whether LLMs, specifically ChatGPT, can effectively detect these infrequent vulnerabilities or not, and 2) propose a solution (e.g., generating more samples for the less common types via data augmentation) to address the impact of the long-tailed distribution of vulnerability data.

6 FUTURE WORK

There are plenty of exciting paths to explore in future research. Here, we highlight a few of these potential directions:

Local and Specialized LLM-based Vulnerability Detection. This study focuses on ChatGPT. However, ChatGPT requires data to be sent to third-party services. This may restrict the utilization of ChatGPT-related vulnerability detection tools among specific organizations, such as major tech corporations or governments. These organizations, e.g., [29–31], regard their source code as proprietary, sensitive, or classified material, and as a result, they are unable to transmit or share it with third-party services. Additionally, ChatGPT models are not specialized for vulnerability detection and thus may not take full advantage of the rich open-source vulnerability data.

Trust and Synergy with Developers. AI-powered solutions for vulnerability detection, including this work, have limited interaction with developers. They may face challenges in establishing trust and synergy with developers during practical use. To overcome this, future works should investigate more effective strategies to foster trust and collaboration between developers and AI-powered solutions [35]. By nurturing trust and synergy, AI-powered solutions may evolve into smart workmates to better assist developers.
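The data augmentation idea suggested above, generating more samples for less common CWE types, could take many forms. One minimal sketch is to oversample rare classes with identifier-renamed variants; the regex-based renaming, identifier name, and target count below are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch: oversample rare CWE classes by creating syntactically
# equivalent, identifier-renamed copies of existing samples.
import re
from collections import Counter

def rename_variant(code, old, new):
    # Whole-word rename of one identifier ("buf" below is a stand-in).
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

def augment_rare_classes(samples, min_count=2):
    """samples: list of (cwe_type, code). Duplicate rare classes with renamed
    variants until each class reaches `min_count` samples."""
    counts = Counter(cwe for cwe, _ in samples)
    augmented = list(samples)
    for cwe, code in samples:
        i = 0
        while counts[cwe] < min_count:
            augmented.append((cwe, rename_variant(code, "buf", f"buf_{i}")))
            counts[cwe] += 1
            i += 1
    return augmented

data = [("CWE-787", "memcpy(buf, src, n);"),
        ("CWE-79", "echo(buf);"),
        ("CWE-79", "print(buf);")]
print(len(augment_rare_classes(data)))  # -> 4 (CWE-787 gains one variant)
```

Here the singleton CWE-787 class gains one renamed copy while the already-populated CWE-79 class is left untouched, flattening the tail of the distribution slightly.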
Large Language Model for Vulnerability Detection: Emerging Results and Future Directions. ICSE-NIER'24, April 14–20, 2024, Lisbon, Portugal.

To address the above-mentioned limitations, in the future, we aim to propose a local and specialized LLM solution for vulnerability detection. The solution will build upon general-purpose and open-source code LLMs, e.g., Llama [32], which will be tuned for vulnerability detection with the relevant vulnerability corpus.

7 CONCLUSION

In this study, we explored the efficacy and potential of LLMs (i.e., ChatGPT) in vulnerability detection. We proposed some insightful prompt enhancements such as incorporating external knowledge and choosing valuable samples from the training set. We also identified many promising directions for future study. We made our replication package¹ publicly available for future studies.

Acknowledgement. This research/project is supported by the National Research Foundation, under its Investigatorship Grant (NRF-NRFI08-2022-0002). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.

¹https://github.com/soarsmu/ChatGPT-VulDetection

REFERENCES

[1] Microsoft Exchange Flaw: Attacks Surge After Code Published. https://www.bankinfosecurity.com/ms-exchange-flaw-causes-spike-intrdownloader-gen-trojans-a-16236, 2022.
[2] Dean Turner, Marc Fossi, Eric Johnson, Trevor Mack, Joseph Blackbird, Stephen Entwisle, Mo King Low, David McKinney, and Candid Wueest. Symantec global internet security threat report – trends for July-December 07. Symantec Enterprise Security, 13:1–36, 2008.
[3] Hazim Hanif and Sergio Maffeis. VulBERTa: Simplified source code pre-training for vulnerability detection. In 2022 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE, 2022.
[4] Michael Fu and Chakkrit Tantithamthavorn. LineVul: a transformer-based line-level vulnerability prediction. In Proceedings of the 19th International Conference on Mining Software Repositories, pages 608–620, 2022.
[5] Van-Anh Nguyen, Dai Quoc Nguyen, Van Nguyen, Trung Le, Quan Hung Tran, and Dinh Phung. ReGVD: Revisiting graph neural networks for vulnerability detection. In Proceedings of the ACM/IEEE 44th International Conference on Software Engineering: Companion Proceedings, pages 178–182, 2022.
[6] Yaqin Zhou, Shangqing Liu, Jingkai Siow, Xiaoning Du, and Yang Liu. Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. Advances in Neural Information Processing Systems, 32, 2019.
[7] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. CodeBERT: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1536–1547. Association for Computational Linguistics, 2020.
[8] Chunqiu Steven Xia and Lingming Zhang. Keep the conversation going: Fixing 162 out of 337 bugs for $0.42 each using ChatGPT. arXiv preprint arXiv:2304.00385, 2023.
[9] Ting Zhang, Ivana Clairine Irsan, Ferdian Thung, and David Lo. Cupid: Leveraging ChatGPT for more accurate duplicate bug report detection. arXiv preprint arXiv:2308.10022, 2023.
[10] Ting Zhang, Ivana Clairine Irsan, Ferdian Thung, and David Lo. Revisiting sentiment analysis for software engineering in the era of large language models. arXiv preprint arXiv:2310.11113, 2023.
[11] Martin Weyssow, Xin Zhou, Kisub Kim, David Lo, and Houari Sahraoui. Exploring parameter-efficient fine-tuning techniques for code generation with large language models. arXiv preprint arXiv:2308.10462, 2023.
[12] Xin Zhou, Bowen Xu, Kisub Kim, DongGyun Han, Thanh Le-Cong, Junda He, Bach Le, and David Lo. PatchZero: Zero-shot automatic patch correctness assessment. arXiv preprint arXiv:2303.00202, 2023.
[13] Ting Zhang, Bowen Xu, Ferdian Thung, Stefanus Agus Haryono, David Lo, and Lingxiao Jiang. Sentiment analysis for software engineering: How far can pre-trained transformer models go? In 2020 IEEE International Conference on Software Maintenance and Evolution (ICSME), pages 70–80. IEEE, 2020.
[14] Xin Zhou, DongGyun Han, and David Lo. Assessing generalizability of CodeBERT. In 2021 IEEE International Conference on Software Maintenance and Evolution (ICSME), pages 425–436. IEEE, 2021.
[15] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2022.
[16] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
[17] Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. StarCoder: may the source be with you! arXiv preprint arXiv:2305.06161, 2023.
[18] https://cwe.mitre.org/top25/archive/2022/2022_cwe_top25.html, 2022.
[19] Shengyi Pan, Lingfeng Bao, Xin Xia, David Lo, and Shanping Li. Fine-grained commit-level vulnerability type prediction by CWE tree structure. In 45th International Conference on Software Engineering (ICSE), 2023.
[20] Jiahao Fan, Yi Li, Shaohua Wang, and Tien N. Nguyen. A C/C++ code vulnerability dataset with code changes and CVE summaries. In Proceedings of the 17th International Conference on Mining Software Repositories, pages 508–512, 2020.
[21] https://openai.com/pricing, 2023.
[22] Benjamin Steenhoek, Md Mahbubur Rahman, Richard Jiles, and Wei Le. An empirical study of deep learning models for vulnerability detection. In 45th International Conference on Software Engineering (ICSE), 2023.
[23] Pavneet Singh Kochhar, Xin Xia, David Lo, and Shanping Li. Practitioners' expectations on automated fault localization. In Proceedings of the 25th International Symposium on Software Testing and Analysis, pages 165–176, 2016.
[24] Yi Li, Shaohua Wang, and Tien N. Nguyen. UTANGO: untangling commits with context-aware, graph-based, code change clustering learning model. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 221–232, 2022.
[25] Alexandre Teyar. BurpGPT - ChatGPT powered automated vulnerability detection tool. https://burpgpt.app/#faq, 2023.
[26] PortSwigger. Burp Suite - application security testing software. https://portswigger.net/burp.
[27] Vicarius. vuln_GPT debuts as AI-powered approach to find and remediate software vulnerabilities. https://venturebeat.com/ai/got-vulns-vuln_gpt-debuts-as-ai-powered-approach-to-find-and-remediate-software-vulnerabilities/, 2023.
[28] Chenyuan Zhang, Hao Liu, Jiutian Zeng, Kejing Yang, Yuhong Li, and Hui Li. Prompt-enhanced software vulnerability detection using ChatGPT. arXiv preprint arXiv:2308.12697, 2023.
[29] Jon Harper. Pentagon testing generative AI in 'global information dominance' experiments. https://defensescoop.com/2023/07/14/pentagon-testing-generative-ai-in-global-information-dominance-experiments/, 2023.
[30] Kyle Chua. Samsung bans use of generative AI tools on company-owned devices over security concerns. https://www.tech360.tv/samsung-bans-use-generative-ai-tools, 2023.
[31] Kyle Chua. Apple bans internal use of ChatGPT and GitHub Copilot over fear of leaks. https://www.tech360.tv/apple-bans-internal-use-chatgpt-github-copilot-over-fears-of-leaks, 2023.
[32] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[33] Zhou Yang, Jieke Shi, Junda He, and David Lo. Natural attack for pre-trained models of code. In Proceedings of the 44th International Conference on Software Engineering, 2022.
[34] Xin Zhou, Kisub Kim, Bowen Xu, Jiakun Liu, DongGyun Han, and David Lo. The devil is in the tails: How long-tailed code distributions impact large language models. arXiv preprint arXiv:2309.03567, 2023.
[35] David Lo. Trustworthy and synergistic artificial intelligence for software engineering: Vision and roadmaps. CoRR, abs/2309.04142, 2023.
arXiv:2401.16185

LLM4Vuln: A Unified Evaluation Framework for Decoupling and Enhancing LLMs' Vulnerability Reasoning

Yuqiang Sun¹, Daoyuan Wu²†, Yue Xue³, Han Liu², Wei Ma¹, Lyuye Zhang¹, Yang Liu¹, Yingjiu Li⁴
¹ Nanyang Technological University  ² The Hong Kong University of Science and Technology  ³ MetaTrust Labs  ⁴ University of Oregon

Abstract

Large language models (LLMs) have demonstrated significant potential in various tasks, including vulnerability detection. However, current efforts in this area are preliminary, lacking clarity on whether LLMs' vulnerability reasoning capabilities stem from the models themselves or external aids such as knowledge retrieval and tooling support.

This paper aims to isolate LLMs' vulnerability reasoning from other capabilities, such as vulnerability knowledge adoption, context information retrieval, and structured output generation. We introduce LLM4Vuln, a unified evaluation framework that separates and assesses LLMs' vulnerability reasoning capabilities and examines improvements when combined with other enhancements. We conducted controlled experiments with 97 ground-truth vulnerabilities and 97 non-vulnerable cases in Solidity and Java, testing them in a total of 9,312 scenarios across four LLMs (GPT-4, GPT-3.5, Mixtral, and Llama 3). Our findings reveal the varying impacts of knowledge enhancement, context supplementation, prompt schemes, and models. Additionally, we identified 14 zero-day vulnerabilities in four pilot bug bounty programs, resulting in $3,576 in bounties.

1 Introduction

In the rapidly evolving landscape of computer security, Large Language Models (LLMs) have significantly transformed our approach to complex challenges. Distinguished by extensive pre-training and strong instruction following capabilities, these models excel in understanding and interpreting the semantics of both human and programming languages. This has led to the emergence of LLM-based vulnerability detection, offering superior intelligence and flexibility compared to traditional program analysis-based vulnerability detection (e.g., [15,16,28,29,55,63,65,68]) and neural network-based vulnerability detection (e.g., [17,20,44,54,67,72]).

Under this emerging paradigm, triggered by the successful release of ChatGPT [13,38,48] on November 30, 2022, related research primarily focuses on two dimensions. One dimension involves designing specific LLM-based detectors for different security problems. For example, researchers have developed TitanFuzz [24], FuzzGPT [25], Fuzz4All [66], and ChatAFL [47] for fuzzing various vulnerabilities; GPTScan [56] and GPTLens [33] for detecting smart contract vulnerabilities; and LLift [42] and LATTE [46] for LLM-enhanced program and binary analysis.

The other dimension aims to benchmark or evaluate LLMs' capabilities in vulnerability detection. It examines how different models, configurations, and instructions influence detection results, addressing the key question, "How far have we come?" in LLM-based vulnerability detection. Notably, Thapa et al. [58] pioneered this effort by benchmarking transformer-based language models against RNN-based models for software vulnerability detection. Following the release of ChatGPT and GPT-4, additional LLM-focused benchmark studies have been conducted, including for smart contracts [18,23], traditional C/C++/Java vulnerabilities [26,31,37,60], and vulnerability repair [50,69].

Our research falls into the second dimension. However, instead of focusing on the performance of individual LLM instances and their configurations, we delve into the paradigm itself and consider what is missing or could be improved. To this end, we first abstract and generalize the paradigm of LLM-based vulnerability detection into an architecture shown in Figure 1. Existing LLM-based vulnerability detection typically takes a piece of target code (TC) and asks the LLM to determine whether TC is vulnerable under certain prompt schemes (e.g., role playing [33] and chain-of-thought [64]). However, the additional information about TC (e.g., the context of functions and variables involved in TC), which can be obtained by LLMs through invoking tool support, is often overlooked in the paradigm.
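The paradigm abstracted above can be sketched in a few lines: ask an LLM whether a target code snippet (TC) is vulnerable, optionally prepending tool-gathered context. This is an illustrative sketch, not any paper's implementation; `llm` is a placeholder for an arbitrary chat-model client and the prompt wording is assumed.

```python
# Minimal sketch of the LLM-based vulnerability detection paradigm:
# target code + optional tool-supplied context -> yes/no verdict from an LLM.

def detect(llm, target_code, context_fns=()):
    # Tool-gathered context (e.g., callee bodies) that the paradigm often
    # overlooks; each context_fn is a hypothetical program-analysis helper.
    context = "\n".join(fn(target_code) for fn in context_fns)
    prompt = (
        "You are a security auditor. Decide whether the following code is "
        "vulnerable and answer 'yes' or 'no' with a short reason.\n"
        + (f"Context:\n{context}\n" if context else "")
        + f"Target code:\n{target_code}"
    )
    return llm(prompt)

# Usage with a stubbed model (a real client would call an LLM API here):
fake_llm = lambda p: "yes: unchecked external call" if "call" in p else "no"
print(detect(fake_llm, 'addr.call{value: amt}("");'))
```

Swapping `fake_llm` for a real model client and populating `context_fns` with static-analysis lookups is precisely the design space the paper evaluates.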
†Correspondingauthor:DaoyuanWu.WorkconductedwhileatNTU. More importantly,LLMs are pre-trained up to a certain 1 4202 peS 5 ]RC.sc[ 2v58161.1042:viXracalpromptengineeringenhancementsbyexploringdifferent promptschemesandemploysthemostcapableGPT-4model Off-the-shelf LLMs: 𝐿𝐿𝑀 LLM’s own knowledge torefinetheraw,unstructuredoutputofmodelswithlesspro- Vulnerability Knowledge: 𝑉𝐾 $3.1 ficientinstruction-followingcapabilities.Morespecifically, Raw/Summarized Vuln Code Vuln Knowledge LLM4Vulnintroducesthefollowingdesign: $3.2 Target Code for LLM Matching: 𝑇𝐶 Knowledge Vuln Code Enhancement • Forknowledgeenhancement,LLM4Vulnnotonlyincor- + Context Context Scheme 1: Scheme 2: poratesrawvulnerabilityreportsforknowledgeretrieval Supplement Raw CoT LLM’s vulnerability butalsoconsiderssummarizedvulnerabilityknowledge. Prompt $3.3 reasoning capability Schemes Instruction However,thesummarizedknowledgecannotbedirectly Following $3.4 searchedusing only TC. Hence,during the automatic |
summarizationofvulnerabilityknowledge,wealsogen- (Annotated) Vulnerability Results eratedescriptionsofthefunctionalityorapplicablesce- narios for each original vulnerability. By embedding Figure1:AnillustrationoftheLLM-basedvulnerabilityde- thesesummarizedfunctionalitiesintoavectordatabase, tectionparadigmandtheLLM4Vulnframework. weenabletheretrievalofrelevantVKbasedonthesimi- laritybetweenitscorrespondingfunctionalityandthat cutoffdate1,makingitchallengingforLLMstoadapttothe ofTC. latestvulnerabilityknowledge.Inotherwords,itisessential • For context supplementation, we provide LLMs with toincorporaterelevantvulnerabilityknowledge(VK)intothe thecontextofTCthroughexternalprogramanalysisto paradigm. Furthermore,open-source LLMs typically have ensureafaircomparisonamongdifferentmodels.Inreal- weakerinstruction-followingcapabilitiesthanOpenAImod- worldvulnerabilitydetection,LLMscanobtaincontext els,duetothelatterbeingalignedwithextensivereinforce- informationthroughthefunctioncallingmechanism[9]. ment learning from human feedback (RLHF) [40], which couldindirectlyaffecttheLLM’sreasoningoutcomeinau- • Forpromptschemes,weadoptcommonpracticessuch tomatic evaluation. As we can see from above,the LLM’s aschain-of-thoughts(CoT)[64])andcustomizeitinto vulnerabilityreasoningcapabilitycanbeinfluencedbyvari- twospecificCoTschemesforvulnerabilityanalysis.The ousfactorsbeyondthemodelitselfanditsconfiguration. detailswillbeintroducedin§3.3. Based on this intuition,rather than treating LLM-based • Forinstructionfollowing,weuseawell-instructedmodel vulnerabilitydetectionasawholeforevaluation,wedecouple to align and structure the analysis output of a less- LLMs’ vulnerability reasoning capability from their other instructedmodel,facilitatingautomaticresultevaluation. 
capabilitiesforthefirsttimeandassesshowLLMs’vulnera- bilityreasoningcouldbeenhancedwhencombinedwiththe In this paper, we focus on the implementation of enhancementofothercapabilities.Asforbenchmarkingand LLM4VulnonSolidityandJava,whicharepopularprogram- tuningthemodelitselfunderdifferentconfigurations(e.g.,dif- minglanguagesforsmartcontractsandtraditionalsoftware, ferenttemperatures),wedefertoanotherdimensionofrelated respectively.BugtypesinSolidityandJavaaresignificantly work focused on how to pre-train vulnerability-specific or different,asistheassociatedknowledge.Toaddressthis,we security-orientedLLMs[19,27,30,32,35,39,43,49,51,62,73], havebuiltaknowledgedatabaseconsistingof1,013high-risk whichgoesbeyondmerelybeingalanguagemodel. vulnerabilityreportsforSoliditysmartcontractsand77Com- Toachievethisresearchobjective,weproposeLLM4Vuln, monWeaknessEnumeration(CWE)vulnerabilitycategories aunifiedevaluationframeworkfordecouplingandenhancing forJava to enhance knowledge. Additionally,we collected LLMs’ vulnerability reasoning. As illustrated in Figure 1, 51vulnerablecodesegmentsforSolidityand47forJava,as LLM4Vulnfirstconsidersthestate-of-the-artLLMs’ability wellas an equalnumberofnon-vulnerable code segments, to actively invoke tools for seeking additional information forevaluationwithground-truth. aboutTC,suchasthroughfunctioncallinginproprietaryOpe- Bytestingthese194piecesofcodeacross9,312scenarios, nAImodels[9]andinfine-tunedopen-sourcemodels[10,11]. 
whichincludethreetypesofknowledgeenhancement(one Besides supplying additional information for TC through withonlyLLMs’pre-trainedknowledge),twocontextsupple- LLM-invokedtools,LLM4Vulnalsodecouplesandenhances mentationoptions(withorwithoutcontextsupplementation), LLMs’ VK by providing a searchable vector database of andtwopromptschemes(rawandCoT),andusingfourrep- vulnerabilityknowledge,similartoretrieval-augmentedgen- resentativeLLMs(GPT-4,GPT-3.5,Mixtral,andLlama3), eration (RAG) [41] technique forNLP-domain knowledge weidentifythefollowingfourfindings(moredetailsin§5) enhancement. Furthermore, LLM4Vuln incorporates typi- basedonthebenchmarkresultsannotatedbyLLMsin§3.4): 1Forexample,gpt-4-1106-previewwaspre-trainedusingdatauptoApril • (KnowledgeEnhancement)Knowledgeenhancementhas 2023,whileCodeLlamawastrainedwithdatauptoJuly2023;see§2. diverseimpactsacrossdifferentprogramminglanguages. 2Positive impacts are observed in languages (e.g., So- Table1:MajorLLMsusedforsecuritytasks(inputpriceand lidity) with more logic vulnerabilities,while negative outputpriceareinUSDper1Mtokens). impactsmayoccurintraditionallanguages(e.g.,Java) Max. Knowledge Input Output thathavewell-organizedCWEcategories.Regardlessof ModelAPI Token CutoffDate Price Price thelanguage,summarizedknowledgeimprovesLLMs’ gpt-4-1106-preview 128k 04/2023 10 30 vulnerability reasoning more effectively than original gpt-4-32k-0613 32k 09/2021 60 120 vulnerabilityreports. gpt-4-0613 8k 09/2021 30 60 gpt-3.5-turbo-1106 16k 09/2021 1 2 • (ContextSupplementation)Supplyingcontextmaynot mixtral-8x7b 32k Summer2023 0.30 1 |
alwaysenhanceLLMs’abilitytoreasonaboutvulnera- mixtral-7b 32k Summer2023 0.05 0.25 bilities.Itmayalsoleadtodistractions,hinderingLLMs llama-3-8b 8k 03/2023 0.05 0.25 fromaccuratelyidentifyingvulnerabilities. codellama-13b 16k 07/2023 0.1 0.5 • (Prompt Schemes) Chain-of-thought based prompt schemessignificantlyaffectLLMs’performanceinvul- codeandcommentsfromGitHub,employingalearningsys- nerabilitydetection.Decomposingcomplextasksinto tem known as spaced repetition to focus training on tasks simplersub-taskshelpsimproveLLMs’performance. wherethemodelismorelikelytomakeerrors. Thesemodelscanbetailoredforspecificapplicationsus- • (Model Selection) The enhancement discussed in this ingmethodslikefine-tuningorzero-shotlearning,enabling paperhasasimilarimpactontheperformanceofdiffer- themtoutilizetoolsandaddressproblemsbeyondtheirini- entmodels.However,formodelswithpoorcapability tialtrainingdata[38,53].OpenAIandotherresearchgroups invulnerabilitydetection,theimpactofthesefactorsis havedemonstratedtheeffectivenessoftheseapproachesin lesssignificant. preparingLLMsforinteractiveusewithexternaltools.More- over,LLMscanengagewithknowledgebeyondtheirtraining Besides findings on benchmarking LLMs’ vulnerability datasets through skillful in-context learning prompts,even reasoning,wealsoconductedapilotstudyin§5.5onusing withoutfine-tuning[22].However,notallin-contextlearning LLM4Vulntotestfourreal-worldprojectsincrowdsourcing promptsareequallyeffectivefortaskslikevulnerabilitydetec- audit bounty programs. We submitted a total of 29 issues tion.Additionally,Weietal.introducedthe“chain-of-thought” to these four projects, respectively, and 14 of these issues promptingmethodology[64],whichenhancesreasoningby wereconfirmedbytheprojectteams,leadingto$3,576inbug breaking down tasks into sequential steps. 
This approach bountybeingawarded.Thediscoveryofthese14zero-day promptsLLMstoaddresseachstepindividually,witheach vulnerabilitiesdemonstratesthepracticalimpactandvalueof stage’soutputinfluencingthenext,fosteringmorelogicaland LLM4Vuln.Wefurtherconductedacasestudyonfourpub- coherentoutputs. liclyavailablevulnerabilities,whichshowsthatLLM4Vuln However,the application ofthese techniques in vulnera- caneffectivelydetectvulnerabilitiesmissedbyexistingtools, bilitydetectionspecificallyremainsanareaofuncertainty.It even when these vulnerabilities do not exactly match the isunclearhowthesemethodscouldbeusedtoimprovepre- knowledgeinthevectordatabaseofLLM4Vuln. cisionorrecallinLLM-basedvulnerabilitydetection.Also, the type of knowledge that could be effectively integrated 2 Background intoin-contextlearningtoboostperformanceinvulnerability detectiontasksneedsfurtherinvestigationandclarification. Generative Pre-training Transformer (GPT) models,exem- plified by the pioneering GPT-3 [48], represent a class of extensive language models trained on diverse text corpora 3 TheLLM4VulnFramework coveringawiderangeofknowledgeacrossvariousdomains. AsindicatedinTable1,therearedifferentvariantsofsuch Inthissection,weintroducethedesignofLLM4Vuln,amod- LargeLanguageModels(LLMs),includingtheGPTseries ularframeworktosupporttheparadigmofLLM-basedvul- from OpenAI, such as GPT-3.5-turbo, GPT-4, and GPT-4- nerability detection. As illustratedin Figure 2,LLM4Vuln turbo.Therearealsoopen-sourceimplementations,likeMix- supportsfourtypesofpluggablecomponentsforevaluating tral[12,34]andLlama2/3[59].Mixtral,developedbyMis- andenhancinganLLMs’vulnerabilityreasoningcapability. 
tralAI,matchesoroutperformstheLlama2familyandsur- ThesecomponentsareKnowledgeRetrieval,Contextsupple- passesGPT-3.5inmostbenchmarks.Mixtral-8x7b-instructis ment,PromptSchemes,andInstructionFollowing.Allcompo- afine-tunedversionofmixtral-8x7b,optimizedforinstruc- nentsarewell-decoupled,allowingforeasyreplacementwith tionfollowing.Llama3,amorerecentandpowerfulmodel, otherimplementations.ForeachcomponentofLLM4Vuln, outperformsGPT-4-turboinsomebenchmarks.CodeLlama- onlyoneimplementationisrequiredforittofunctioneffec- 13B[6,52]fromMetaAIistrainedon13billiontokensof tively. 3Knowledge Context Prompt Instruction Vulnerability Report VectorDB Summarized Knowledge VectorDB Retrieval Supplement Schemes Following Summarized Vuln Report Related Code Functionality Raw Original Output Matched Query Query Mapping V Vec ut lo nr D RB ep w ori tth VR ule nl ea rt ae bd l e Matched Embedding Sum Vm ua lnri zed Code Vuln Report Knowledge Summarize Code Functionality VectorDB with Callees With-CoT Structured Summarized Knowledge Output Figure3:Twotypesofvulnerabilityknowledgeretrieval. Figure2:LLM4Vuln’sfourpluggablecomponentsforevaluat- ingandenhancingLLMs’vulnerabilityreasoningcapability. wecollectoriginalvulnerabilityreportsalongwiththecorre- spondingvulnerablecode.Wecalculatetheirembeddingsand The first component (§3.1) aims to provide LLMs with createavectordatabasecontainingboththeembeddingsof |
up-to-dateknowledgeofvulnerabilities.InLLM4Vuln,we thecodeandtheassociatedvulnerabilityreport.Whenatarget havedesignedtwotypesofvectordatabasesforknowledge codesegmentTCisprovided,itcanbeusedtodirectlysearch retrieval:onestorestheoriginalvulnerabilityreports,andthe thevectordatabaseforthemostsimilarcodesegments.Since othercontainssummarizedknowledgeofvulnerabilities. a code segmentmayinclude bothcode andcomments,the The second component (§3.2) provides context supple- commentscanmatchthevulnerabilityreports,andthecode ments to support the vulnerability detection process. In can correspond to the code mentioned in the report. After LLM4Vuln,weutilizetwotypesofcontextsupplements:re- theretrievalprocess,weuseonlythetextofthevulnerability latedvulnerablecodeextractedfromthevulnerabilityreports report,excludingthecode,asrawvulnerabilityknowledge ofthevulnerablesamplesandthecalleesforallsamples.For forsubsequentanalysis. afairevaluationofLLMsacrossdifferenttests,thispartisim- Inthesecondtype,wefirstuseGPT-4tosummarizethe plementedbystaticanalysistoolsforstablecontexts,which vulnerabilityreports.Thissummaryincludesthefunctionality areprocessedbeforefeedingthecodeintoLLMs. ofthevulnerablecodeandtherootcauseofthevulnerability, Thethirdandfourthcomponentsarebothdiscussedin§3.3. encapsulatedinseveralkeysentences.Thepromptsusedfor ThePromptSchemescomponentaimstoprovideenhanced summarizing the code’s functionality and the key concept instructionstoLLMstoimprovereasoning.Forthispurpose, causing the vulnerability,as well as examples of the sum- wedesigntwopromptschemes:RawandCoT.Specifically, marized knowledge, can be found in Appendix A and C, theRawpromptschemesimplyasksLLMstoreasonabout respectively. 
avulnerabilitywithoutspecialinstructions,whileCoTisa chain-of-thoughtpromptscheme.Besidespromptschemes, With the functionality and knowledge from past vulner- LLM4Vuln also aims to enhance instruction-following for ability reports summarized,as depictedin the rightpartof structuredoutput,whichfacilitatestheautomaticevaluation Figure3,wethencalculatetheembeddingofthisfunction- ofresults. alitypartandcreateavectordatabasethatcontainsonlythe Lastly,forbenchmarkingpurposesonly,wedesignanLLM- functionalityembeddings.WhenatargetcodesegmentTC basedresultannotationcomponentin§3.4. isprovided,weuseGPT-4tofirstsummarizeitsfunctional- ityandthenusethisextractedfunctionalitytoretrievesimi- 3.1 RetrievingVulnerabilityKnowledge larfunctionalitiesinthevectordatabase.Withthematched functionality,wecandirectlyretrievethecorrespondingvul- AlthoughLLMsaretrainedwithextensivecodevulnerability nerabilityknowledgeassummarizedknowledgeforfurther data, they typically have a knowledge cutoff date for pre- analysis. training,as indicatedin Table 1. As a result,LLMs do not haveup-to-datevulnerabilityknowledge,whichisparticularly Withthevectordatabase,wecanprovidetoken-levelsimi- crucialfordetectingdynamicallyevolvinglogicvulnerabili- laritymatchingofknowledgetoLLMstoenhancetheirun- ties,suchasthosefoundinsmartcontracts.Toaddressthis derstandingofvulnerabilities.Theproposedmethodcanalso issue,LLM4Vulnproposestwotypesofknowledgeretrieval beeasilyextendedtoincludeothertypesofknowledgeinnat- methodsforenhancingvulnerabilityknowledgeinLLMs,as urallanguage.Forexample,itcanutilizeothergraph-based illustratedinFigure3. similarity-matchingalgorithmstomatchthecontrolflowor Inthefirsttype,asillustratedintheleftpartofFigure3, dataflowofthecodesegmentwiththeknowledge. 43.2 ContextSupplementation PromptCombination:Knowledge+Output+Scheme AspreviouslyillustratedinFigure2,LLMsmaydetectvul- Knowledge nerabilitiesbasedontheprovidedcodecontextofthetarget Prefix1-LLM’sownknowledge: codeTC. 
3.2 Context Supplementation

As previously illustrated in Figure 2, LLMs may detect vulnerabilities based on the provided code context of the target code TC. Sometimes, the trigger of a vulnerability may be hidden in multiple functions or may require additional context to be detected. Even if the triggering logic is entirely within the given code segment, context may help a large language model better understand the function semantics.
In this paper, we provide two types of context supplements: related vulnerable code extracted from the vulnerability reports of the vulnerable samples, and the callees for all samples. For the first type, we extract all the functions that are mentioned in the vulnerability reports, such as those from Code4Rena and issues from GitHub. For the second type, we extract all the callees for all the samples, which can be used to provide a better understanding of the code. The context supplements are processed before feeding the code into LLMs. For benchmarking purposes only, we use static analysis to ensure the same contexts for the given code across all models, achieving a fair comparison.
In real-world LLM-based vulnerability detection, LLMs can utilize the function calling mechanism [9,53] to assist themselves in retrieving extra context information. For example, we can define a series of function calling APIs, such as getFunctionDefinition, getClassInheritance, and getVariableDefinition, along with a description of the usage of each function, to assist LLMs in actively calling the function when they require the information. More complicated contexts, such as control and data information, still need to be provided by static analysis tools, though.

Prompt Combination: Knowledge + Output + Scheme

Knowledge
Prefix 1 - LLM's own knowledge:
As a large language model, you have been trained with extensive knowledge of vulnerabilities. Based on this past knowledge, please evaluate whether the given smart contract code is vulnerable.
Prefix 2 - Raw knowledge:
Now I provide you with a vulnerability report as follows: {report}. Based on this given vulnerability report, please evaluate whether the given code is vulnerable.
Prefix 3 - Summarized knowledge:
Now I provide you with a vulnerability knowledge that {knowl}. Based on this given vulnerability knowledge, evaluate whether the given code is vulnerable.

Output Result:
In your answer, you should at least include three parts: yes or no, type of vulnerability (answer only one most likely vulnerability type if yes), and the reason for your answer.

Scheme 1 - Raw:
Note that if you need more information, please call the corresponding functions.
Scheme 2 - CoT:
Note that during your reasoning, you should review the given code step by step and finally determine whether it is vulnerable. For example, you can first summarize the functionality of the given code, then analyze whether there is any error that causes the vulnerability. Lastly, provide me with the result.

Figure 4: Two prompt schemes combined with three different knowledge prefixes, yielding six detailed prompts.

3.3 Prompt Schemes & Instruction Following

In this section, we describe the enhancement of prompt schemes and instruction following.
Prompt Schemes. As described in §3.1, there are two types of knowledge provided to LLMs: the original vulnerability reports and the summarized knowledge of vulnerabilities. Additionally, LLMs possess inherent knowledge of vulnerabilities from their training. Therefore, we have designed three prompt schemes corresponding to three types of knowledge usage: LLM's own knowledge, Raw knowledge, and Summarized knowledge. Figure 4 illustrates how these types of knowledge
can be combined with different CoT instructions to form three kinds of prompt schemes:
• In Scheme 1 - Raw, we simply ask LLMs to generate results without any specific instructions. LLMs can use the APIs mentioned in §3.2 to retrieve related code segments. For open-source models, this scheme does not include this particular sentence.
• In Scheme 2 - CoT, we request LLMs to follow chain-of-thought instructions before generating the result. The LLMs should first summarize the functionality implemented by the given code segment, then analyze for any errors that could lead to vulnerabilities, and finally determine the vulnerability status.

Improved Instruction Following. Since all outputs from LLMs are in natural language, they are unstructured and need summarization and annotation to derive the final evaluation results. LLMs have been successfully utilized as evaluators [21,45], and we use GPT-4 to automatically annotate the outputs of LLMs. The function calling API is employed to transform unstructured answers into structured results. Specifically, LLM4Vuln generates structured results based on the answers provided by different prompt schemes and LLMs. These results include two parts: whether the LLM considers the code to be vulnerable, and the rationale for its vulnerability or lack thereof. The specific prompt used for this process can be found in Appendix A. Following this step, we can automatically annotate all the outputs from LLMs.

3.4 LLM-based Result Annotation & Analysis

The components mentioned above are designed to enhance LLMs' vulnerability reasoning capabilities. However, there is also a need for a component that enables automatic evaluation of LLMs' reasoning on vulnerabilities. As such, we have designed LLM4Vuln to perform LLM-based result annotation, which is used to obtain the final evaluation results after individual LLMs generate their raw output.
Based on the raw Yes/No output from individual LLMs and whether the vulnerability type matches, as annotated by GPT-4, LLM4Vuln automatically obtains the following annotation results in terms of true positives, true negatives, false negatives, false positives, and false positive types:
TP (True Positive): The LLM correctly identifies the vulnerability with the correct type.
TN (True Negative): The LLM correctly concludes that the code is not vulnerable.
FN (False Negative): The LLM incorrectly identifies a vulnerable code segment as non-vulnerable.
FP (False Positive): The LLM incorrectly identifies a non-vulnerable code segment as vulnerable.
FP-type (False Positive Type): The LLM identifies a vulnerable code segment as vulnerable, but with an incorrect vulnerability type.
Since FP-type includes both false positives (reporting a non-existent vulnerability) and false negatives (failing to report an existing vulnerability), we calculate the precision and recall of the LLMs' vulnerability detection results as follows:

Precision = TP / (TP + FP + FP-type)    (1)
Recall    = TP / (TP + FN + FP-type)    (2)

Figure 5: The process of automatic annotation by GPT-4.

Table 2: Datasets used in the evaluation.
Dataset                 Samples  Projects  Period
Solidity Knowledge Set  1,013    251       Jan 2021 - Jul 2023
Solidity Testing Set    51+51    11        Aug 2023 - Jan 2024
Java Knowledge Set      77       N/A       N/A
Java Testing Set        46+46    N/A       Jan 2013 - Dec 2022

4 Implementation and Setup

With LLM4Vuln's modular design introduced in §3, we now target a detailed implementation of LLM4Vuln in this section. In this paper, we focus on LLM4Vuln's application to Java programs and smart contracts written in Solidity [1]. The choice of Java is motivated by its popularity and the availability of a large number of vulnerabilities in the CWE database [8] and CVE database [7]. We target smart contracts as the analysis subject in this paper mainly because LLMs have demonstrated greater effectiveness in identifying logic vulnerabilities in natural-language-like Solidity contracts [33,56] than in detecting traditional vulnerabilities in C/C++ programs [42,46]. Moreover, compared to traditional memory corruption vulnerabilities, smart contracts are more security-critical and could directly cause huge financial losses if exploited [71]. Nevertheless, while we believe that the design of LLM4Vuln is generic and extendable to other programming languages, the concrete extension may still require considerable effort, and we leave this implementation extension to future work.

4.1 Data Collection and Testing

§3.1 introduced the methodology by which LLM4Vuln builds a vulnerability knowledge base and enables vector-based knowledge retrieval. In this section, we choose two different programming languages, Solidity and Java, to evaluate the effectiveness of LLMs in vulnerability detection. Java is one of the most widely used programming languages and has a large number of vulnerabilities, most of which are related to features of the programming language, such as unsafe deserialization. In contrast, Solidity is an object-oriented programming language for writing smart contracts on the Ethereum blockchain, and most of its vulnerabilities are related to business logic. The collected data have two parts: the first is for building the knowledge database, and the second is for testing the effectiveness of the LLMs.
For Solidity, we collected all the data from Code4Rena [5], a popular crowdsourcing auditing platform for smart contracts. To build the knowledge database, we used all the high-risk vulnerabilities from its GitHub issues [4] between January 2021 and July 2023, which contain 1,013 vulnerabilities from 251 projects. For the testing set, we used the vulnerabilities identified after July 2023 to avoid data leakage, which contains 51 vulnerable code segments and 51 non-vulnerable code segments from 11 projects. The non-vulnerable code segments are randomly chosen from the same projects that contain vulnerable code segments, and they have similar complexity and lengths to the vulnerable code segments.
For Java, we collected data from CWE and CVE. To build the knowledge database, we used 77 CWEs from the CWE website, which are for software written in Java [8]. Unlike Solidity, since there is no bug report from an auditing platform for Java, we use the knowledge of weaknesses from CWE directly. For each item in CWE, we collect the ID, title, description, mitigation, and code examples to build the knowledge database. Similarly, one database is indexed with code and another with the functional description. For the testing part, we collected 46 CVEs for Java, with both vulnerable and fixed versions, from 2013 to 2022.
To avoid data leakage, for Solidity, all the data in the testing sets were published after the knowledge cutoff date of the LLMs. For Java, since the code segments are from real-world projects prior to 2022, they may be contained in the pre-training dataset of LLMs. To mitigate the risk of data leakage, we used GPT-4 to systematically rewrite the code segments with different function names, variable names, and comments, while strictly maintaining the same semantics; we did not change the statements at all.
For knowledge retrieval, we use FAISS [36] to construct the vector database, and we set top-K to 3, retrieving the three most relevant pieces of knowledge per query. In FAISS, the query is embedded and its dot product is calculated with all the vectors in the database; the top-K vectors with the highest dot products are returned as the result. Besides, to minimize the impact of randomness on the result, the case without knowledge is executed three times, the same as the case with knowledge. As a result, for Solidity, although there are only 51+51 code segments, they generate diverse test cases for different combinations of three knowledge retrievals, two prompt schemes, two context variations, and four different models, amounting to a total of 4,896 (102×3×2×2×4) test cases. For Java, there are a total of 4,416 (92×3×2×2×4) test cases.

4.2 LLMs Evaluated and Their Configurations

For LLMs, we aim to benchmark the most advanced proprietary and open-source models available at the time of our evaluation (between November 2023 and January 2024). Therefore, from the LLMs listed in Table 1, we select gpt-4-1106-preview as the state-of-the-art proprietary model and choose mixtral-7b-instruct and llama3-8b-instruct as the open-source models, based on the background introduced in §2. We also used gpt-3.5-turbo-0125, which is a widely used free commercial model.
We use the OpenAI API to interact with gpt-4-1106-preview and gpt-3.5-turbo-0125, and the Replicate API to interact with mixtral-7b-instruct and llama3-8b-instruct. We adhere to the default model configuration provided by the model providers, as we cannot predict the parameter specifications users might choose in practice. The configurations are as follows:
• GPT-3.5: Temperature: 1, Top-p: 1, Frequency penalty: 0.0, Presence penalty: 0.0.
• GPT-4: Temperature: 1, Top-p: 1, Frequency penalty: 0.0, Presence penalty: 0.0.
• Mixtral: Temperature: 0.7, Top-p: 0.95, Top-k: 50, Frequency penalty: 0.0, Presence penalty: 0.0.
• Llama 3: Temperature: 0.7, Top-p: 0.95, Presence penalty: 0.0.
For simplicity, going forward, we use the above model names to refer to the specific model instances tested in this paper.
For the choice of temperature, we do not want to sacrifice the LLM's inherent creativity for less randomness (e.g., by setting the temperature to 0). Therefore, we use the models' default settings, which are supposed to be widely used, and run each scenario three times — top-3 knowledge retrieved for each test via RAG, or repeated three times without knowledge supply. We could run more times each, but the cost of testing LLMs (e.g., $400 for GPT-4, $40 for GPT-3.5) is high. Furthermore, we argue that potential randomness in individual cases is not an issue for our evaluation. First, as explained in §1, our research objective is not to obtain a precise performance number for individual models; instead, we are concerned only with the relative improvement from LLMs' different assisted capabilities (e.g., knowledge retrieval and context supplementation). Second, as mentioned in §4.1, we have 9,312 diverse test cases (4,656 without considering context variation), which are sufficient to show statistical patterns.
Result Scope. As LLM4Vuln's current implementation specifically targets different programming languages, the findings to be reported in §5 are primarily applicable to Solidity and Java. However, they may not necessarily align with results when testing LLM4Vuln on other programming languages.

5 Evaluation

In this section, we evaluate how LLMs' vulnerability reasoning can be enhanced when combined with the enhancement of other capabilities under the framework of LLM4Vuln. Specifically, we use the experimental setup described in §4 and the standards for calculating True Positives (TP), True Negatives (TN), False Positives (FP), False Negatives (FN), and False Positive-Type (FP-type), as defined in §3.4.
Table 3 presents the overall results, which include the raw results of TP, TN, FP, FN, and FP-type under different combinations of knowledge, context, and prompt schemes, as well as the calculated precision, recall, and F1 score. Based on these results, we perform a correlation analysis and answer five research questions (RQs) from §5.1 to §5.4. Note that the results related to the two open-source models are primarily analyzed in §5.4, as they mainly serve for comparison with GPT models. Lastly, we conduct a pilot study in §5.5 on using LLM4Vuln to test new projects for zero-day vulnerabilities.
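The metric definitions of §3.4 can be checked directly against Table 3. Below is a small sketch applying Equations (1) and (2); the counts are taken from the GPT-4 / Summarized Knowledge / CoT / with-context row for Solidity:

```python
# Precision and Recall as defined in Equations (1) and (2) of Section 3.4,
# where FP-type counts vulnerable segments flagged with the wrong type.
def precision(tp, fp, fp_type):
    return tp / (tp + fp + fp_type)

def recall(tp, fn, fp_type):
    return tp / (tp + fn + fp_type)

def f1(p, r):
    return 2 * p * r / (p + r)

# GPT-4, W/ Summarized Knowledge, CoT, with context (Solidity, Table 3):
tp, fp, fn, fp_type = 33, 88, 45, 75
p = precision(tp, fp, fp_type)   # ~16.84%
r = recall(tp, fn, fp_type)      # ~21.57%
score = f1(p, r)                 # ~18.91%, the best F1 reported in Table 3
```

Because FP-type appears in both denominators, a correct "vulnerable" verdict with the wrong vulnerability type hurts precision and recall simultaneously.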
Table 3: Raw results of TP, FP, TN, FN, and FP-type (abbreviated as "FPt") along with the calculated Precision (abbreviated as "P"), Recall, and F1 score ("F1") under different combinations of knowledge, context, and prompt scheme for Solidity and Java.

Solidity                       Raw: TP  FP  TN  FN FPt      P Recall     F1 | CoT: TP   FP   TN   FN  FPt       P  Recall      F1
GPT-3.5
  W/O Extra Knowledge    C          14 147   6   5 134   4.75   9.15   6.25 |     16△ 152▽   1▽   1△ 136▽   5.26△  10.46△   7.00△
  W/O Extra Knowledge    N          18 137  16   5 130   6.32  11.76   8.22 |     16▽ 153▽   0▽   0△ 137▽   5.23▽  10.46▽   6.97▽
  W/ Original Vuln Report C         26 151   2   6 121   8.72  16.99  11.53 |     17▽ 146△   7△   5△ 131▽   5.78▽  11.11▽   7.61▽
  W/ Original Vuln Report N         18 149   4   3 132   6.02  11.76   7.96 |     17▽ 150▽   3▽   1△ 135▽   5.63▽  11.11▽   7.47▽
  W/ Summarized Knowledge C         10 129  24  32 111   4.00   6.54   4.96 |     22△ 144▽   9▽  12△ 119▽   7.72△  14.38△  10.05△
  W/ Summarized Knowledge N         21 121  32  17 115   8.17  13.73  10.24 |     17▽ 131▽  22▽  15△ 121▽   6.32▽  11.11▽   8.06▽
GPT-4
  W/O Extra Knowledge    C          26 151   2   0 127   8.55  16.99  11.38 |     35△ 147△   6△   5▽ 113△  11.86△  22.88△  15.62△
  W/O Extra Knowledge    N          34 153   0   0 119  11.11  22.22  14.81 |     26▽ 134△  19△   9▽ 118△   9.35▽  16.99▽  12.06▽
  W/ Original Vuln Report C         12 149   4   2 139   4.00   7.84   5.30 |     15△ 146△   7△   3▽ 135△   5.07△   9.80△   6.68△
  W/ Original Vuln Report N         18 146   7   2 133   6.06  11.76   8.00 |     16▽ 139△  14△  10▽ 127△   5.67▽  10.46▽   7.36▽
  W/ Summarized Knowledge C         28 105  48  29  96  12.23  18.30  14.66 |     33△  88△  65△  45▽  75△  16.84△  21.57△  18.91△
  W/ Summarized Knowledge N         29 113  40  34  90  12.50  18.95  15.06 |     29   78△  75△  45▽  79△  15.59△  18.95   17.11△
Mixtral
  W/O Extra Knowledge    C           6  56  97  93  54   5.17   3.92   4.46 |     11△  61▽  92▽  77△  65▽   8.03△   7.19△   7.59△
  W/O Extra Knowledge    N           7  85  68  87  59   4.64   4.58   4.61 |      9△  76△  77△  96▽  48△   6.77△   5.88△   6.29△
  W/ Original Vuln Report C         11  77  76  85  57   7.59   7.19   7.38 |     13△  80▽  73▽  76△  64▽   8.28△   8.50△   8.39△
  W/ Original Vuln Report N         13 104  49  45  95   6.13   8.50   7.12 |     19△  79△  74△  72▽  62△  11.88△  12.42△  12.14△
  W/ Summarized Knowledge C          8  85  68  76  69   4.94   5.23   5.08 |     20△  61△  92△  69△  64△  13.79△  13.07△  13.42△
  W/ Summarized Knowledge N         12  81  72  82  59   7.89   7.84   7.87 |     23△  88▽  65▽  58△  72▽  12.57△  15.03△  13.69△
Llama 3
  W/O Extra Knowledge    C          17 153   0   0 136   5.56  11.11   7.41 |     16▽ 152△   1△   0  137▽   5.25▽  10.46▽   6.99▽
  W/O Extra Knowledge    N          15 153   0   0 138   4.90   9.80   6.54 |     12▽ 152△   1△   0  141▽   3.93▽   7.84▽   5.24▽
  W/ Original Vuln Report C         11 151   2   3 139   3.65   7.19   4.85 |      9▽ 151    2    3  141▽   2.99▽   5.88▽   3.96▽
  W/ Original Vuln Report N         11 152   1   0 142   3.61   7.19   4.80 |      8▽ 150△   3△   0  145▽   2.64▽   5.23▽   3.51▽
  W/ Summarized Knowledge C         19 150   3   2 132   6.31  12.42   8.37 |     22△ 117△  36△  28▽ 103△   9.09△  14.38△  11.14△
  W/ Summarized Knowledge N         21 151   2   6 126   7.05  13.73   9.31 |     23△ 142△  11△   8▽ 122△   8.01△  15.03△  10.45△

Java                           Raw: TP  FP  TN  FN FPt      P Recall     F1 | CoT: TP   FP   TN   FN  FPt       P  Recall      F1
GPT-3.5
  W/O Extra Knowledge    C          28 102  36  32  78  13.46  20.29  16.18 |     27▽ 125▽  13▽  11△ 100▽  10.71▽  19.57▽  13.85▽
  W/O Extra Knowledge    N          34 101  37  27  77  16.04  24.64  19.43 |     26▽ 125▽  13▽  11△ 101▽  10.32▽  18.84▽  13.33▽
  W/ Original Vuln Report C          3 105  33  33 102   1.43   2.17   1.72 |      3   93△  45△  42▽  93△   1.59△   2.17    1.83△
  W/ Original Vuln Report N          8 118  20  23 107   3.43   5.80   4.31 |      2▽ 104△  34△  38▽  98△   0.98▽   1.45▽   1.17▽
  W/ Summarized Knowledge C          3 101  37  41  94   1.52   2.17   1.79 |      6△ 109▽  29▽  25△ 107▽   2.70△   4.35△   3.33△
  W/ Summarized Knowledge N          5 104  34  37  96   2.44   3.62   2.92 |      5  115▽  23▽  19△ 114▽   2.14▽   3.62    2.69▽
GPT-4
  W/O Extra Knowledge    C          30 113  25  35  73  13.89  21.74  16.95 |     41△ 105△  33△  28△  69△  19.07△  29.71△  23.23△
  W/O Extra Knowledge    N          28 112  26  35  75  13.02  20.29  15.86 |     43△ 111△  27△  22△  73△  18.94△  31.16△  23.56△
  W/ Original Vuln Report C          0  18 120 124  14   0.00   0.00      - |      3△  15△ 123△ 125▽  10△  10.71△   2.17△   3.61△
  W/ Original Vuln Report N          3  16 122 120  15   8.82   2.17   3.49 |      3   14△ 124△ 128▽   7△  12.50△   2.17    3.70△
  W/ Summarized Knowledge C          5  29 109  97  36   7.14   3.62   4.81 |      4▽  14△ 124△ 118▽  16△  11.76△   2.90▽   4.65▽
  W/ Summarized Knowledge N          6  39  99 107  25   8.57   4.35   5.77 |      3▽  21△ 117△ 105△  30▽   5.56▽   2.17▽   3.12▽
Mixtral
  W/O Extra Knowledge    C           5  38 100  99  34   6.49   3.62   4.65 |      2▽  32△ 106△ 114▽  22△   3.57▽   1.45▽   2.06▽
  W/O Extra Knowledge    N           6  41  97  97  35   7.32   4.35   5.45 |      6   39△  99△ 105▽  27△   8.33△   4.35    5.71△
  W/ Original Vuln Report C          4  21 117 118  16   9.76   2.90   4.47 |      3▽  20△ 118△ 113△  22▽   6.67▽   2.17▽   3.28▽
  W/ Original Vuln Report N          2  29 109 111  25   3.57   1.45   2.06 |      3△  38▽ 100▽ 114▽  21△   4.84△   2.17△   3.00△
  W/ Summarized Knowledge C          4  42  96  98  36   4.88   2.90   3.64 |     11△  56▽  82▽  91△  36   10.68△   7.97△   9.13△
  W/ Summarized Knowledge N          4  36 102 102  32   5.56   2.90   3.81 |     10△  57▽  81▽  84△  44▽   9.01△   7.25△   8.03△
Llama 3
  W/O Extra Knowledge    C          10 138   0   2 126   3.65   7.25   4.85 |     11△ 129△   9△  16▽ 111△   4.38△   7.97△   5.66△
  W/O Extra Knowledge    N          11 138   0   0 127   3.99   7.97   5.31 |      6▽ 125△  13△  16▽ 116△   2.43▽   4.35▽   3.12▽
  W/ Original Vuln Report C          1 112  26  17 120   0.43   0.72   0.54 |      1   59△  79△  60▽  77△   0.73△   0.72    0.73△
  W/ Original Vuln Report N          1 123  15  16 121   0.41   0.72   0.52 |      1   78△  60△  61▽  76△   0.65△   0.72    0.68△
  W/ Summarized Knowledge C          6 134   4   2 130   2.22   4.35   2.94 |      3▽ 107△  31△  29▽ 106△   1.39▽   2.17▽   1.69▽
  W/ Summarized Knowledge N          9 133   5   2 127   3.35   6.52   4.42 |      5▽ 115△  23△  34▽  99△   2.28▽   3.62▽   2.80▽

1. "C" and "N" represent the results with context and without context, respectively.
2. △ and ▽ indicate better or worse values compared to the Raw column.
3. The best result within each combination of knowledge, context, prompt scheme, and model is highlighted in bold.

5.1 RQ1: Effects of Knowledge Enhancement

In this RQ, we aim to evaluate the effects of the knowledge enhancement mechanisms introduced by LLM4Vuln in §3.1.
A surprising finding is that knowledge enhancement does not always have the same impact across different programming languages, such as Solidity and Java, which we tested. According to Table 3, we observe that vulnerability knowledge significantly improves LLMs' reasoning on Solidity vulnerabilities. In every combination, models with knowledge (either W/ Summarized Knowledge or W/ Original Vuln Report) outperform those without any extra knowledge (W/O Extra Knowledge) in terms of F1 score. However, the opposite is true for Java, where in most combinations, models exhibit superior performance without extra knowledge, except for the combination of Mixtral with the CoT scheme. This phenomenon could be attributed to one major reason: Solidity has significantly more logic vulnerabilities compared to traditional languages such as C/C++ and Java, as described in [70]. Detecting these requires fine-grained vulnerability knowledge to be supplied to LLMs. In contrast, traditional languages have well-organized Common Weakness Enumeration (CWE) vulnerability categories, which have been adopted by LLMs during their pre-training phase. This can be indirectly inferred from the fact that significantly more TPs are identified in Java without extra knowledge compared to Solidity. As a result, supplying additional specific vulnerability knowledge may simply distract LLMs' attention [61] in the Java scenario.
We now take a closer look at the results. For Solidity, summarized knowledge achieves the best performance, exhibiting higher precision (in 7 out of 8 model-prompt combinations) and a higher F1-score (in 7 out of 8 model-prompt combinations) than the model without extra knowledge or with the original vulnerability report. For recall, combinations with summarized knowledge provide the highest recall in 4 out of 8 model-prompt combinations, while the model without extra knowledge achieves the highest recall in 2 out of 8 model-prompt combinations, and the model with the original vulnerability report achieves the highest recall in the remaining two combinations. Both the highest precision and recall are produced by the GPT-4 model. In particular, the highest F1 score of 18.91% was achieved by the combination of GPT-4 W/ Summarized Knowledge, with CoT, and with context.
In all 16 model-prompt-context combinations, compared to those without extra knowledge, the combinations with summarized knowledge have higher precision in 14 of them, higher recall in 13 of them, and a higher F1-score in 15 of them. Summarized knowledge improves TP cases and reduces FP cases, which leads to increased precision, recall, and F1-score. For combinations with the original vulnerability report, the precision is higher than those without extra knowledge in 7 out of 16 model-prompt combinations, the recall is higher in 7 out of 16, and the F1-score is higher in 7 out of 16. The original vulnerability report slightly increases both the TP and FP cases, leading to an unstable impact on precision, recall, and F1-score.
In contrast, for Java, models without extra knowledge perform better than those with knowledge enhancement, showing higher precision, recall, and F1-score (in 6, 7, and 7 out of 8 model-prompt combinations, respectively). In most combinations, the addition of knowledge reduces TP cases and FP cases and introduces more FN cases, which leads to decreased precision, recall, and F1-score.
Regardless of the programming language, we further analyze the differing impacts of summarized knowledge and original vulnerability reports. Most raw knowledge is lengthy and rich in detail. Due to the attention mechanism [61] of LLMs, these abundant details can easily distract an LLM's focus, leading it to reach a positive yet incorrect result, which increases the number of FP and FP-type cases. In contrast, summarized vulnerability knowledge is concise and contains only the information necessary to detect a specific vulnerability. Hence, when provided with summarized knowledge, LLMs tend to be more focused and are less likely to generate false positives, as indicated by the lower number of FPs for models with Summarized Knowledge in Table 3.

Finding 1: Three sub-findings on the effects of knowledge enhancement on LLMs' vulnerability reasoning:
(a) Knowledge enhancement has diverse impacts across different programming languages;
(b) Positive impacts are observed in languages (e.g., Solidity) with more logic vulnerabilities, while negative impacts may occur in traditional languages (e.g., Java) that have well-organized CWE categories;
(c) Regardless of the language, summarized knowledge improves LLMs' vulnerability reasoning more than original vulnerability reports.

5.2 RQ2: Effects of Context Supplementation

In this RQ, we aim to evaluate the effects of context supplementation provided by LLMs' capability of invoking tools, as introduced in §3.2. We present all the results with context supplementation (the "C" rows) and without context supplementation (the "N" rows) for all combinations of knowledge, prompt schemes, and models in Table 3.
According to Table 3, on Solidity and Java, out of 48 model-prompt-knowledge combinations, 27 show higher precision, 17 show higher recall, and 24 show a higher F1 score with context supplementation. This indicates that context supplementation does not have a stable effect on the performance of LLMs in detecting vulnerabilities. When comparing the number of TP and FP cases, we find that in 23 combinations the number of TP is reduced with context supplementation, in 17 it is increased, and in 8 it remains the same. For FP cases, 26 combinations show a decrease, 18 show an increase, and 4 remain the same. This suggests that context supplementation may cause LLMs to provide more conservative results, which leads to a reduction in positive cases. In addition, additional context may not necessarily enhance reasoning abilities and could introduce extraneous information, potentially distracting LLMs from accurately identifying vulnerabilities.

Finding 2: Supplying context may not always enhance LLMs' ability to reason about vulnerabilities. It could also lead to distractions, hindering LLMs from accurately identifying vulnerabilities.

5.3 RQ3: Effects of Different Prompt Schemes

In this RQ, we aim to investigate the effects of different prompt schemes, specifically CoT prompts as introduced in §3.3, on LLMs' vulnerability reasoning.
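As a reminder of what varies between schemes, the six prompts of Figure 4 are composed from one knowledge prefix, the fixed output instruction, and one scheme suffix. The sketch below illustrates that composition; the function and variable names are illustrative, not LLM4Vuln's actual code, and the prompt texts are abbreviated.

```python
# Sketch of how the six prompts in Figure 4 are composed: one knowledge
# prefix x one scheme suffix, plus the fixed output instruction. All
# names are illustrative; prompt texts are abbreviated from Figure 4.
PREFIXES = {
    "own": "As a large language model, you have been trained with extensive "
           "knowledge of vulnerabilities. Please evaluate the given code.",
    "raw": "Now I provide you with a vulnerability report as follows: "
           "{report}. Please evaluate whether the given code is vulnerable.",
    "summarized": "Now I provide you with a vulnerability knowledge that "
                  "{knowl}. Evaluate whether the given code is vulnerable.",
}
OUTPUT = ("In your answer, you should at least include three parts: yes or no, "
          "type of vulnerability, and the reason for your answer.")
SCHEMES = {
    "raw": "Note that if you need more information, please call the "
           "corresponding functions.",
    "cot": "Note that during your reasoning, you should review the given code "
           "step by step and finally determine whether it is vulnerable.",
}

def build_prompt(knowledge, scheme, code, **slots):
    # str.format ignores unused keyword slots, so both {report} and {knowl}
    # can always be passed regardless of the chosen prefix.
    prefix = PREFIXES[knowledge].format(**slots) if slots else PREFIXES[knowledge]
    return "\n\n".join([prefix, OUTPUT, SCHEMES[scheme], f"Code:\n{code}"])

all_prompts = [build_prompt(k, s, "contract C { ... }",
                            report="<report text>", knowl="<knowledge text>")
               for k in PREFIXES for s in SCHEMES]
```

This mirrors the 3 knowledge prefixes x 2 schemes = 6 detailed prompts of Figure 4.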
For different prompt schemes, we observe that prompts with CoT could provide better performance than the original prompt. Among 48 combinations, the prompt with CoT has higher TP in 21 combinations, lower FP in 32 combinations, higher TN in 32 combinations, lower FN in 20 combinations, and lower FP-type in 27 combinations. Although the impact on TP, FN, and FP is not significant, the changes in TN and FP-type already have a significant impact on the performance of the model. In addition, 28 combinations have higher precision, 21 combinations have higher recall, and 26 combinations have a higher F1 score. For recall, there are also 7 combinations with the same recall, and another 21 combinations have a lower recall. This improvement is likely because LLMs generate new tokens based on existing ones, and CoT prompts LLMs to conduct reasoning according to functionality before providing an answer, leading to more reasonable outcomes in vulnerability detection.

Finding 3: The CoT prompt scheme can improve the precision and recall of LLMs' vulnerability reasoning in most scenarios. However, it does not have a consistent impact on recall improvement.

5.4 RQ4: Performance with Different Models

In this RQ, we aim to evaluate the vulnerability reasoning capability of other models and determine whether the chain-of-thought prompt scheme is also effective for them. We have chosen two state-of-the-art open-source models for our evaluation, Mixtral and Llama 3, as well as the commonly used GPT-3.5/GPT-4 models. Their detailed configurations were previously introduced in §4.2.
For different models, most factors have a similar impact on performance. For example, the knowledge supply has a similar impact on the performance of the models. However, the performance of the models differs, which may be attributed to the different training data and the design purposes of the models.
For Mixtral, both the number of TP and FP are lower than those for GPT-3.5, GPT-4, and Llama 3, as shown in Table 3, and it has higher numbers of TN and FN. This leads to a significant decrease in recall and a low F1-score. Across all 12 combinations of prompt schemes, context supplements, and knowledge supplements, the recall is less than 16% for Solidity and less than 8% for Java. The F1-score is below 14% for Solidity and 10% for Java. After knowledge supply, the recall and F1-score of Mixtral are slightly improved.
Compared to GPT-4, Llama 3 exhibits a higher number of positive cases, including TP, FP, and FP-type. In some extreme cases (Llama 3 W/O Extra Knowledge, Raw prompt, without context), Llama 3 even gives 0 TN and 0 FN answers, arguing that all examples are vulnerable. As a result, Llama 3 has very low precision (the lowest being 0.65%) and F1-score (the lowest being 0.68%). Unlike GPT-3.5, GPT-4, and Mixtral, after using CoT prompts, the precision and F1-score of Llama 3 are not significantly improved. In some combinations, the precision and F1-score are even lower than the original values.

Finding 4: Context supplements, knowledge supplements, and prompt schemes have a similar impact on the performance of different models. However, for models with poor capability in vulnerability detection, the impact of these factors may not be significant.

5.5 RQ5: Testing for Zero-Day Vulnerabilities

In this RQ, we deployed LLM4Vuln's approach to the web service of our industry partner, a Web3 security company. As a result, in this RQ, we only discuss the new zero-day vulnerabilities found by LLM4Vuln in Solidity projects. LLM4Vuln was used to audit real-world smart contract projects, and all the outputs, after a brief manual review to filter out factual errors, were submitted to the Secure3 community for confirmation. We submitted a total of 29 issues to four projects — Apebond, GlyphAMM, StakeStone, and Hajime — with 14 issues confirmed by the community. Details of the projects and bounties can be found in Table 4. From these four programs combined, we received a total bounty of $3,576, demonstrating the practicality of LLM4Vuln.
In the rest of this section, we conduct case studies on the confirmed issues in the Apebond project [2], which involves a set of contracts for single-transaction token swaps, liquidity provision, and bond purchases. The audit report for Apebond is available at [3] and does not include the real names of the auditors.
The first case involves an iteration without proper checks for duplicate entries, potentially leading to financial losses. The source code is shown in Figure 6. The function _routerSwapFromPath is designed to execute a token swap operation using the input parameter _uniSwapPath, which is expected to contain a swapping path _uniSwapPath.path indicating the series of token conversions to be executed. If the input array contains duplicate entries, it will result in unnecessary token conversions, and the fees paid for these duplicated conversions will be lost.
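The missing validation in this first case can be illustrated language-agnostically. Below is a Python sketch of the duplicate-entry check that the audited function lacks; the helper names are hypothetical illustrations, not the project's code or the actual Solidity fix.

```python
# Sketch of the duplicate-entry validation missing in case 1: a swap path
# that lists the same token twice forces redundant conversions (and fees).
# Python illustration only; names are hypothetical.
def has_duplicate_hops(path):
    """Return True if any token address appears more than once in the path."""
    return len(set(path)) != len(path)

def validate_swap_path(path):
    if len(path) < 2:
        raise ValueError("need path of >= 2 tokens")
    if has_duplicate_hops(path):
        raise ValueError("duplicate token in swap path")
    return path

clean = ["0xTokenA", "0xTokenB", "0xTokenC"]
looped = ["0xTokenA", "0xTokenB", "0xTokenA", "0xTokenC"]
```

The audited contract performs only the length check (the `require` on line 7 of Figure 6), which corresponds to the first branch above; the duplicate check is the part that is absent.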
6 ) private returns (uint256 amountOut) { 15 return amount * 10 ** (18 - decimals); 7 require(_uniSwapPath.path.length >= 2, "SoulZap: 16 } need path0 of >=2"); 8 address outputToken = _uniSwapPath.path[ _uniSwapPath.path.length - 1]; Figure7:Case2-PrecisionCalculationErrorTypeI. 9 uint256 balanceBefore = _getBalance(IERC20( outputToken), _to); 10 _routerSwap( 11 _uniSwapPath.swapRouter, 1 function pairTokensAndValue( 12 _uniSwapPath.swapType, 2 address token0, 13 _amountIn, 3 address token1, 14 _uniSwapPath.amountOutMin, 4 uint24 fee, 15 _uniSwapPath.path, 5 address uniV3Factory 16 _to, 6 ) internal view returns (uint256 price) { 17 _deadline 7 // More Code 18 ); 8 uint256 sqrtPriceX96; 19 amountOut = _getBalance(IERC20(outputToken), _to) 9 (sqrtPriceX96, , , , , , ) = IUniswapV3Pool( - balanceBefore; tokenPegPair).slot0(); 20 } 10 uint256 token0Decimals = getTokenDecimals(token0); 11 uint256 token1Decimals = getTokenDecimals(token1); 12 if (token1 < token0) price = (2 ** 192) / (( Figure6:Case1-LackofDuplicationCheckforInputArray. sqrtPriceX96) ** 2 / uint256(10 ** ( token0Decimals + 18 - token1Decimals))); 13 else price = ((sqrtPriceX96) ** 2) / ((2 ** 192) / uint256(10 ** (token0Decimals + 18 - token1Decimals))); functionalityofthecodewiththeknowledgethat“Forany 14 } functionalitythatinvolvesprocessinginputarrays,especially in smart contracts or systems managing assets and tokens, Figure8:Case3-PrecisionCalculationErrorTypeII. it’scrucialtoimplementstringentvalidationmechanismsto checkforduplicateentries.”Thefulldetailsofthisknowledge areavailableinKnowledge1inAppendixC. pairTokensAndValue is responsible for calculating Thesecondcase,asshowninFigure7,involvesaprecision the price of tokens using sqrtPriceX96 obtained from a calculationerrorthatcouldleadtofinancialloss.Inlines7 UniswapV3Pool. 
However, this function also erroneously and8,thefunctiongetSwapRatiocalculatestheunderlying assumes that the precision of sqrtPriceX96 is always 18 balancesoftheswaptokens.Thisprocessincludesnormaliza- decimalplaces,whichcouldresultinunplannedbenefitsor tionofprecision,butitincorrectlyassumesthattheprecision loss of funds. The knowledge matched in LLM4Vuln for of the input token is always 18 decimal places. However, thisscenariois“Thevulnerabilityoccurswhencalculating whenobtainingpriceratiosfromotheroracles,theprecision thesquaredrootpriceofapositioninaliquiditypoolwith mayvaryandnotalwaysbe18decimalplaces.Thisincor- tokenshavingdifferentdecimalvalues.”Whiletwomatched rectassumptionaboutprecisioncanleadtomiscalculation knowledge from cases 2 and3 are notidentical- the latter oftheswapratio,potentiallycausingtheusertogaintokens specifically mentions the “squared root price” - they are thatdonotbelongtothemorloseanumberoftokens.With semantically similar and both applicable to describing the LLM4Vuln,wematchedthefunctionalityofthecodewith sametypeofvulnerability.Thefulldetailsofthisknowledge theknowledgethat“Thevulnerabilitystemsfromanincor- areavailableinKnowledge3inAppendixC. recthandlingofdecimalprecisionwhilecalculatingtheprice In Figure 9,the function _zap divides the input amount ratiobetweentwooracleswithdifferentdecimals.”Thefull (amountIn)equallybetweenamount0Inandamount1Inon details of this knowledge are available in Knowledge 2 in lines 9 and 10. However, the actual token reserve ratio in AppendixC. the pool may not be 1:1. 
 1 function _zap(
 2     ZapParams memory zapParams,
 3     SwapPath memory feeSwapPath,
 4     bool takeFee
 5 ) internal whenNotPaused {
 6     // Verify and setup
 7     if (zapParams.liquidityPath.lpType == LPType.V2) {
 8         // some checks
 9         vars.amount0In = zapParams.amountIn / 2;
10         vars.amount1In = zapParams.amountIn / 2;
11     }
12     // More code ......
13     if (zapParams.liquidityPath.lpType == LPType.V2) {
14         (vars.amount0Lp, vars.amount1Lp, ) = IUniswapV2Router02(
15             zapParams.liquidityPath.lpRouter
16         ).addLiquidity(
17             zapParams.token0,
18             zapParams.token1,
19             // ......
20         );
21     }
22     // More code
23 }

Figure 9: Case 4 - Funding Allocation Error.

In Figure 9, the function _zap divides the input amount (amountIn) equally between amount0In and amount1In on lines 9 and 10. However, the actual token reserve ratio in the pool may not be 1:1. This discrepancy can lead to an imbalanced provision of liquidity when addLiquidity is called on line 16, as the token pair might require a different ratio for optimal liquidity provision. This vulnerability can disrupt the equilibrium of liquidity pools and cause traders to lose tokens. The knowledge matched in LLM4Vuln for this case is "The fundamental vulnerability occurs when liquidity providers add liquidity to a pool of two tokens, and the token amounts provided have different proportions as compared to the existing liquidity pool. The contract uses the smaller of these proportions to calculate the amount of LP tokens minted." While this knowledge does not precisely describe the vulnerability, as there is no smaller proportion in this specific case, it can still be useful in detecting the vulnerability when combined with the reasoning ability of LLMs. The full details of this knowledge are available in Knowledge 4 in Appendix C.

From the above four cases, it is evident that knowledge supplements in LLM4Vuln can aid in detecting vulnerabilities, even if the vulnerability does not exactly match the knowledge in the vector database. Among these four cases, only the first can be detected by static analysis tools, while the other three are logic bugs closely related to business logic. With the enhancement of knowledge, LLM4Vuln demonstrates its capability to detect bugs in real-world projects that are not identified by existing tools.

6 Discussion and Future Work

6.1 Lessons Learned

In this section, we summarize the key insights gained from our empirical study in §5 on using LLMs for detecting vulnerabilities in smart contracts, along with some recommendations for enhancing performance.

Selection of Models. Although both open-source and commercial models perform similarly for different types of enhancements when performing vulnerability detection tasks, models that have poor vulnerability detection capabilities may not benefit significantly from these enhancements. To make the most of these enhancements, it is better to choose a model that already has a certain vulnerability detection capability.

Context Supplement. The context supplement introduces more noise while helping LLMs improve their understanding of the code's functionality and business logic. Overly long contexts that are not relevant to the vulnerability can also distract the LLMs, which in turn affects their results. Therefore, when using LLMs for code auditing, only necessary and relevant context should be provided.

Knowledge Enhancement. Enhancing knowledge is crucial for helping LLMs better understand code and vulnerabilities. Matching code with knowledge is complex, and an inadequate matching algorithm could lead to semantic loss. An algorithm capable of accurately preserving or translating the semantic information of mathematical symbols in the code to natural language for matching knowledge is essential.

Prompt Scheme Selection. Chain-of-thought prompt schemes can significantly affect LLMs' performance. When designing prompt schemes, it is important to break down the tasks into simpler sub-tasks that LLMs can easily solve.

6.2 More Accurate Knowledge Retrieval

In this paper, we ensure the supplied knowledge is the same for each case within the same type of knowledge supply scenario for a fair benchmark. Nevertheless, more accurate knowledge retrieval could lead to more powerful LLM-based vulnerability detection. Here we manually verify the accuracy of the knowledge retrieval used in our evaluation and leave more accurate knowledge retrieval to future work.

Specifically, we manually check whether the retrieved knowledge is relevant to the ground truth for the positive cases. For Solidity, we randomly sampled 100 cases and found that 68 of them have retrieved knowledge relevant to the ground truth. This also explains why the precision and recall of the LLMs increase when the knowledge base is attached. In these cases, vulnerabilities related to high-level semantics have a higher probability of being retrieved. In contrast, vulnerabilities related to low-level semantics, such as integer overflow, are less likely to be retrieved. For Java, only 36 out of 100 cases have relevant knowledge, resulting in worse performance of the LLMs with a knowledge base attached. Vulnerabilities like XSS and SQL injection are more likely to be retrieved, while vulnerabilities like SSRF and CSRF are harder to match. Most vulnerabilities for Java are not related to business logic and are harder to match with functionality.

Although the knowledge retrieval part does not always provide the exact same vulnerabilities as the ground truth, which is an extremely hard task, it is still useful for the LLMs to learn from the retrieved knowledge. As claimed in RQ5 and Appendix C, as long as the extracted knowledge has a certain similarity to the actual vulnerability, the LLM is more likely to identify the correct vulnerability.
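The retrieval step discussed above can be sketched as a nearest-neighbor search over embedded knowledge summaries. The snippet below is a toy illustration only: the three-dimensional "embeddings" are hand-made vectors, whereas a real pipeline would use an embedding model and a vector database.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical knowledge entries, each paired with a toy embedding.
knowledge_base = {
    "decimal-precision mismatch in price ratio": [0.9, 0.1, 0.0],
    "duplicate entries in input arrays":         [0.1, 0.9, 0.0],
    "integer overflow in arithmetic":            [0.0, 0.1, 0.9],
}

# Toy embedding of the audited function's summarized functionality.
query = [0.8, 0.2, 0.1]

# Rank knowledge entries by similarity to the query; the top entries
# would be supplied to the LLM alongside the code under audit.
ranked = sorted(knowledge_base,
                key=lambda k: cosine(query, knowledge_base[k]),
                reverse=True)
```

Here the precision-related entry ranks first, mirroring how a semantically similar (even if not identical) knowledge item can still be retrieved and help the LLM.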
6.3 Covering More Programming Languages

Our evaluation of LLMs is on Solidity, the most popular language for smart contracts, and Java, one of the most popular languages for general tasks. Hence, the vulnerability knowledge base and other related tools are designed for Solidity and Java. Nevertheless, LLM4Vuln has the potential to be adaptable to other languages, such as C/C++, Rust, JavaScript, and others. Evaluating LLM4Vuln on other languages is beyond the scope of this paper. In this section, we provide some suggestions for applying LLM4Vuln to other programming languages. Specifically, it is necessary to compile a vulnerability knowledge base for the target language and implement corresponding tool-invoking function calls, such as call graph construction. These function calls should be tailored based on the specific syntax and semantics of each language. The existing set of function calls could also be expanded to include data flow analysis and symbolic execution, applicable across multiple languages.

7 Related Work

LLM-based Vulnerability Detection. Vulnerability detection has been a long-standing problem in software security. Traditional methods rely on predefined rules or fuzz testing, which are usually incomplete and struggle to detect unknown vulnerabilities. In recent years, with the development of code-based LLMs [52, 58], researchers have proposed many methods to detect vulnerabilities using LLMs. For example, Thapa et al. [58] explored how to leverage LLMs for software vulnerability detection. Alqarni et al. [14] introduced a novel model fine-tuned for detecting software vulnerabilities. Tang et al. [57] proposed methods that utilize LLMs to detect software vulnerabilities, combining sequence and graph information to enhance function-level vulnerability detection. Hu et al. [33] proposed a framework that employs LLMs' role-playing to detect vulnerabilities in smart contracts.

Some researchers apply LLMs to fuzz testing for vulnerability detection. For example, Deng et al. [24] introduced TitanFuzz to leverage LLMs to generate input cases for fuzzing deep learning libraries. FuzzGPT [25] is another work that uses LLMs to synthesize unusual programs for fuzzing vulnerabilities. Meng et al. [47] proposed an LLM-based protocol implementation fuzzing method called ChatAFL. Xia et al. [66] presented Fuzz4All, a tool that employs LLMs to generate fuzzing inputs for all kinds of programs.

Additionally, there are also studies combining LLMs with existing static analysis methods. For example, Sun et al. [56] combined GPT with static analysis to detect logic vulnerabilities in smart contracts. Li et al. [42] proposed an automated framework to interface static analysis tools and LLMs. However, these studies focus only on detecting vulnerabilities and do not delve into the reasoning capabilities of LLMs or the impact of these capabilities on vulnerability detection. In this paper, we introduce LLM4Vuln, a unified framework to benchmark and explore the capability of LLMs' reasoning in vulnerability detection.

Benchmarking LLMs' Ability in Vulnerability Detection. Other researchers focus on evaluating the capability of LLMs' reasoning in vulnerability detection. For example, Thapa et al. [58] compared the performance of transformer-based LLMs with RNN-based models in software vulnerability detection. Chen et al. [18] conducted an empirical study to investigate the performance of LLMs in detecting vulnerabilities in smart contracts. David et al. [23] evaluated the performance of LLMs in security audits of 52 DeFi smart contracts. Khare et al. [37] benchmarked the effectiveness of LLMs in Java and C++ vulnerability detection. Gao et al. [31], Ullah et al. [60], and Ding et al. [26] introduced diverse datasets of vulnerabilities and evaluated LLMs, deep learning-based methods, and traditional static analysis models in vulnerability detection. Additionally, LLMs have also been evaluated on the vulnerability repair task [50, 69]. However, these studies focus on the performance of individual LLM instances and their configurations. In contrast, we aim to explore the capability of LLMs' reasoning in vulnerability detection.

Security-oriented LLMs. Beyond being a language model, some studies have introduced vulnerability-specific or security-oriented models. For instance, Lacomis et al. [39] created corpora for renaming decompiled code, while Pal et al. [49] developed a model to predict the real variable names from decompilation outputs. Pei et al. [51] introduced a new mechanism of transformers for learning code semantics in security tasks. Chen et al. [19] improved decompilation results for security analysis, and Ding et al. [27] enhanced model performance with an execution-aware pre-training strategy. Gai et al. [30] focused on detecting anomalous blockchain transactions, and Guthula et al. [32] pre-trained a network security model on unlabeled network packet traces. Jiang et al. [35] and Li et al. [43] both presented LLMs for binary code analysis. Recently, Wang et al. [62] proposed SmartInv for smart contract invariant inference. While all of these studies focus on security-oriented LLMs, our work explores the general capabilities of LLMs, specifically decoupling their vulnerability reasoning from other capabilities.

8 Conclusion

This paper proposes LLM4Vuln, a unified evaluation framework to decouple and enhance LLMs' reasoning about vulnerabilities. By applying LLM4Vuln to 194 Solidity and Java cases in 9,312 scenarios, we gained insights into the effects of knowledge enhancement, context supplementation, and prompt schemes. Furthermore, our pilot study identified 14 zero-day vulnerabilities across four projects, resulting
in a total of $3,576 in bounties. This demonstrates an improved paradigm for LLM-based vulnerability detection via the framework of LLM4Vuln.

Ethical Considerations

In this section, we clarify the potential ethical considerations associated with this research.

Authorized Vulnerability Detection: The vulnerabilities for building the knowledge database and for evaluation are collected from public sources, such as bug bounty programs and GitHub issues. All vulnerability detection activities in RQ5 were conducted exclusively through formal invitations by vulnerability audit platforms. The testing environments were strictly deployed offline, ensuring that no unauthorized penetration testing was carried out. Furthermore, these activities did not interfere with the stability or functionality of any publicly deployed services.

Responsible Disclosure: All identified vulnerabilities were submitted to the respective vendors for confirmation before any public disclosure. The vulnerabilities mentioned in this paper were disclosed after receiving vendor confirmation and after appropriate patches or fixes were deployed, ensuring that no exposed vulnerabilities remained unaddressed prior to publication.

Open Science Statement

Our dataset and source code are open-sourced at https://anonymous.4open.science/r/LLM4Vuln/. There is a detailed instruction on how to reproduce our experiments inside the repository. However, since the knowledge database of Solidity is from our industry partner, we could not open-source it without their permission, and the summarized knowledge database of Solidity is not included in the repository. You can still use the rest of the dataset and source code to reproduce experiments for Java and Solidity vulnerabilities (except for models with summarized knowledge).

Acknowledgements

We thank Miaolei Shi, Dawei Zhou, Xinrong Li, Liwei Tan, and other colleagues at MetaTrust Labs for their help with LLM4Vuln. This research/project is supported by the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-RP-2020-019), the National Research Foundation, Singapore, and the Cyber Security Agency under its National Cybersecurity R&D Programme (NCRP25-P04-TAICeN). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore and Cyber Security Agency of Singapore.

References

[1] Solidity programming language, 2023.
[2] SoulZap v1, 2023.
[3] Apebond_final_secure3_audit_report.pdf, 2024.
[4] Code4rena findings on GitHub, 2024.
[5] Code4rena security audit reports, 2024.
[6] codellama-13b-instruct, 2024.
[7] Common Vulnerabilities and Exposures (CVE), Aug 2024.
[8] CWE view: Weaknesses in software written in Java, Aug 2024.
[9] Function calling - OpenAI API, 2024.
[10] Llama-2-7b-chat-hf-function-calling-v3, 2024.
[11] Mistral-7b-instruct-v0.1-function-calling-v2, 2024.
[12] Mixtral-8x7b-instruct-v0.1, 2024.
[13] OpenAI, 2024.
[14] Mansour Alqarni and Akramul Azim. Low level source code vulnerability detection using advanced BERT language model. In Proc. CANAI, 2022.
[15] Steven Arzt, Siegfried Rasthofer, Christian Fritz, Eric Bodden, Alexandre Bartel, Jacques Klein, Yves Le Traon, Damien Octeau, and Patrick McDaniel. FlowDroid: precise context, flow, field, object-sensitive and lifecycle-aware taint analysis for Android apps. In Proc. ACM PLDI, 2014.
[16] Cristian Cadar, Daniel Dunbar, and Dawson Engler. KLEE: Unassisted and automatic generation of high-coverage tests for complex systems programs. In Proc. USENIX OSDI, 2008.
[17] Saikat Chakraborty, Rahul Krishna, Yangruibo Ding, and Baishakhi Ray. Deep learning based vulnerability detection: Are we there yet? IEEE Trans. Softw. Eng., 2022.
[18] Chong Chen, Jianzhong Su, Jiachi Chen, Yanlin Wang, Tingting Bi, Yanli Wang, Xingwei Lin, Ting Chen, and Zibin Zheng. When ChatGPT meets smart contract vulnerability detection: How far are we?, 2023. arXiv:2309.05520.
[19] Qibin Chen, Jeremy Lacomis, Edward J. Schwartz, Claire Le Goues, Graham Neubig, and Bogdan Vasilescu. Augmenting decompiler output with learned variable names and types. In Proc. USENIX Security, 2022.
[20] Yizheng Chen, Zhoujie Ding, Lamya Alowain, Xinyun Chen, and David Wagner. DiverseVul: A new vulnerable source code dataset for deep learning based vulnerability detection. In Proc. RAID, 2023.
[21] Cheng-Han Chiang and Hung-yi Lee. Can large language models be an alternative to human evaluations? In Proc. ACL, 2023.
[22] Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Shuming Ma, Zhifang Sui, and Furu Wei. Why can GPT learn in-context? Language models implicitly perform gradient descent as meta-optimizers, 2023. arXiv:2212.10559.
[23] Isaac David, Liyi Zhou, Kaihua Qin, Dawn Song, Lorenzo Cavallaro, and Arthur Gervais. Do you still need a manual smart contract audit?, 2023. arXiv:2306.12338.
[24] Yinlin Deng, Chunqiu Steven Xia, Haoran Peng, Chenyuan Yang, and Lingming Zhang. Large language models are zero-shot fuzzers: Fuzzing deep-learning libraries via large language models. In Proc. ACM ISSTA, 2023.
[25] Yinlin Deng, Chunqiu Steven Xia, Chenyuan Yang, Shizhuo Dylan Zhang, Shujing Yang, and Lingming Zhang. Large language models are edge-case generators: Crafting unusual programs for fuzzing deep learning libraries. In Proc. IEEE/ACM ICSE, 2023.
[26] Yangruibo Ding, Yanjun Fu, Omniyyah Ibrahim, Chawin Sitawarin, Xinyun Chen, Basel Alomair, David Wagner, Baishakhi Ray, and Yizheng Chen. Vulnerability detection with code language models: How far are we?, 2024. arXiv:2403.18624.
[27] Yangruibo Ding, Ben Steenhoek, Kexin Pei, Gail Kaiser, Wei Le, and Baishakhi Ray. TRACED: Execution-aware pre-training for source code, 2023. arXiv:2306.07487.
[28] Yuzhou Fang, Daoyuan Wu, Xiao Yi, Shuai Wang, Yufan Chen, Mengjie Chen, Yang Liu, and Lingxiao Jiang. Beyond "protected" and "private": An empirical security analysis of custom function modifiers in smart contracts. In Proc. ACM ISSTA, 2023.
[29] Andrea Fioraldi, Dominik Maier, Heiko Eißfeldt, and Marc Heuse. AFL++: Combining incremental steps of fuzzing research. In Proc. USENIX WOOT 20, 2020.
[30] Yu Gai, Liyi Zhou, Kaihua Qin, Dawn Song, and Arthur Gervais. Blockchain large language models, 2023. arXiv:2304.12749.
[31] Zeyu Gao, Hao Wang, Yuchen Zhou, Wenyu Zhu, and Chao Zhang. How far have we gone in vulnerability detection using large language models, 2023. arXiv:2311.12420.
[32] Satyandra Guthula, Navya Battula, Roman Beltiukov, Wenbo Guo, and Arpit Gupta. netFound: Foundation model for network security, 2023. arXiv:2310.17025.
[33] S. Hu, T. Huang, F. Ilhan, S. Tekin, and L. Liu. Large language model-powered smart contract vulnerability detection: New perspectives. In Proc. IEEE TPS-ISA, 2023.
[34] Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mixtral of experts, 2024. arXiv:2401.04088.
[35] Nan Jiang, Chengxiao Wang, Kevin Liu, Xiangzhe Xu, Lin Tan, and Xiangyu Zhang. Nova+: Generative language models for binaries, 2023. arXiv:2311.13721.
[36] Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. IEEE Trans. Big Data, 2021.
[37] Avishree Khare, Saikat Dutta, Ziyang Li, Alaia Solko-Breslin, Rajeev Alur, and Mayur Naik. Understanding the effectiveness of large language models in detecting security vulnerabilities, 2023. arXiv:2311.16169.
[38] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. In Proc. NeurIPS, 2022.
[39] Jeremy Lacomis, Pengcheng Yin, Edward Schwartz, Miltiadis Allamanis, Claire Le Goues, Graham Neubig, and Bogdan Vasilescu. DIRE: A neural approach to decompiled identifier naming. In Proc. IEEE/ACM ASE, 2019.
[40] Nathan Lambert, Louis Castricato, Leandro von Werra, and Alex Havrilla. Illustrating reinforcement learning from human feedback (RLHF). Hugging Face Blog, 2022. https://huggingface.co/blog/rlhf.
[41] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Proc. NeurIPS, 2020.
[42] Haonan Li, Yu Hao, Yizhuo Zhai, and Zhiyun Qian. Enhancing static analysis for practical bug detection: An LLM-integrated approach. In Proc. ACM on Programming Languages (OOPSLA), 2023.
[43] Xuezixiang Li, Yu Qu, and Heng Yin. PalmTree: Learning an assembly language model for instruction embedding. In Proc. ACM CCS, 2021.
[44] Zhen Li, Deqing Zou, Shouhuai Xu, Xinyu Ou, Hai Jin, Sujuan Wang, Zhijun Deng, and Yuyi Zhong. VulDeePecker: A deep learning-based system for vulnerability detection, 2018. arXiv:1801.01681.
[45] Zongjie Li, Chaozheng Wang, Pingchuan Ma, Daoyuan Wu, Shuai Wang, Cuiyun Gao, and Yang Liu. Split and merge: Aligning position biases in large language model based evaluators, 2023. arXiv:2310.01432.
[46] Puzhuo Liu, Chengnian Sun, Yaowen Zheng, Xuan Feng, Chuan Qin, Yuncheng Wang, Zhi Li, and Limin Sun. Harnessing the power of LLM to support binary taint analysis, 2023. arXiv:2310.08275.
[47] Ruijie Meng, Martin Mirchev, Marcel Bohme, and Abhik Roychoudhury. Large language model guided protocol fuzzing. In Proc. ISOC NDSS, 2023.
[48] Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback, 2022. arXiv:2203.02155.
[49] Kuntal Kumar Pal, Ati Priya Bajaj, Pratyay Banerjee, Audrey Dutcher, Mutsumi Nakamura, Zion Leonahenahe Basque, Himanshu Gupta, Saurabh Arjun Sawant, Ujjwala Anantheswaran, and Yan. "len or index or count, anything but v1": Predicting variable names in decompilation output with transfer learning. In Proc. IEEE Symposium on Security and Privacy (S&P), 2024.
[50] Hammond Pearce, Benjamin Tan, Baleegh Ahmad, Ramesh Karri, and Brendan Dolan-Gavitt. Examining zero-shot vulnerability repair with large language models. In Proc. IEEE Symposium on Security and Privacy (S&P), 2023.
[51] Kexin Pei, Weichen Li, Qirui Jin, Shuyang Liu, Scott Geng, Lorenzo Cavallaro, Junfeng Yang, and Suman Jana. Symmetry-preserving program representations for learning code semantics, 2023. arXiv:2308.03312.
[52] Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. Code Llama: Open foundation models for code, 2023. arXiv:2308.12950.
[53] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools, 2023. arXiv:2302.04761.
[54] Dongdong She, Kexin Pei, Dave Epstein, Junfeng Yang, Baishakhi Ray, and Suman Jana. NEUZZ: Efficient fuzzing with neural program smoothing. In Proc. IEEE Symposium on Security and Privacy (S&P), 2019.
[55] Yan Shoshitaishvili, Ruoyu Wang, Christopher Salls, Nick Stephens, Mario Polino, Andrew Dutcher, John Grosen, Siji Feng, Christophe Hauser, Christopher Kruegel, and Giovanni Vigna. SoK: (State of) the art of war: Offensive techniques in binary analysis. In Proc. IEEE Symposium on Security and Privacy (S&P), 2016.
[56] Yuqiang Sun, Daoyuan Wu, Yue Xue, Han Liu, Haijun Wang, Zhengzi Xu, Xiaofei Xie, and Yang Liu. GPTScan: Detecting logic vulnerabilities in smart contracts by combining GPT with program analysis. In Proc. IEEE/ACM ICSE, 2024.
[57] Wei Tang, Mingwei Tang, Minchao Ban, Ziguo Zhao, and Mingjun Feng. CSGVD: A deep learning approach combining sequence and graph embedding for source code vulnerability detection. J. Syst. Softw., 2023.
[58] Chandra Thapa, Seung Ick Jang, Muhammad Ejaz Ahmed, Seyit Camtepe, Josef Pieprzyk, and Surya Nepal. Transformer-based language models for software vulnerability detection. In Proc. ACM ACSAC, 2022.
[59] Hugo Touvron, Louis Martin, Kevin Stone, et al. Llama 2: Open foundation and fine-tuned chat models, 2023. arXiv:2307.09288.
[60] Saad Ullah, Mingji Han, Saurabh Pujar, Hammond Pearce, Ayse Coskun, and Gianluca Stringhini. Can large language models identify and reason about security vulnerabilities? Not yet, 2023. arXiv:2312.12575.
[61] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2023. arXiv:1706.03762.
[62] S. Wang, K. Pei, and J. Yang. SmartInv: Multimodal learning for smart contract invariant inference. In Proc. IEEE Symposium on Security and Privacy (S&P), 2024.
[63] Tielei Wang, Tao Wei, Guofei Gu, and Wei Zou. TaintScope: A checksum-aware directed fuzzing tool for automatic software vulnerability detection. In Proc. IEEE Symposium on Security and Privacy (S&P), 2010.
[64] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In Proc. NeurIPS, 2022.
[65] Daoyuan Wu, Debin Gao, Robert H. Deng, and Rocky K. C. Chang. When program analysis meets bytecode search: Targeted and efficient inter-procedural analysis of modern Android apps in BackDroid. In Proc. IEEE/IFIP DSN, 2021.
[66] Chunqiu Steven Xia, Matteo Paltenghi, Jia Le Tian, Michael Pradel, and Lingming Zhang. Fuzz4All: Universal fuzzing with large language models, 2024. arXiv:2308.04748.
[67] Xiaojun Xu, Chang Liu, Qian Feng, Heng Yin, Le Song, and Dawn Song. Neural network-based graph embedding for cross-platform binary code similarity detection. In Proc. ACM CCS, 2017.
[68] Xiao Yi, Yuzhou Fang, Daoyuan Wu, and Lingxiao Jiang. BlockScope: Detecting and investigating propagated vulnerabilities in forked blockchain projects. In Proc. ISOC NDSS, 2023.
[69] Lyuye Zhang, Kaixuan Li, Kairan Sun, Daoyuan Wu, Ye Liu, Haoye Tian, and Yang Liu. ACFix: Guiding LLMs with mined common RBAC practices for context-aware repair of access control vulnerabilities in smart contracts, 2024. arXiv:2403.06838.
[70] Zhuo Zhang, Brian Zhang, Wen Xu, and Zhiqiang Lin. Demystifying exploitable bugs in smart contracts. In Proc. IEEE/ACM ICSE, 2023.
[71] L. Zhou, X. Xiong, J. Ernstberger, S. Chaliasos, Z. Wang, Y. Wang, K. Qin, R. Wattenhofer, D. Song, and A. Gervais. SoK: Decentralized finance (DeFi) attacks. In Proc. IEEE Symposium on Security and Privacy (S&P), 2023.
[72] Yaqin Zhou, Shangqing Liu, Jingkai Siow, Xiaoning Du, and Yang Liu. Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. In Proc. NeurIPS, 2019.
[73] Wenyu Zhu, Hao Wang, Yuchen Zhou, Jiaming Wang, Zihan Sha, Zeyu Gao, and Chao Zhang. kTrans: Knowledge-aware transformer for binary code embedding, 2023. arXiv:2308.12659.

Appendix

A Prompt for Summarizing the Functionalities and Root Causes from Vulnerability Reports

Prompt for Summarizing

Summarize Functionalities
Given the following vulnerability description, complete the following task:
1. Describe the functionality implemented in the given code. This should be answered under the section "Functionality:" and written in the imperative mood, e.g., "Calculate the price of a token." Your response should be concise and limited to one paragraph and within 40-50 words.
2. Remember, do not contain any variable or function or expression name in the Functionality Result; focus on the functionality or business logic itself.

Summarize Root Cause
Please provide a comprehensive and clear abstract that identifies the fundamental mechanics behind a specific vulnerability, ensuring that this knowledge can be applied universally to detect similar vulnerabilities across different scenarios. Your abstract should:
1. Avoid mentioning any moderation tools or systems.
2. Exclude specific code references, such as function or variable names, while providing a general yet precise technical description.
3. Use the format: KeyConcept: xxxx, placing the foundational explanation of the vulnerability inside the brackets.
4. Guarantee that one can understand and identify the vulnerability using only the information from the Vulnerable Code and this Key Concept.
5. Strive for clarity and precision in your description, rather than brevity.
6. Break down the vulnerability to its core elements, ensuring all terms are explained and there are no ambiguities.
By following these guidelines, ensure that your abstract remains general and applicable to various contexts, without relying on specific code samples or detailed case-specific information.

B Prompt for Instruction Following and Auto-Annotation

Prompt for Auto-Annotation

Generate Type and Description
I will give you some text generated by another LLM. But the format may be wrong. You must call the report API to report the result.
NowIwill fromimproperhandlingofduplicateentries.Thiscon- give you a ground truth of vulnerability, and a de- ceptappliesbroadlyacrossvariousfunctionalitiesand scriptionwrittenbyanauditor.Youneedtohelpme isessentialformaintainingtheintegrityandsecurityof identifywhetherthedescriptiongivenbytheauditor systemshandlingvaluabledataorassets. contains a vulnerability in the groundtruth. Please reporttheresultusingthefunctioncall. Groundtruth:{GroundTruth} Knowledge 2: Incorrect handling of decimal preci- Description:{Output} sioninpriceratiocalculationsThevulnerabilitystems from a incorrecthandling ofdecimalprecision while ReasonFP calculatingthepriceratiobetweentwooracleswithdif- You are a senior smart contract auditor. I will give ferentdecimals.Thecurrentimplementationassumes youagroundtruthofvulnerabilityandadescription thattheinputpricefeedswillhaveafixeddecimalvalue writtenbyanauditor.Theauditorgivesafalsepos- (e.g.,8decimals)andperformscalculationsaccordingly. itive result. Please help me identify the reason and However,when a price feed with a different number selectonefromtheAvailableOptions.Pleasereport ofdecimalsisprovided,itleadstoinaccurateand,in theresultusingthefunctioncall. somecases,completelyincorrectresultsduetowrong Groundtruth:{GroundTruth} math operations used in handling the decimals. The Description:{Output} coreissueliesintheassumptionthatallpricefeedswill AvailableOptions:{Options} have a fixed number of decimals and the subsequent calculationsperformed.Intheproblematicimplementa- ReasonFNType1 tion,decimalsareaddedandremovedbyafixedamount You are a senior smart contract auditor. I will give basedonthisassumption,leadingtothelossofpreci- youagroundtruthofvulnerabilityandadescription sionwhenadifferentdecimalvalueisused.Thiscan written by an auditor. 
The audit failed to find the impact the validity of the price ratio and have a cas- vulnerabilityinthecode.Pleasehelpmeidentifythe cading effecton othercalculations dependenton this reason and select one from the Available Options. value.Tomitigatethisvulnerability,theimplementation Pleasereporttheresultusingthefunctioncall. shouldproperlyhandlepricefeedswithvaryingdeci- Groundtruth:{GroundTruth} mals.Insteadofassumingafixednumberofdecimals, Description:{Output} thecalculationsshouldadapttotheactualnumberof AvailableOptions:{Options} decimals provided by the price feeds. By modifying the code that calculates the price ratio to account for Options varyingdecimalsandcorrectlyscalingtheintermediate NeedOtherCode:choosethiswhentheauditorneeds values,thevulnerabilitycanbeaddressed,ensuringac- othercodetomakethedecision.NeedDetailAnalysis: curatepriceratiocalculationsregardlessofthenumber choosethiswhentheauditorneedsmoredetailedanal- ofdecimalsintheinputpricefeeds. ysis,suchasdatafloworsymbolicexecution,tomake thedecision.WrongReasoning:choosethiswhenthe auditormadeawrongreasoning.Other:choosethis Knowledge 3: Incorrecthandling oftoken decimals whenthereasonisnoneoftheabove.Youmustcall when calculating the squaredrootprice ratio in a liq- thereportAPItoreporttheresult. uiditypool,leadingtoinflatedvaluesforcertaintoken 17pairs and affecting pricing calculations. The vulnera- bility occurs when calculating the squared root price ofapositioninaliquiditypoolwithtokenshavingdif- |
ferentdecimalvalues.Thepriceratiocalculationdoes not correctly handle the difference in token decimal values.Specifically,theissueoccurswhenthetoken1 decimalisstrictlygreaterthanthetoken0decimal.In suchcases,thecalculatedsquaredrootpricecanbesig- nificantlyinflatedduetothehard-codedconstant(1E9 or10**9)usedfornormalization,whichdoesnottake intoaccountthedifferencebetweenthetokendecimals. Asaconsequenceofthismiscalculation,functionsthat relyonthesquaredrootpriceratio,likegetAmounts- ForLiquidity()andgetTokenPrice(),canreturninflated liquidityamountsandincorrecttokenprices.Thisissue canadverselyaffecttheproperfunctioningoftheliq- uiditypoolandleadtoimprecisecalculationsintoken pricinganddistribution.Tomitigatethisvulnerability,it iscrucialtoaccountforthedifferenceintokendecimals during the calculation of the squared root price ratio. Insteadofusingafixedconstantfornormalization,the function should dynamically calculate normalization factorsbasedonthedifferencebetweenthetokendec- imals.Thisensuresthatthecorrectsquaredrootprice ratioiscalculatedandreturnedforvarioustokenpairs, leadingtomoreaccuratetokenpricesandliquiditycal- culations. Knowledge4: Inaccuratetokenamountcalculationin addingliquidityThefundamentalvulnerabilityoccurs whenliquidityprovidersaddliquiditytoapooloftwo tokens,andthetokenamountsprovidedhavedifferent proportionsascomparedtotheexistingliquiditypool. The contract uses the smaller of these proportions to calculatetheamountofLPtokensminted.Duetothis, therewillbeexcesstokensthatcannotberedeemedfor theamountofLPtokensminted,effectivelydonating theextratokenstothepool,whichwillbesharedamong allliquidityprovidersofthepool.Thisvulnerabilityis caused by the improper calculation of optimal token amounts based on user inputs,pool reserves,and the minimalLPtokensamountspecifiedbytheuser,result- ing in an undesireddiscrepancy in token proportions when providing liquidity. 
To mitigate this issue, it is recommended to enhance the token amount calculation mechanism while adding liquidity to a pool, similar to how it is handled in UniswapV2Router.
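Knowledge 3 above prescribes deriving the normalization from the actual token decimals instead of a hard-coded 1E9 constant. A minimal Python sketch of that decimal-aware scaling (a hypothetical function with illustrative values, not code from the audited contract):

```python
from decimal import Decimal

def sqrt_price_ratio(price0_raw, dec0, price1_raw, dec1):
    """Decimal-aware normalization, as Knowledge 3 recommends: scale each
    raw price by its token's actual decimals before taking the ratio."""
    p0 = Decimal(price0_raw) / Decimal(10) ** dec0   # real price of token0
    p1 = Decimal(price1_raw) / Decimal(10) ** dec1   # real price of token1
    return (p1 / p0).sqrt()

# token0 uses 6 decimals, token1 uses 18; both raw values encode "1.0":
print(sqrt_price_ratio(10**6, 6, 10**18, 18))   # 1
# Taking sqrt(raw1 / raw0) without decimal adjustment would instead
# report 10**6 here: the inflation described above when dec1 > dec0.
```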
arXiv:2401.16947

WGAN-AFL: Seed Generation Augmented Fuzzer with Wasserstein-GAN

Liqun Yang1, Chunan Li1, Yongxin Qiu1, Chaoren Wei1, Jian Yang2*, Hongcheng Guo2, Jinxin Ma3, Zhoujun Li2
1School of Cyber Science and Technology, Beihang University; 2School of Computer Science and Engineering; 3China Information Technology Security Evaluation Center
{lqyang, lcn142857, 19377012, weichaoren, jiaya, hongchengguo, lizj}@buaa.edu.cn; majinxin2003@126.com

Abstract—The importance of addressing security vulnerabilities is indisputable, with software becoming crucial in sectors such as national defense and finance. Consequently, the security issues caused by software vulnerabilities cannot be ignored. Fuzz testing is an automated software testing technology that can detect vulnerabilities in software. However, most previous fuzzers encounter the challenge that fuzzing performance is sensitive to the initial input seeds. In the absence of high-quality initial input seeds, fuzzers may expend significant resources on program path exploration, leading to a substantial decrease in the efficiency of vulnerability detection. To address this issue, we propose WGAN-AFL. By collecting high-quality testcases, we train a generative adversarial network (GAN) to learn their features, thereby obtaining high-quality initial input seeds. To overcome drawbacks like mode collapse and training instability inherent in GANs, we utilize the Wasserstein GAN (WGAN) architecture for training, further enhancing the quality of the generated seeds. Experimental results demonstrate that WGAN-AFL significantly outperforms the original AFL in terms of code coverage, new paths, and vulnerability discovery, demonstrating the effective enhancement of seed quality by WGAN-AFL.

Index Terms—fuzzing, AFL, deep learning, seed generation, Wasserstein generative adversarial network

I. INTRODUCTION

In the 21st century, software plays an irreplaceable role in various critical domains such as national defense, finance and economy [19]. Consequently, the security issues caused by software vulnerabilities cannot be ignored. A high-risk vulnerability has the potential to enable remote manipulation of equipment, thereby precipitating the leakage of sensitive information. In more severe instances, it leads to the complete destabilization of the system, resulting in incalculable losses. A notable example is the 2016 incident where a Japanese satellite disintegrated due to an underlying software malfunction, incurring billions of dollars in losses [33]. Similarly, in 2018 and 2019, two fatal crashes involving Boeing 737 MAX aircraft were attributed to software design defects, resulting in the tragic loss of 346 lives [4]. Therefore, software needs to undergo rigorous testing and inspection to prevent irreversible losses caused by its vulnerabilities.

To effectively ensure software security, we can try to discover potential vulnerabilities before the software is put into use. Fuzz testing, as an automated software testing technology, generates random input that is unexpected by the target. By monitoring the resulting abnormal behavior, such as software crashes, memory errors, etc., we can identify existing vulnerabilities in the software. Fuzz testing effectively reduces the human and time costs required for vulnerability discovery, making it possible for technical personnel who could not otherwise discover vulnerabilities to identify them through fuzz testing. Based on these advantages, fuzz testing technology has been widely used, and it has also improved software security to some extent [24].

AFL (American Fuzzy Lop) is a leading open-source fuzz testing framework based on coverage guidance and is currently the most popular tool in this category. It offers high code coverage, strong vulnerability discovery capabilities, and operational efficiency [22]. AFL uses instrumentation to record code coverage and employs a mutation strategy based on genetic algorithms [26]. By evaluating the discovery of new execution paths and selecting high-quality mutation seeds, AFL reduces the generation of unnecessary testcases, increasing fuzzing efficiency.

However, AFL has a notable limitation: its fuzzing results are sensitive to the initial input seed [15]. The quality of the initial seed determines AFL's effectiveness in covering program paths and detecting hidden vulnerabilities. If the initial seed is of poor quality or does not adhere to the program's syntax structure, AFL must conduct extensive mutation early in the fuzz testing process to pass syntax detection. This consumes significant system resources, leading to inefficient exploration of program paths and wasting fuzz testing resources and time.

To mitigate the sensitivity of AFL to initial input seeds, we propose the integration of generative adversarial networks (GANs) for seed optimization (WGAN-AFL). GANs possess the capability to discern patterns in existing datasets and synthesize new data instances from stochastic inputs. Upon completion of the training phase, the data crafted by the GAN not only mirrors the salient attributes of the source data but also captures a broad spectrum of its influential characteristics. By employing GANs, we can generate refined seed inputs that retain the diversity and complexity necessary for effective fuzz testing, thereby potentially boosting the efficacy and coverage of the AFL testing process.

Considering that GAN is prone to problems such as gradient vanishing and mode collapse during training, which affect the quality of the seeds output by the seed optimization model, we propose to replace GAN with WGAN, which leverages the Wasserstein distance for GAN model optimization. WGAN provides stable gradients to drive the model to convergence during training, improving the quality of the seeds output by the seed optimization model as well as the code coverage and the number of vulnerabilities found.

Our main contributions are summarized as follows:
• We propose an AFL seed generation augmented model based on GAN. By learning from high-quality test cases, the model captures the distribution characteristics of high-quality seeds and generates higher-quality seed sets.
• Furthermore, we substitute GAN with WGAN, which effectively mitigates the gradient vanishing issue present in GANs. This modification enhances the training process of GAN, consequently improving the quality of generated seeds.
• We conduct experiments on commonly used Linux software to verify the correctness and effectiveness of the proposed method. This is achieved through a comparative analysis of code coverage and the number of vulnerabilities discovered.

The rest of this paper is organized as follows: Section II summarizes related work on fuzzing, AFL, and fuzzing optimization based on deep learning approaches. Section III introduces the design of our model in detail. In Section IV, we discuss the details of our experimental setup. In Section V, we give a comprehensive analysis to explain our experimental results. We summarize the advantages of our proposed work and plans for future work in Section VI.

II. PRELIMINARY

A. Fuzzing

Fuzz testing is an automatic software testing technology [19] which aims to automate the discovery of potential vulnerabilities existing in software. This method involves injecting a substantial volume of random, invalid, abnormal, or unintended data into the input data or commands of a program.

AFL is a representative evolutionary greybox fuzzer which follows the main workflow depicted in Fig. 1 [22]. In each testing iteration, AFL chooses seeds from the input queue and employs various mutation strategies, including splicing and bit-flipping, to create sets of testcases. These testcases are then input into the target program, and the results are evaluated. The program's response, such as crashes or the exploration of new code paths, is detected through techniques like instrumentation and memory inspection. Based on these outcomes, the testcases are either added to or removed from the input queue.

B. Generating Adversarial Network

The generative adversarial network (GAN) is a class of machine learning frameworks introduced by Ian Goodfellow in 2014 [14], consisting of two neural network models: the generator and the discriminator. The generator's objective is to produce samples that closely resemble real data, while the discriminator aims to differentiate between samples and ascertain whether they originate from genuine data distributions or are artificially generated by the generator. Throughout the training process of the GAN, the generator undergoes continuous optimization, striving to generate increasingly realistic samples to deceive the discriminator into recognizing them as originating from authentic data distributions. Simultaneously, the discriminator enhances its discriminatory capabilities to avoid being misled by the generator. This iterative optimization process continues until the generator is capable of producing samples that closely resemble real data [8], [43].

The training process of the generative adversarial network can be expressed by the loss function shown in (1), where x denotes the real data, p_data(x) denotes the distribution that the real data obeys, E denotes the mathematical expectation over the corresponding distribution, z denotes the random noise from which data are generated, and p_z(z) denotes its distribution:

min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))]   (1)

From (1), we can see the training objective of a generative adversarial network. Given real data x and the discriminator D(x), the training goal for the discriminator is to maximize the output of D(x) to approach 1. Simultaneously, for the generator function G(z), where z represents random noise,
the training objective is to maximize the output of D(G(z)) to approach 1. The discriminator aims to correctly distinguish between real and generated data, so the training objective for generated data is to minimize the output of D(G(z)) to approach 0.

Fig. 1. AFL workflow.

Fuzz testing fundamentally addresses a search problem within an infinite solution space [20]. Its objective is to identify inputs among all possible ones that can trigger program crashes, typically representing boundary scenarios or uncommon inputs. To guide fuzz testing tools in discovering such inputs, designers commonly approach the problem from two angles: generation and mutation. They integrate techniques like symbolic execution [16], genetic algorithms [25], and other technologies to enhance the exploration capabilities of fuzz testing tools, thereby improving their code coverage and vulnerability discovery capabilities.

C. Wasserstein GAN

GAN may encounter issues such as gradient vanishing and mode collapse during training. The Wasserstein generative adversarial network (WGAN) can mitigate the gradient vanishing problem [1], [2], and thus we use WGAN as the seed optimization model instead of GAN. The Wasserstein distance, as a distance metric for measuring the difference between two probability distributions, takes into account the structural and geometric features between distributions, enabling it to more accurately capture similarities and differences between distributions. Its expression is shown in (2):

W(P_r, P_g) = inf_{γ∼Π(P_r,P_g)} E_{(x,y)∼γ}[‖x − y‖]   (2)

In this expression, Π(P_r, P_g) represents the set of all joint distributions that combine P_r and P_g; P_r and P_g represent the real distribution and the generated distribution, respectively; γ represents one such joint distribution; and x and y represent a real sample and a generated sample, respectively.

This paper uses the Wasserstein distance as a replacement for metrics based on KL divergence or JS divergence in GAN. This approach yields meaningful gradients during gradient descent. By optimizing the Wasserstein distance, the generated distribution P_g can gradually and stably converge towards the real distribution P_r.

To reduce the computational difficulty of calculating the Wasserstein distance, the Kantorovich-Rubinstein duality is used to approximate the calculation of the Wasserstein distance [28], [31]. The expression is shown in (3):

W(P_r, P_g) = (1/K) sup_{‖f‖_L ≤ K} (E_{x∼P_r}[f(x)] − E_{x∼P_g}[f(x)])   (3)

In the expression, K is the Lipschitz constant of the function f. After obtaining the approximate expression for the Wasserstein distance, it is combined with GAN to obtain the loss functions for the generator and discriminator, as shown in (4) and (5) respectively:

Loss_G = −E_{x∼P_g}[f_ω(x)]   (4)

Loss_D = E_{x∼P_g}[f_ω(x)] − E_{x∼P_r}[f_ω(x)]   (5)

By modifying GAN, we replace the loss function of the generator with (4) and the loss of the discriminator with expression (5). We also replace the discriminator network with a critic network. When updating the discriminator parameters, we limit their absolute values to be less than a fixed constant c. We also replace the original optimization algorithm, Adam, with the RMSProp algorithm. The algorithm flow of WGAN is shown in Fig. 2.

Fig. 2. Working process of WGAN. [2]

III. WGAN-AFL

In this section, we describe our methodology and the main aspects of WGAN-AFL in detail. The overall fuzzing framework is depicted in Fig. 3, including the Data Processing Module (DPM), Model Training (MT), and Fuzzing Module (FM).

Fig. 3. WGAN-AFL framework.

A. Dataset Processing Module

Firstly, we define high-quality testcases as those that meet the following criteria [12]:
• testcases that cause program crashes during execution;
• testcases with high code coverage;
• testcases that trigger new code paths during execution, which increase the total code coverage;
• testcases with significantly different execution paths from the original input seed, which tend to trigger new execution paths.

TABLE I
PARAMETERS OF DISCRIMINATOR NETWORK IN WGAN.
Layer (type)                  Output Shape
Fully connected 1 (Linear)    (batch size, 256)
LeakyReLU 1 (LeakyReLU)       (batch size, 256)
Fully connected 2 (Linear)    (batch size, 256)
LeakyReLU 2 (LeakyReLU)       (batch size, 256)
Fully connected 3 (Linear)    (batch size, 1)

TABLE II
PARAMETERS OF GENERATOR NETWORK IN WGAN.
Layer (type)                  Output Shape
Fully connected 1 (Linear)    (batch size, 1024)
LeakyReLU 1 (ReLU)            (batch size, 1024)
Fully connected 2 (Linear)    (batch size, 1024)
LeakyReLU 2 (ReLU)            (batch size, 1024)
Fully connected 3 (Linear)    (batch size, 1024)
LeakyReLU 3 (ReLU)            (batch size, 1024)
Fully connected 4 (Linear)    (batch size, 1024)
LeakyReLU 4 (ReLU)            (batch size, 1024)
Fully connected 5 (Linear)    (batch size, output size)
Tanh (Tanh)                   (batch size, output size)

We employ AFL to perform 72-hour fuzz testing on the software. Throughout the fuzzing process, high-quality mutation seeds are collected and filtered to serve as training data for WGAN.

through the above operations has a size within the range of (0,255). To accelerate neural network convergence, it is
necessary to normalize the tensors by changing the size of each element to within the range of (−1, 1), using (8) for the calculation:

X′ = (X − 128)/128   (8)

In this expression, X represents the tensor before normalization, and X′ represents the tensor after normalization. After completing the normalization operation, this paper uses the Dataset and DataLoader classes in the deep learning framework PyTorch to build high-quality seed sets and initialize data loaders. The data loaders are used to batch-load the tensors into the neural network, improving the training efficiency of the neural network.

Following the capture phase, testcases must undergo preprocessing to align with the input format specifications of the neural network. As the neural network demands tensor inputs, we initially convert each testcase in the filtered dataset into tensor form, resulting in the tensor X:

X = [x_1 x_2 x_3 … x_n]   (6)

In expression (6), x_i represents the i-th byte of the testcase X. As neural networks mandate uniform tensor input lengths, padding operations are necessary on each seed to ensure consistent tensor lengths. Furthermore, to improve training performance, we set the length of the padded tensor to be an integer multiple of 32. Specifically, we first determine the maximum length among the input tensors, denoted as maxlen, and adjust it using (7) to ensure that maxlen is an integer multiple of 32. Finally, we iterate through all the input tensors and pad zeros at the end of each tensor with a length less than maxlen until its length is equal to maxlen.

B. Model Training

The generator and discriminator of the implemented WGAN adopt fully connected networks. The relevant parameters are shown in Tables I and II. The fully connected network structure is simple and easy to implement, requires less training time, and is able to comprehensively learn the testcases' features [29].
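The Section III-A preprocessing in (6)-(8), zero-padding every testcase to a shared multiple-of-32 length and mapping bytes into (−1, 1), can be sketched in plain Python:

```python
def pad_and_normalize(testcases):
    """Sketch of the dataset preprocessing: byte testcases are zero-padded
    to a shared multiple-of-32 length per (7), then every byte is mapped
    from (0,255) into (-1,1) via x' = (x - 128) / 128 per (8)."""
    maxlen = max(len(t) for t in testcases)
    maxlen = maxlen + (32 - maxlen % 32)                       # eq. (7)
    rows = [list(t) + [0] * (maxlen - len(t)) for t in testcases]
    return [[(x - 128) / 128 for x in row] for row in rows]    # eq. (8)

batch = pad_and_normalize([b"FUZZ", b"A" * 40])
print(len(batch[0]))   # 64: eq. (7) lifts the maximum length 40 up to 64
print(batch[0][0])     # -0.453125: byte 'F' (70) maps to (70 - 128) / 128
print(batch[0][-1])    # -1.0: the zero padding maps to the lower bound
```

Note that the literal formula in (7) adds a full 32 even when maxlen is already a multiple of 32, which the sketch reproduces as written.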
The RMSProp optimization algorithm is employed for minimizing the loss function, dynamically adjusting the learning rate to enhance the efficiency of gradient descent. Within the PyTorch framework, the optim.lr_scheduler is utilized to systematically decrease the learning rate, facilitating a steady convergence of WGAN. Simultaneously, to prevent overfitting of the discriminator, a controlled range of noise is introduced to the real data during training, and the training labels are smoothed. This approach encourages the generator to capture the genuine features of high-quality testcases from a broader perspective.

Considering the training characteristics of WGAN, the Wasserstein distance is rendered tractable by employing weight clipping in each training round, where the parameters of the discriminator are constrained to the clip_value. Additionally, the value of n_critic is determined such that the generator updates its parameters only after the discriminator parameters have been updated for n_critic iterations.

Upon convergence of the network, we preserve generator models from various rounds and select those that are capable of producing high-quality testcases through manual validation. In the fuzzing phase, the diversity of input seeds can be enriched by sampling from these generators that output high-quality testcases.

maxlen = maxlen + (32 − maxlen % 32)   (7)

The above processing transforms the training samples into equally-sized tensors that can be input into the neural network for training. After completing the data initialization, the dataset can be constructed. Each element in the tensors constructed

TABLE III
EXPERIMENTAL ENVIRONMENT.
Experimental Environment      Configuration Information
Operation System              Ubuntu 20.04 Server
Random access memory (RAM)    4G
CPU                           Intel(R) Gold 6133 CPU @ 3.0GHz
GPU                           NVIDIA Tesla T4
AFL Version                   2.57b
Compilation Environment       GCC-9.3.0
Python                        3.10
PyTorch                       2.1.2
CUDA                          11.8

information will guide the fuzz testing process based on the feedback during testing. Additionally, compared to the instrumentation mode of QEMU, source code instrumentation enhances the performance of fuzz testing and prevents unnecessary waste of system resources [11].

To answer the research questions raised above, we design

C.
Fuzzing Module

the following three sets of experiments:
• AFL group: This is the control group that does not use any optimization method. It only uses ordinary seed inputs to conduct fuzz testing experiments with the original AFL tool.
• GAN-AFL group: This group uses a seed optimization model based on GAN to optimize the seed inputs of AFL. GAN has the same model architecture and parameters as WGAN.
• WGAN-AFL group: This group uses a seed optimization model based on WGAN to optimize the seed inputs of AFL.

With the enhanced quality of initial input seeds, the FM module's execution efficiency is further elevated, achieving higher code coverage in a shorter time. This improvement facilitates the easier triggering of software crashes, thereby enhancing vulnerability mining capabilities. Testcases that induce software crashes during testing will be preserved, enabling further analysis by developers to pinpoint and address software vulnerabilities.

IV. EVALUATION

In this section, we evaluate WGAN-AFL's bug-finding performance and achieved code coverage with respect to the original AFL. Specifically, we answer the following three research questions:
• RQ1. Can WGAN-AFL find more bugs?
• RQ2. Can WGAN-AFL achieve higher code coverage and find more paths?
• RQ3. Does WGAN perform better than GAN in seed optimization?

A. Experiment Setup

In this paper, we conduct experiments on the Ubuntu 20.04 Server operating system with the specific configuration shown in Table III. The experimental subject is the commonly used toolset Binutils under the Linux system, which includes commonly used programs such as readelf, nm, and objdump, with version V2.25.

We employ AFL-GCC for code instrumentation on the target programs in the experiment, enabling the recording of code coverage throughout the testing process. This recorded

B. Training Results

Utilizing the PyTorch framework, we trained the generative adversarial network (GAN) and the Wasserstein generative adversarial network (WGAN). The loss variation during model training is depicted in Fig. 4, while the time consumed by model training is illustrated in Fig. 5.

As the number of epochs increases, both GAN and WGAN tend to converge, allowing the generator to grasp the features of high-quality testcases through adversarial training with the discriminator. Since fully connected networks serve as the primary model for both GAN and WGAN, the training time is relatively short, typically ranging from 5 to 8 minutes. Notably, WGAN exhibits faster training compared to GAN, attributed to a reduction in the number of gradient computations required in each round of training.

C. Fuzzing Results

An 8-hour fuzz testing experiment is conducted on four commonly used software applications (readelf, nm, objdump, and tcpdump) using AFL, GAN-AFL, and WGAN-AFL. Throughout the testing process, we recorded the code coverage and the number of discovered crashes [17].

Fig. 4. Variation of loss values during model training: training loss of GAN (left) and training loss of WGAN (right).
Fig. 5. Training time of GAN and WGAN.

TABLE IV
FUZZING PERFORMANCE OF AFL, GAN-AFL, WGAN-AFL ON DIFFERENT SOFTWARE.
Fuzzer      Metrics         readelf   nm      objdump   tcpdump
AFL         Code Coverage   35.5%     25.9%   31.6%     16.4%
            New Paths       3049      1203    868       1285
            Crashes         0         96      3         0
GAN-AFL     Code Coverage   36.4%     37.7%   33.4%     18.1%
            New Paths       3120      1926    917       1432
            Crashes         0         208     19        0
WGAN-AFL    Code Coverage   41.2%     38.9%   38.3%     17.0%
            New Paths       3533      1880    1052      1322
            Crashes         0         274     21        0

1) Code Coverage: In terms of code coverage, as depicted in Table IV, WGAN-AFL exhibits the best overall performance. It achieves the highest code coverage on readelf, objdump, and nm, and the second-highest code coverage on tcpdump. The average coverage reaches 33.85%, representing a 23.8% average increase compared to the original AFL. This validates the superiority of the seed enhancement method based on WGAN.

GAN-AFL exhibits the second-best overall performance; in the experiments with the four sets of software, it achieves higher code coverage than the original AFL. The average code coverage reaches 31.4%, representing a 14.8% average increase compared to the original AFL. This indicates that GAN plays a certain role in promoting the growth of code coverage.

2) New Paths: In terms of new paths, WGAN-AFL also demonstrates the best performance. On readelf and nm, WGAN-AFL exhibits a significantly higher number of program path gains compared to GAN-AFL. However, on objdump and tcpdump, the program path gains for WGAN-AFL are slightly lower than for GAN-AFL. The average number of discovered paths across the four sets of software testing reaches 1946.75, representing a 21.6% average increase compared to AFL. This underscores the excellent expansibility of seeds generated by WGAN.

GAN-AFL also performs well, with a superior number of discovered paths compared to AFL in all four sets of software. The average number of discovered paths reaches 1848.75, representing a 15.4% average increase compared to AFL. This validates the role of GAN in seed optimization.

3) Crashes: In terms of vulnerability detection, due to the robust security features of readelf and tcpdump, WGAN-AFL was unable to discover any existing vulnerabilities there. However, in the testing of nm and objdump, WGAN-AFL demonstrated a significant advantage, uncovering the highest number of vulnerabilities among the three groups. Specifically, it discovered 274 vulnerabilities for nm and 21 for objdump. This represents a total growth of 198% compared to AFL.

B. Mutation-based Fuzzers
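The averages and percentage gains quoted in the Fuzzing Results discussion can be reproduced directly from the per-target numbers in Table IV:

```python
# Per-target results from Table IV (readelf, nm, objdump, tcpdump).
coverage = {
    "AFL":      [35.5, 25.9, 31.6, 16.4],
    "GAN-AFL":  [36.4, 37.7, 33.4, 18.1],
    "WGAN-AFL": [41.2, 38.9, 38.3, 17.0],
}
new_paths = {"AFL":      [3049, 1203, 868, 1285],
             "WGAN-AFL": [3533, 1880, 1052, 1322]}

avg = lambda xs: sum(xs) / len(xs)

print(round(avg(coverage["WGAN-AFL"]), 2))   # 33.85: average coverage
print(round(avg(coverage["GAN-AFL"]), 2))    # 31.4
# Relative gains over AFL's 27.35% average coverage:
print(round(avg(coverage["WGAN-AFL"]) / avg(coverage["AFL"]) - 1, 3))   # 0.238
print(avg(new_paths["WGAN-AFL"]))            # 1946.75 new paths on average
print(round(avg(new_paths["WGAN-AFL"]) / avg(new_paths["AFL"]) - 1, 3)) # 0.216
```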
Mutation-based fuzzers create a large number of testcases by applying a series of mutation rules to the seeds, and input them into the target program for fuzzing [10], [45]. AFL [22] stands out as a representative mutation-based fuzzer. It captures the runtime code coverage of the program through code instrumentation, and subsequently employs genetic algorithms to identify valuable test samples for mutation. AFL utilizes code coverage bootstrapping to enhance both code coverage and vulnerability mining capabilities. Widely adopted by security professionals for software testing, AFL has become a prevalent tool in the field. Moreover, some researchers [44] utilize dynamic taint analysis to trace the propagation path of data throughout program execution. This approach enables the identification of connections between potentially sensitive data and user inputs, allowing for the detection of data processing operations that may give rise to security vulnerabilities. Then, the symbolic execution method within white-box testing [5] is used to identify dependencies among different input locations.

Similarly, GAN-AFL did not discover any vulnerabilities in readelf and tcpdump. However, in nm and objdump, the total number of vulnerabilities detected by GAN-AFL reached 228, representing a 130% increase compared to AFL.

In summary, WGAN-AFL demonstrates the best overall performance, with GAN-AFL surpassing AFL. This indicates that WGAN-AFL produces seeds of the highest quality, resulting in a more diverse set of test cases during the mutation phase. These diverse test cases are more likely to explore deep program paths, thereby increasing the likelihood of triggering software crashes. While GAN-AFL generates seeds of slightly lower quality compared to WGAN-AFL, it still possesses the capability to output excellent seeds.

This indicates that WGAN exhibits the strongest learning capability, thoroughly capturing the features of high-quality test samples and minimizing the distance between the generated sample distribution and the real sample distribution. On the other hand, GAN, constrained by issues like gradient vanishing and mode collapse, learns only partial features of the high-quality test samples. In the later stages of training, due to inappropriate distance metrics, the generator's ability to acquire effective learning gradients approaches zero or becomes overly conservative, leading to a tendency toward fixed-pattern outputs and a lack of sufficiently diverse seed generation.

V. RELATED WORK

In this section, we discuss fuzzing techniques that are based on generation and mutation. Specifically, we explore fuzzers that leverage machine learning approaches.

A. Generation-based Fuzzers

Generation-based fuzzers focus on creating inputs systematically, often using predefined models or templates [6], [9], [17], [34], [36], [38], [39], [42]. They are suitable for fuzzing programs that require highly structured inputs, like interpreters and compilers [21]. To make good use of them, users need to provide a set of syntactic specifications and generation rules. CSmith [41] hard-codes the language specification and is able to generate valid C fragment code for testing C compilers. This language specification can be defined by users, including limits on program size, the number of variables, and the depth of control flow structures. Further, Skyfire [32], based on a corpus and probabilistic context-sensitive grammar (PCSG), learns and deduces syntactic and semantic rules from legitimate inputs through a grammar tree, imposes appropriate heuristics and reorganization rules, and ultimately produces high-quality inputs that can pass the syntactic-semantic check. NAUTILUS [3] combines grammar rules and code coverage feedback to guide the generation of test cases. It utilizes context-free grammar to deconstruct high-quality test cases, forming a rich corpus.

C. Machine learning for fuzzers

The integration of machine learning has significantly enhanced the performance of fuzz testing tools. There is now a considerable number of tools leveraging machine learning methods to enhance vulnerability discovery capabilities [7]. The original AFL is extended [27] by incorporating long short-term memory (LSTM) networks and sequence-to-sequence (seq2seq) models [18], [23], [35], [37], [40]. These additions aim to learn the relationship between testcases and code path exploration, resulting in improved quality of mutation locations within input test cases. Previous work leverages statistical machine learning [13] based on recurrent neural networks to automate the training of testcase generation. It selectively utilized inputs generated by the long short-term memory network model, employing a sampling strategy to achieve a balanced generation of both format-correct and format-incorrect inputs. This approach facilitates the exploration of new program states. NEUZZ [30] is proposed to use gradient bootstrapping techniques and the smoothing capabilities of neural networks to incrementally learn an approximation of real-world program behavior.

VI. CONCLUSION

We propose WGAN-AFL, an improved method to address the problem that fuzz testing tools are sensitive to initial input seeds. Through the collection of high-quality test samples, we construct a generative adversarial network to learn their features and derive a model capable of generating high-quality initial seeds. Since generative adversarial networks suffer from training instability as well as mode collapse, we use WGAN to alleviate this deficiency and further improve the quality of the output seeds. We use WGAN-AFL to conduct experiments on commonly used software under Linux, and the results show that WGAN-AFL is effective relative to the original AFL in terms of code coverage and the number
of vulnerabilities found.

REFERENCES

[1] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. In ICLR 2017, 2017.
[2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein generative adversarial networks. In ICML 2017, pages 214–223, 2017.
[3] C. Aschermann, T. Frassetto, T. Holz, P. Jauernig, A.-R. Sadeghi, and D. Teuchert. NAUTILUS: Fishing for deep bugs with grammars. In NDSS, 2019.
[4] G. Avetisov. Malware at 30,000 feet - what the 737 MAX says about the state of airplane software security. [EB/OL].
[5] S. K. Cha, M. Woo, and D. Brumley. Program-adaptive mutational fuzzing. In SP 2015, pages 725–741, 2015.
[6] L. Chai, D. Xiao, Z. Yan, J. Yang, L. Yang, Q. Zhang, Y. Cao, and Z. Li. QURG: Question rewriting guided context-dependent text-to-SQL semantic parsing. In PRICAI 2023, pages 275–286, 2023.
[7] P. Chen and H. Chen. Angora: Efficient fuzzing by principled search. In 2018 IEEE Symposium on Security and Privacy (SP), pages 711–725. IEEE, 2018.
[8] A. Creswell, T. White, V. Dumoulin, K. Arulkumaran, B. Sengupta, and A. A. Bharath. Generative adversarial networks: An overview. IEEE Signal Process. Mag., 35(1):53–65, 2018.
[9] Y. Deng, C. S. Xia, H. Peng, C. Yang, and L. Zhang. Large language models are zero-shot fuzzers: Fuzzing deep-learning libraries via large language models. In ISSTA, pages 423–435, 2023.
[10] Y. Deng, C. S. Xia, C. Yang, S. D. Zhang, S. Yang, and L. Zhang. Large language models are edge-case fuzzers: Testing deep learning libraries via FuzzGPT. CoRR, abs/2304.02014, 2023.
[11] A. Fioraldi, D. C. Maier, H. Eißfeldt, and M. Heuse. AFL++: Combining incremental steps of fuzzing research. In WOOT 2020, 2020.
[12] S. Gan, C. Zhang, X. Qin, X. Tu, K. Li, Z. Pei, and Z. Chen. Path sensitive fuzzing for native applications. IEEE Trans. Dependable Secur. Comput., 19(3):1544–1561, 2022.
[13] P. Godefroid, H. Peleg, and R. Singh. Learn&Fuzz: Machine learning for input fuzzing. In ASE 2017, pages 50–59, 2017.
[14] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio. Generative adversarial networks. Commun. ACM, 63(11):139–144, 2020.
[15] A. Herrera, H. Gunadi, S. Magrath, M. Norrish, M. Payer, and A. L. Hosking. Seed selection for successful fuzzing. In ISSTA 2021, pages 230–243, 2021.
[16] J. C. King. Symbolic execution and program testing. Communications of the ACM, 19(7):385–394, 1976.
[17] G. Klees, A. Ruef, B. Cooper, S. Wei, and M. Hicks. Evaluating fuzz testing. In CCS 2018, pages 2123–2138, 2018.
[18] G. Klees, A. Ruef, B. Cooper, S. Wei, and M. Hicks. Evaluating fuzz testing. In CCS 2018, pages 2123–2138, 2018.
[19] J. Li, B. Zhao, and C. Zhang. Fuzzing: A survey. Cybersecur., 1(1):6, 2018.
[20] J. Liang, M. Wang, Y. Chen, Y. Jiang, and R. Zhang. Fuzz testing in practice: Obstacles and solutions. In 2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER), pages 562–566, 2018.
[21] X. Liu, X. Li, R. Prajapati, and D. Wu. DeepFuzz: Automatic generation of syntax valid C programs for fuzz testing. In AAAI 2019, pages 1044–1051, 2019.
[22] M. Zalewski. American Fuzzy Lop. [EB/OL]. http://lcamtuf.coredump.cx/afl Accessed January 29, 2024.
[23] S. Ma, J. Yang, H. Huang, Z. Chi, L. Dong, D. Zhang, H. H. Awadalla, A. Muzio, A. Eriguchi, S. Singhal, X. Song, A. Menezes, and F. Wei. XLM-T: Scaling up multilingual machine translation with pretrained cross-lingual transformer encoders. CoRR, abs/2012.15547, 2020.
[24] V. J. M. Manès, H. Han, C. Han, S. K. Cha, M. Egele, E. J. Schwartz, and M. Woo. The art, science, and engineering of fuzzing: A survey. IEEE Trans. Software Eng., 47(11):2312–2331, 2021.
[25] T. V. Mathew. Genetic algorithm. Report submitted at IIT Bombay, page 53, 2012.
[26] S. Nagy, A. Nguyen-Tuong, J. D. Hiser, J. W. Davidson, and M. Hicks. Breaking through binaries: Compiler-quality instrumentation for better binary-only fuzzing. In USENIX Security 2021, pages 1683–1700, 2021.
[27] M. Rajpal, W. Blum, and R. Singh. Not all bytes are equal: Neural byte sieve for fuzzing. CoRR, abs/1711.04596, 2017.
[28] L. Rüschendorf. The Wasserstein distance and approximation theorems. Probability Theory and Related Fields, 70(1):117–129, 1985.
[29] A. G. Schwing and R. Urtasun. Fully connected deep structured networks. CoRR, abs/1503.02351, 2015.
[30] D. She, K. Pei, D. Epstein, J. Yang, B. Ray, and S. Jana. NEUZZ: Efficient fuzzing with neural program smoothing. In 2019 IEEE Symposium on Security and Privacy (SP), pages 803–817, 2019.
[31] S. Vallender. Calculation of the Wasserstein distance between probability distributions on the line. Theory of Probability & Its Applications, 18(4):784–786, 1974.
[32] J. Wang, B. Chen, L. Wei, and Y. Liu. Skyfire: Data-driven seed generation for fuzzing. In SP 2017, pages 579–594, 2017.
[33] A. Witze. Software error doomed Japanese Hitomi spacecraft. [EB/OL]. https://www.scientificamerican.com/article/software-error-doomed-japanese-hitomi-spacecraft/ Accessed January 29, 2024.
[34] C. Yang, Y. Deng, R. Lu, J. Yao, J. Liu, R. Jabbarvand, and L. Zhang. White-box compiler fuzzing empowered by large language models. CoRR, abs/2310.15991, 2023.
[35] J. Yang, S. Ma, L. Dong, S. Huang, H. Huang, Y. Yin, D. Zhang, L. Yang, F. Wei, and Z. Li. GanLM: Encoder-decoder pre-training with an auxiliary discriminator. In ACL 2023, pages 9394–9412. Association for Computational Linguistics, 2023.
[36] J. Yang, S. Ma, D. Zhang, Z. Li, and M. Zhou. Improving neural machine translation with soft template prediction. In ACL 2020, pages 5979–5989, 2020.
[37] J. Yang, S. Ma, D. Zhang, J. Wan, Z. Li, and M. Zhou. Smart-start decoding for neural machine translation. In NAACL 2021, pages 3982–3988, 2021.
[38] J. Yang, S. Ma, D. Zhang, S. Wu, Z. Li, and M. Zhou. Alternating language modeling for cross-lingual pre-training. In AAAI 2020, pages 9386–9393, 2020.
[39] J. Yang, J. Wan, S. Ma, H. Huang, D. Zhang, Y. Yu, Z. Li, and F. Wei. Learning to select relevant knowledge for neural machine translation. In NLPCC 2021, pages 79–91, 2021.
[40] J. Yang, Y. Yin, S. Ma, H. Huang, D. Zhang, Z. Li, and F. Wei. Multilingual agreement for multilingual neural machine translation. In ACL 2021, pages 233–239, 2021.
[41] X. Yang, Y. Chen, E. Eide, and J. Regehr. Finding and understanding bugs in C compilers. In PLDI 2011, pages 283–294, 2011.
[42] Z. Yang, W. Wu, J. Yang, C. Xu, and Z. Li. Low-resource response generation with template prior. In EMNLP 2019, pages 1886–1897, 2019.
[43] Y. Zhai, J. Yang, Z. Wang, L. He, L. Yang, and Z. Li. CDGA: A GAN-based controllable domain generation algorithm. In TrustCom 2022, pages 352–360, 2022.
[44] Y. Zhang, W. Huo, K. Jian, J. Shi, H. Lu, L. Liu, C. Wang, D. Sun, C. Zhang, and B. Liu. SRFuzzer: An automatic fuzzing framework for physical SOHO router devices to discover multi-type vulnerabilities. In ACSAC 2019, pages 544–556, 2019.
[45] X. Zhu, S. Wen, S. Camtepe, and Y. Xiang. Fuzzing: A survey for roadmap. ACM Comput. Surv., 54(11s):230:1–230:36, 2022.
arXiv:2401.17010v5 [cs.CR] 27 Jul 2024

Published as a conference paper at ICOMP 2024

FINETUNING LARGE LANGUAGE MODELS FOR VULNERABILITY DETECTION

Aleksei Shestov, Rodion Levichev, Ravil Mussabayev, Evgeny Maslov, Anton Cheshkov & Pavel Zadorozhny

Aleksei Shestov∗: SberAI Lab, Moscow 117997, Russia. AMShestov@sberbank.ru, shestovmsu@gmail.com
Rodion Levichev: Huawei Russian Research Institute, Moscow 121099, Russia. rodion.levichev@huawei-partners.com
Ravil Mussabayev†: Satbayev University, Almaty 121099, Kazakhstan. r.mussabayev@satbayev.university
Evgeny Maslov: Huawei Russian Research Institute, Moscow 121099, Russia. evgeny.maslov@huawei.com
Anton Cheshkov: Huawei Russian Research Institute, Moscow 121099, Russia. anton@huawei.com
Pavel Zadorozhny: Huawei Russian Research Institute, Moscow 121099, Russia. pavel.zadorozhny@huawei-partners.com

ABSTRACT

This paper presents the results of finetuning large language models (LLMs) for the task of detecting vulnerabilities in Java source code. We leverage WizardCoder, a recent improvement of the state-of-the-art LLM StarCoder, and adapt it for vulnerability detection through further finetuning. To accelerate training, we modify WizardCoder's training procedure; we also investigate optimal training regimes. For the imbalanced dataset with many more negative examples than positive, we also explore different techniques to improve classification performance. The finetuned WizardCoder model achieves improvements in ROC AUC and F1 measures on balanced and imbalanced vulnerability datasets over a CodeBERT-like model, demonstrating the effectiveness of adapting pretrained LLMs for vulnerability detection in source code. The key contributions are finetuning the state-of-the-art code LLM, WizardCoder, increasing its training speed without performance harm, optimizing the training procedure and regimes, handling class imbalance, and improving performance on difficult vulnerability detection datasets. This demonstrates the potential for transfer learning by finetuning large pretrained language models for specialized source code analysis tasks.
1 INTRODUCTION

Detecting vulnerabilities in source code is an important problem in software engineering. This project explores finetuning of LLMs for binary classification of Java functions as vulnerable or not. The goals are the following:

• Investigate the possibility of applying the new class of big LLM models to the task of vulnerability detection;
• Improve over previous results of CodeBERT-style models by using big LLM models;
• Investigate whether the encountered performance limit is due to the limited capacity of CodeBERT-like models.

∗The work was done while the author was at the Huawei Russian Research Institute
†The work was done while the author was at the Huawei Russian Research Institute

Transformer models pretrained on large code corpora, like CodeBERT Feng et al. (2020) and ContraBERT Liu et al. (2023a), have shown promising results on many tasks of code understanding. LineVul Fu & Tantithamthavorn (2022), a model built on CodeBERT, achieved state-of-the-art results on the vulnerability detection task, surpassing graph network approaches like Reveal Chakraborty et al. (2022). Recent LLMs, which have billions of parameters, such as StarCoder Li et al. (2023) and WizardCoder Luo et al. (2023), are pretrained on trillions of tokens. They are two orders of magnitude bigger than CodeBERT (which has approximately 125 million parameters) and have orders of magnitude more data used for pretraining. They achieve better results on language and code understanding tasks, providing an opportunity for improved transfer learning. The vulnerability detection task can benefit from the increased expressiveness and knowledge contained in state-of-the-art LLMs.

Vulnerability detection is a complex task with many aspects influencing and potentially limiting the solution quality. The problem of function-level vulnerability detection can be divided into two stages: representing the function in some form (such as execution paths or various graphs) and solving the classification problem for this representation Chakraborty et al. (2022); Zhang et al. (2023).
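This two-stage decomposition can be illustrated with a minimal sketch; the class, the toy tokenizer, and the rule-based stage-2 classifier below are hypothetical illustrations, not the paper's implementation:

```python
from dataclasses import dataclass
from typing import Callable, List

def tokenize(source: str) -> List[str]:
    """Stage 1: represent a Java function, here simply as whitespace tokens.
    Graph- or path-based representations would plug in the same way."""
    return source.split()

@dataclass
class TwoStageDetector:
    """Stage 2 consumes whatever stage 1 produces; each stage can be
    improved independently, improving the end-to-end detector."""
    represent: Callable[[str], List[str]]
    classify: Callable[[List[str]], int]  # 1 = vulnerable, 0 = safe

    def predict(self, source: str) -> int:
        return self.classify(self.represent(source))

# Toy stage-2 rule: flag string-concatenated SQL queries (illustrative only;
# a learned classifier such as a finetuned LLM replaces this in practice).
detector = TwoStageDetector(
    represent=tokenize,
    classify=lambda toks: int("executeQuery(" in "".join(toks) and "+" in toks),
)

print(detector.predict('return stmt.executeQuery( "SELECT * WHERE id=" + userId );'))  # -> 1
```

Swapping either field of `TwoStageDetector` changes one stage without touching the other, which is the sense in which the two stages are independently improvable.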
It is noted that each stage can be independently improved, and improving each stage enhances the overall quality of the solution to the problem. Justifications for these observations include: the EPVD model Zhang et al. (2023), where improving the data structure with the same ML algorithm resulted in gains, and the LineVul model Fu & Tantithamthavorn (2022) compared to articles that present the function text to an LSTM. Therefore, it is relevant to improve each of the two stages.

We focus on improving the second part, the ML algorithm for vulnerability detection, without addressing the first stage. Our approach involves two directions: enhancing the model's capacity and improving the handling of imbalanced data, which represents real-world conditions. One of the limiting factors could be the capacity and code understanding ability of CodeBERT-like models, as vulnerability detection requires a deep understanding of many code aspects. Checking this hypothesis is a crucial problem in the field of vulnerability detection. Hence, we compare our classification of functions, represented as text, with the previous most powerful family of models, the CodeBERT-based models.

Furthermore, the study by Cheshkov et al. (2023) revealed that simple prompting techniques for the ChatGPT LLM were unable to achieve noticeably better results than random prediction on the vulnerability detection task. This finding suggests that finetuning of LLMs on this task naturally becomes a promising research direction.
The real-world distribution of vulnerable functions is highly class imbalanced, containing far more negative samples than positive ones. This imbalance makes the task inherently more difficult, as standard training objectives, like cross-entropy loss, may lead the model to focus on the dominant negative class. Specialized training techniques, such as focal loss Lin et al. (2017) and sample weighting, are required to properly emphasize the minority positive examples during finetuning.

This project conducts experiments to find the optimal model architecture, training hyperparameters, and loss formulations for effective LLM finetuning on the vulnerability detection task (both for balanced and imbalanced datasets). The aim is to leverage recent advances in pretrained models to improve over prior benchmark results.

The remainder of the paper is organized as follows:

• We discuss the most notable related work in the field;
• We describe our approach for building and adapting the model. Specifically, we discuss how we solved the following problems:
  – Choosing the model;
  – Using limited GPU resources for the model size;
  – Adapting a decoder model for the classification task;
  – Underutilization of the input sequence length, resulting in a slow training speed.
• We detail our main instruments: datasets, models, metrics, as well as measurement procedures.
• We describe the conducted experiments:
  – Comparison with ContraBERT (state-of-the-art non-LLM transformer model) on balanced and imbalanced datasets;
  – Ablation study of the context size and classification loss;
  – Investigation of the batch packing technique;
  – Improving the usage of a more informative part of the imbalanced dataset.
• We conclude the study, highlighting our contributions and stating promising future research directions.

Appendix A provides additional details about the obtained distribution of CWE types in our dataset.

2 RELATED WORK

Recent work by Zhou et al. Zhou et al. (2024) provides a comprehensive review of Large Language Models (LLMs) for vulnerability detection and repair.
Their analysis reveals rapid growth in this field, with 27.8% of reviewed studies published in just the first two months of 2024. The majority of approaches (82%) employ fine-tuning techniques, while zero-shot (11%) and few-shot (7%) methods are less common. CodeBERT emerges as the dominant model for vulnerability detection, used in 38.5% of studies, with GPT-3.5 and GPT-4 following at 10.3% and 2.6% respectively. The review also highlights persistent challenges in the field, including class imbalance, noisy data, and scarcity of labeled datasets for training LLMs in vulnerability detection. Our work addresses these challenges by focusing on fine-tuning WizardCoder for Java vulnerability detection, contributing to this rapidly evolving research area.

Despite the successes of GPT models in solving various code-related tasks, their ability to detect vulnerabilities in code remains insufficient. The report "Evaluation of ChatGPT Model for Vulnerability Detection" (Cheshkov et al., 2023) notes that ChatGPT and GPT-3 models, without additional tuning and training, show results no better than random guessing in vulnerability classification. Similar conclusions are drawn in the study "ChatGPT for Vulnerability Detection, Classification, and Repair: How Far Are We?" (Michael Fu et al.) (Fu et al., 2023), where comparisons with specialized models such as CodeBERT and GraphCodeBERT reveal a significant lag of ChatGPT in vulnerability-related tasks.

Another notable example is the study by Liu et al. (2023) (Liu et al., 2023b), where the GPT model was enhanced using retrieval-augmented generation (RAG). Techniques such as BM-25 and TF-IDF were employed to retrieve relevant information, which was then used as examples for the few-shot learning technique. Using the Devign dataset, the study demonstrated significant improvements in precision, recall, and F1 score compared to traditional methods.

Recent work by Purba et al. Purba et al.
(2023) explores the application of large language models (LLMs) for software vulnerability detection in C and C++ code. They evaluate four LLMs, including fine-tuned and zero-shot models, on SQL injection and buffer overflow vulnerabilities using the Code Gadgets and CVEfixes datasets. Their findings reveal that while LLMs demonstrate high recall in identifying vulnerable code patterns, they suffer from high false positive rates. This aligns with our observations on the challenges of using LLMs for vulnerability detection. However, our study extends this approach to Java, a domain where LLM-based vulnerability detection remains underexplored. We focus specifically on fine-tuning the state-of-the-art WizardCoder model for Java code analysis and introduce novel techniques to improve training efficiency and handle class imbalance. Unlike Purba et al., who primarily use prompt-based approaches, we explore more advanced fine-tuning methods and provide a detailed analysis of the model's performance on Java-specific vulnerability types, contributing to the limited body of work on LLM-based vulnerability detection for Java.

3 IMPLEMENTATION DETAILS

3.1 CHALLENGES

We briefly describe the main milestones and challenges that we encountered during the implementation of our approach.

LLM model selection for finetuning Recently, there has been a rise in the number of open-sourced LLM models trained on large code corpora Li et al. (2023); Luo et al. (2023); Zheng et al.
(2023); Nijkamp et al. (2023). The task is to choose an optimal model for finetuning, both from the perspective of quality and potential technical difficulties encountered. The goal is to select a model possessing the following properties:

• It shows a strong potential for effective transfer learning to the vulnerability detection task;
• It is technically easy to use and integrate into our framework.

Adapting LLMs for limited computational resources Given our computing capabilities, finetuning complete LLM models is infeasible and would be inefficient even with increased resources. To enable effective finetuning of a 13-billion parameter model on our hardware, we need methods to reduce the number of trained parameters and memory requirements.

Adapting LLMs for the classification task The standard pretraining task for LLM decoders is the next token prediction, which differs from our end task of vulnerability classification. Including the next token prediction component in the loss may misguide weight updates away from the optimal state for classification. To fully leverage the capabilities of the LLM weights for our target task, a better strategy might be to leave only the classification term in the loss.

Improving a slow training speed The standard sequence classification approach of padding to a fixed length makes training extremely slow. To mitigate the issue of short sequence lengths, the task is to pack multiple functions into each training sequence. This can decrease unused padded tokens, improving computational efficiency.

3.2 CHOOSING AN LLM TO FINETUNE

Recently, there has been a proliferation of open-sourced LLM models trained on large code corpora Li et al. (2023); Luo et al. (2023); Zheng et al. (2023); Nijkamp et al. (2023). The task is to select an optimal model for finetuning.

First, we attempted to use the CodeGeeX model Zheng et al. (2023). Unfortunately, we found that using this model was very difficult from the technical perspective, as the code provided by the authors does not support standard libraries for transformer training and large model adaptation. We were able to run the model on 8 V100 GPUs for inference, but this memory capacity was insufficient for finetuning.
The lack of support for common LLM adaptation libraries made it very challenging to overcome the memory limitations, so we decided to discontinue attempts with this model.

Next, we explored two other models, WizardCoder Luo et al. (2023) and CodeGen Nijkamp et al. (2023). In contrast to CodeGeeX, these models include support for major transformer training frameworks like DeepSpeed, HuggingFace Transformers, Peft Mangrulkar et al. (2022), etc. This compatibility with standard libraries significantly eased the application of low-memory adaptation techniques.

When conducting a simple Question Answering training, we find that WizardCoder is much better than CodeGen at directly answering "YES" and "NO". CodeGen sometimes responded with "TRUE", "FALSE", erratic "YES" and "NO" with additional characters, and has a worse sequence stopping behavior. Given WizardCoder's stronger capabilities on this task, we focused our subsequent experiments on this model.

Choosing the best StarCoder-family model In our study, we evaluate three models from the StarCoder family for finetuning: StarCoderBase, StarCoder Li et al. (2023) (which is StarCoderBase finetuned on the Python dataset), and WizardCoder Luo et al. (2023) (which is an improved version of StarCoder trained using the eval-instruct technique). We compared the performance of StarCoder and StarCoderBase on the question answering task and found no significant differences between the two models, despite the fact that StarCoder was specifically adapted for Python. Subsequently, we primarily used the WizardCoder model, as it is claimed to be superior to StarCoder. However, our experiments did not reveal any significant differences in performance between these two models. In our future writing, we will use the term "StarCoder" to refer specifically to the WizardCoder variant of the StarCoder family.

Generally, more elaborate experiments are needed to find if there are any differences in the performance of these models.
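One minimal way to score such a YES/NO probe is to compare the model's next-token probabilities for the two answer tokens. The helper below is a framework-agnostic sketch, not the paper's code: it works on a raw next-token logit vector, and the model call that would produce it is left as a hypothetical comment.

```python
import math
from typing import Sequence

def yes_probability(next_token_logits: Sequence[float],
                    yes_id: int, no_id: int) -> float:
    """Renormalize the next-token distribution over just the two answer
    tokens {YES, NO} and return P(YES).

    `next_token_logits` would come from a causal LM, e.g. (hypothetical):
        next_token_logits = model(prompt_ids).logits[0, -1, :]
    """
    yes_logit, no_logit = next_token_logits[yes_id], next_token_logits[no_id]
    m = max(yes_logit, no_logit)        # subtract max for numerical stability
    e_yes = math.exp(yes_logit - m)
    e_no = math.exp(no_logit - m)
    return e_yes / (e_yes + e_no)

# Toy 10-token vocabulary: pretend id 7 is "YES" and id 3 is "NO".
logits = [0.0] * 10
logits[7], logits[3] = 2.0, 0.5
p = yes_probability(logits, yes_id=7, no_id=3)
print(f"P(YES) = {p:.3f}")              # well above 0.5 here
```

Restricting attention to the two answer tokens sidesteps the erratic outputs ("TRUE", extra characters) mentioned above, since all other vocabulary entries are simply ignored.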
3.3 ADAPTING LLMS FOR LIMITED COMPUTATIONAL RESOURCES

Finetuning complete multi-billion parameter LLMs is infeasible given our hardware constraints, even if more resources were obtained. To enable effective finetuning, we require techniques to reduce the number of trained parameters and memory needs. The HuggingFace Peft Mangrulkar et al. (2022) library implements several methods tackling this, including the LORA method Hu et al. (2022). LORA enables full-model finetuning by decomposing the weight matrices into low-rank approximations, drastically decreasing parameters and memory.

We successfully applied LORA to the WizardCoder model. We used the following optimal LORA settings: r = 8, alpha = 32, dropout = 0.05. This reduced the 13 billion parameter model to just 25 million trainable parameters, fewer than CodeBERT. The results validate LORA's effectiveness for LLM adaptation under memory limitations.

3.4 ADAPTING LLMS FOR CLASSIFICATION

Typically, code LLMs are trained on a large amount of unlabeled code data in a special way. The principle is to iteratively take code tokens as input, predict the next token, and compare it with the ground truth. This principle drives the next token prediction loss. Specifically, for any input sequence {x_1, x_2, ..., x_N} of length N, the output of an LLM is a probability distribution of the next token P(x_{N+1} | x_1, x_2, ..., x_N, Θ) = p_{N+1} ∈ [0,1]^v, where Θ represents all parameters of the
model, and v is the vocabulary size. By comparing it with the real distribution, i.e., a one-hot vector y_{N+1} ∈ {0,1}^v of the ground-truth token, we can optimize the cumulative cross-entropy loss:

    L_ntp = − Σ_{n=1}^{N−1} y_{n+1} log P(x_{n+1} | x_1, x_2, ..., x_n, Θ)    (1)

In the context of adapting a language model for classification tasks, the objective function used during training needs to be aligned with the classification goal rather than the next token prediction. For an input sequence {x_1, x_2, ..., x_N} of length N, the traditional generative pretraining objective (next token prediction loss) would compute the loss across all predicted tokens in the sequence. However, this is suboptimal for a classification task such as vulnerability classification, which is concerned with categorizing the entire input sequence rather than predicting subsequent tokens.

To address this, we propose a classification loss that leverages only the predicted probability of the final token x_N, which is then matched against the ground truth label y using cross-entropy. This loss is expressed mathematically as:

    L_class = − log P(y | x_1, x_2, ..., x_N, Θ)    (2)

Here, y represents the correct class label for the sequence, and P(y | x_1, x_2, ..., x_N, Θ) is the probability that the model, parameterized by Θ, assigns to the correct class. This formulation ensures that weight updates during training are driven exclusively by the classification objective, without any influence from the generative task of next token prediction. The classification loss may be viewed as corresponding to the last term of the generative pretraining objective's (next token prediction) cross-entropy loss, focusing entirely on the classification output for n = N.

Eliminating the next token prediction objective enables full utilization of the pretrained weights for vulnerability detection, without interference from unrelated generation tasks.

3.5 SPEEDING UP THE CLASSIFICATION TRAINING

The standard approach of passing code samples to the LLM by writing tokens into the context window and padding unused slots is problematic for our data. Most dataset functions have length under 50 tokens (see Table 1), so this padding wastes computation.
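The classification objective of equation (2), summed over a batch as in equation (3), amounts to a summed (not averaged) binary cross-entropy read off at each function's final token. A plain-Python sketch, assuming the per-function vulnerability probabilities have already been extracted from the model:

```python
import math
from typing import Sequence

def batch_classification_loss(probs: Sequence[float],
                              labels: Sequence[int]) -> float:
    """Summed binary cross-entropy over the functions in one batch,
    following equation (3).

    probs[i]  : model probability that function i is vulnerable,
                read off at that function's last token.
    labels[i] : 1 if function i is vulnerable, else 0.
    """
    assert len(probs) == len(labels)
    loss = 0.0
    for p, y in zip(probs, labels):
        loss += -y * math.log(p) - (1 - y) * math.log(1 - p)
    return loss  # summed over the batch rather than averaged

# Two functions: one confidently safe, one weakly flagged as vulnerable.
print(batch_classification_loss([0.1, 0.6], [0, 1]))
```

Only the last-token probability of each function enters the loss, so the generative next-token terms of equation (1) contribute nothing to the gradient, which is exactly the point of the formulation above.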
Function length (tokens) | Number of functions
0-10      | 1976
10-20     | 31627
20-50     | 51972
50-100    | 32194
100-300   | 35769
300-500   | 8756
500-1000  | 5515
1000-2000 | 2236
>2000     | 767

Table 1: Function length histogram, obtained from parsing several projects

We see that most functions have less than 50 tokens, with many having 10-20 tokens. Using one function per batch element (padded to 2048) is highly redundant.

To mitigate the issue of short sequence lengths, we pack multiple functions into each training sequence. This increases the actual batch size versus using one function per sequence.

Thus, during our study we focus on the following two regimes of using code LLMs:

1. Next token prediction. This approach corresponds to using the next token prediction loss (equation 1) during training and prediction. During training, an LLM's input sequence is packed with the code of training methods and their ground truth labels one after another in the following format:

   Method's code + YES/NO (binary ground truth label) + ⟨EOS⟩

   Here ⟨EOS⟩ denotes the special "end of sequence" token from the vocabulary. We fit as many full training examples as possible into the input sequence length, which is equal to 2048 for StarCoder-based models and 512 for CodeBERT-based ones. The rest of the positions are filled with the special padding token. For prediction, the binary classification token is generated according to equation 1, and its probability is taken for ROC AUC calculation;

2. Binary classification.
This regime uses the binary cross entropy classification loss (equation 2) for training and prediction. Specifically, the training scheme for the binary classification loss is formalized as follows:

(a) Load a batch with as many complete function codes as it can fit, in the following format:

    Method's code + ⟨EOS⟩

(b) Populate an array of labels, assigning a label to each function in the batch, where the label is either 1 or 0, indicating whether the function is vulnerable or not, respectively;
(c) During loss computation, match the predicted probability of the next token for the last token of each function with its corresponding label from the label array;
(d) Calculate the cross-entropy loss for each pair of predicted probability and actual label using equation 2;
(e) Sum up the cross-entropy loss values across all predictions in the batch (not the average, which usually works worse, as will be investigated later);
(f) The resulting sum is the loss for the batch, which is used to update the model's weights during training.

The loss L_batch for a batch is thus given by:

    L_batch = Σ_{i=1}^{B} [ −y_i · log(p_i) − (1 − y_i) · log(1 − p_i) ]    (3)

where B is the number of functions in the batch, y_i is the true label for the i-th function, and p_i is the model's predicted probability that the i-th function is vulnerable.

Due to the memory limits of our hardware, we limited the batch size to only one input sequence during training. For prediction, no batch packing is used and only one test function is fitted into the entire input sequence, with the rest of the token positions filled by the padding token.

The baseline classification approach achieved only 1500 training samples per hour. This extremely slow training throughput made iterative experiments and tuning infeasible on our dataset. In contrast, batch packing provided a 13x speedup to 20000 samples per hour. By concatenating multiple short sequences into one input example, we can significantly boost efficiency for tasks like ours with small
input functions. The key advantage of batch packing is reducing unused padding tokens. By packing sequences, a much larger proportion of computation goes towards informative function tokens rather than wasted padding.

In our case, batch packing enables practical finetuning by speeding up training by an order of magnitude. This allows reasonable iteration times for experimentation. Also, this approach is generally applicable when finetuning decoder models on tasks with short input sequences.

There is room for further enhancement of the batch packing method. The dynamic batch size could be stabilized using dynamic gradient accumulation steps. The dynamic accumulation would accumulate gradients until a target batch size is reached before applying an optimizer step. This would stabilize the batch size while retaining the efficiency benefits of packing sequences.

However, this poses some technical challenges. It requires non-trivial modifications to the training loop code in HuggingFace Transformers to support dynamic gradient accumulation.

4 EXPERIMENTAL SETUP

4.1 DATASETS

Sources Our vulnerability dataset is constructed from several open-source vulnerability datasets: CVEfixes Bhandari et al. (2021), a Manually-Curated Dataset Ponta et al. (2019) and VCMatch Wang et al. (2022).

The CVEfixes dataset includes information about vulnerabilities and their fixes in open-source software, with data collected from the NVD database and other repositories. However, some inconsistencies and errors may be present due to the automatic construction process.

A Manually-Curated Dataset focuses on the Java language only and includes data for 624 publicly disclosed vulnerabilities across 205 distinct open-source Java projects. The dataset is considered more reliable as fix-commits are found by experts.

VCMatch provides a ranking-based approach for automatic security patch localization for OSS vulnerabilities. It includes data for only 10 popular repositories and only contains fix-commits.

Dataset labeling Each of the aforementioned datasets has standalone functions as its elements.
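The batch packing scheme of Section 3.5 (fill each 2048-token sequence with as many code + label + ⟨EOS⟩ examples as fit, padding only the remainder) can be sketched as follows; the token ids and special-token ids are illustrative stand-ins for a real tokenizer, not the paper's implementation:

```python
from typing import List, Tuple

PAD, MAX_LEN = 0, 2048

def pack_examples(examples: List[Tuple[List[int], int]],
                  max_len: int = MAX_LEN) -> List[List[int]]:
    """Greedily pack `(token_ids, label)` examples into sequences of at
    most `max_len` tokens. Each example contributes its code tokens, one
    label token, and an EOS token; leftover slots are padded. Assumes
    every single example fits within `max_len`."""
    YES_TOK, NO_TOK, EOS = 1, 2, 3   # hypothetical special token ids
    sequences: List[List[int]] = []
    current: List[int] = []
    for ids, label in examples:
        chunk = ids + [YES_TOK if label else NO_TOK, EOS]
        if len(current) + len(chunk) > max_len and current:
            sequences.append(current + [PAD] * (max_len - len(current)))
            current = []
        current = current + chunk
    if current:
        sequences.append(current + [PAD] * (max_len - len(current)))
    return sequences

# 100 short functions (~20 tokens each with label and EOS) fit into a
# single packed sequence instead of 100 mostly-padding sequences.
funcs = [([10] * 18, i % 2) for i in range(100)]
packed = pack_examples(funcs)
print(len(packed))  # -> 1
```

The reduction from 100 sequences to 1 in this toy run mirrors the roughly order-of-magnitude throughput gain reported above, since almost no computation is spent on padding tokens.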
In each of the datasets, functions are labeled as vulnerable or non-vulnerable based on heuristics derived from analyzing vulnerability-fixing commits:

1. From vulnerability-fixing commits, the changed functions are extracted;
2. The pre-change versions are taken as vulnerable functions, while the post-change versions are taken as non-vulnerable ones;
3. Pairs of changed functions are used to create a balanced dataset. Functions that remained unchanged are labeled as non-vulnerable as well.

Dataset filtering Vulnerability-fixing commits often contain changes related to cleanups, refactors, and irrelevant functionality changes. Therefore, the described labeling heuristics produce some amount of false positive labels. To address this issue, we create a modified dataset: we extract a function as vulnerable from a vulnerability-fixing commit if this function is the only one that has changed in the commit. We call this dataset X_1. This dataset has the following advantages:

• Its labels are more robust, as commits fixing only one function are more likely to fix only vulnerabilities;
• The dataset is small, which makes it a good instrument for investigating the algorithmic performance and finding optimal parameters.

More specifically, extracting functions from the commits fixing only one function mitigates labeling errors that stem from irrelevant code and cleanup changes Croft et al. (2023).

Addition of easy negative functions As a result of these procedures, we obtain a set of vulnerable functions (we call it P_1) and a set of vulnerable functions with fixed vulnerability (we call it P_2). We augmented these sets with a set of unchanged functions scraped from files associated with a patch, which do not have any vulnerabilities. We call this part P_3.

Dataset characteristics Finally, we obtain the following two datasets:

• X_1 without P_3.
  It has 1334 samples, with 810 samples in the train part, 272 samples in the validation part and 252 samples in the test part. The balance of positive to negative classes is approximately equal to 1:1;
• X_1 with P_3.
  It has 22945 samples, with 13247 samples in the train part, 5131 samples in the validation part and 4567 samples in the test part. The balance of positive to negative classes is approximately equal to 1:34, and the majority of the negative class is drawn from the P_3 part.

In Appendix A, Figures 1 and 2 show the distribution of the top 19 CWE types for the resulting test datasets with and without the P_3 part.

4.2 EVALUATION METRICS

We use a standard procedure for model training and evaluation. We train a model for a predefined number of epochs, choose the best checkpoint by ROC AUC on the validation dataset, and evaluate the chosen checkpoint on the test dataset.

The metrics reported include ROC AUC, F1 score for the positive class (vulnerable functions), and the optimal classification threshold (used for the F1 score). The optimal classification threshold is determined on the validation dataset. In most cases, we report both validation and test metrics.

Hyperparameter optimization is performed on the validation dataset using the best validation metrics performance.

4.3 PRETRAINED MODELS

In this work, the primary pretrained model architecture employed is WizardCoder Luo et al. (2023), a transformer-based neural network with a decoder-only architecture. With over 13 billion parameters, WizardCoder was pretrained using a causal language modeling objective on a large collection of GitHub source code, endowing the model with extensive knowledge of natural programming language constructs.

We compare WizardCoder to ContraBERT Liu et al. (2023a), which is the state-of-the-art model
in the CodeBERT family. In contrast to vanilla CodeBERT, ContraBERT introduces an enhanced encoder architecture obtained by cooperation of two pretrained CodeBERT encoders. ContraBERT is pretrained using the usual masked language modeling (MLM) along with a number of contrastive pretraining tasks, and can be finetuned for the defect detection task Liu et al. (2023a). ContraBERT can be considered an enhancement of the CodeBERT model, which showcases more accurate and robust results than CodeBERT for the defect detection task. Hence, ContraBERT is selected for the experiments.

4.4 FULL REPRODUCIBILITY PACKAGE

All necessary code and data for reproducing our experiments are available in our GitHub repository1.

5 RESEARCH QUESTIONS

5.1 COMPARISON TO CODEBERT-BASED MODELS

RQ1: Is a StarCoder-based model more effective than a CodeBERT-based one for the balanced vulnerability detection task?

The models were finetuned on the dataset without easy P_3 negatives, using different hyperparameters like the batch size, learning rate, and number of epochs. Optimal settings for this dataset were determined.

RQ2: Is a StarCoder-based model more effective than a CodeBERT-based one for the imbalanced vulnerability detection task?

Different training recipes, like the number of epochs and loss functions, were compared between the datasets with and without P_3 in order to analyze the transferability of hyperparameters.

5.2 ABLATION STUDY

RQ3: Is the standard LLM training approach effective for vulnerability detection?

We formulate the vulnerability detection task as a question answering problem where the model predicts the next token in addition to a binary vulnerability label. The goal is to leverage the standard training approach for LLM decoder models, applying it to our classification task. Training utilizes a cross-entropy loss for both the token prediction and vulnerability classification objectives.

Batch packing properties. Batch packing enables dynamic batch sizes but may cause unstable gradients. The key questions in this direction are:

• RQ4: Does the dynamic batch size harm the model quality or not?
• RQ5: Does the mean or the sum loss reduction perform better?

RQ6: Does an increased context size provide improvements in quality?

To determine sensitivity to context, the model was trained with a reduced input sequence length of 512 tokens instead of 2048 on the dataset without the P3 part. Then, the results were compared in order to analyze the impact.

5.3 IMBALANCED CLASSIFICATION IMPROVEMENT

RQ7: Is the focal loss with sample weighting effective for tackling the vulnerability detection class imbalance problem?

On the full dataset, focal loss and sample weighting techniques were applied to emphasize the minority positive examples from P1. The gamma parameter of the focal loss and the sampling weights were systematically varied to assess their effect on performance.

For all experiments and analyses, the main evaluation metrics were ROC AUC, F1 score, accuracy, and the optimal classification threshold. The goal was to make finetuning possible for classification, as well as to determine the best practices for finetuning on an imbalanced task.

Source code and dataset: https://github.com/rmusab/vul-llm-finetune

6 EXPERIMENTAL RESULTS

6.1 COMPARISON TO CODEBERT-BASED MODELS

RQ1: Is a StarCoder-based model more effective than a CodeBERT-based one for the balanced vulnerability detection task?

We compared the StarCoder-based model with the CodeBERT-based model on the dataset X1 without P3. This is a balanced dataset without easy negative examples.

The methodology is the following: we take a pretrained model, manually optimize its learning hyperparameters using the training and validation datasets, and then determine the quality of the best model on the test dataset. The optimized hyperparameters include the batch size, learning rate, and number of epochs.

The results are presented in Table 2. We see that the finetuned WizardCoder surpasses the finetuned ContraBERT both in ROC AUC and F1 metrics on the balanced vulnerability detection task.

This superior performance of WizardCoder can be attributed to its larger model capacity of 13 billion parameters and pretraining on a much larger code corpus.
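The evaluation recipe described above (pick the checkpoint by validation ROC AUC, then pick the classification threshold that maximizes validation F1 and reuse it on the test set) can be sketched in a few lines of plain Python. The probabilities and labels below are hypothetical stand-ins for a model's validation predictions, not data from the paper.

```python
def f1_score(labels, preds):
    """Plain F1 for the positive class, computed from scratch."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_f1_threshold(probs, labels):
    """Sweep candidate thresholds and keep the one maximizing validation F1."""
    best_t, best_f1 = 0.5, -1.0
    for t in sorted(set(probs)):
        preds = [1 if p >= t else 0 for p in probs]
        f1 = f1_score(labels, preds)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

# Hypothetical validation predictions for an imbalanced task.
val_probs  = [0.05, 0.10, 0.30, 0.85, 0.20, 0.90, 0.15, 0.70]
val_labels = [0,    0,    1,    1,    0,    1,    0,    0]
t, f1 = best_f1_threshold(val_probs, val_labels)
```

The returned threshold is then frozen and applied to the test predictions, so test labels are never used for threshold selection.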
Base model    ROC AUC    F1
ContraBERT    0.66       0.68
WizardCoder   0.69       0.71

Table 2: ROC AUC and F1 scores for finetuned ContraBERT and WizardCoder models on the X1 without P3 dataset.

To finetune the WizardCoder model, several important hyperparameters were identified: a batch size of 1 (the actual batch size becomes approximately 120 by packing functions into the batch), a learning rate of 0.0001, and 50 epochs of training with a cosine annealing schedule. The best checkpoint occurred at epoch 19, indicating that sufficient training time is needed.

The number of epochs for the cosine schedule is crucial, as it controls the learning rate distribution across epochs and greatly impacts training regimes.

RQ2: Is a StarCoder-based model more effective than a CodeBERT-based one for the imbalanced vulnerability detection task?

We compared the StarCoder-based model with the CodeBERT-based model on the X1 with P3 dataset. This is an imbalanced dataset where the majority of samples are easy negative examples. This dataset is closer to the real-world vulnerability distribution.

The methodology of comparison is the same as in RQ1. The optimal training hyperparameters appeared to be the same for both datasets (X1 with and without P3). The identical optimal settings imply that the models train similarly on both datasets.

The results are presented in Table 3. We see that the finetuned WizardCoder surpasses the finetuned ContraBERT both in ROC AUC and F1 metrics on the imbalanced vulnerability detection
task. However, the achieved gains over ContraBERT in ROC AUC are smaller compared to the X1 without P3 dataset. A potential reason is underutilization of the more informative P1 and P2 partitions on this dataset.

Base model    ROC AUC    F1
ContraBERT    0.85       0.22
WizardCoder   0.86       0.27

Table 3: ROC AUC and F1 scores for finetuned ContraBERT and WizardCoder models on the X1 with P3 dataset.

6.2 ABLATION STUDY

RQ3: Is the standard LLM training approach effective for vulnerability detection?

The WizardCoder model was finetuned on the vulnerability dataset by formulating the task as a question answering problem. Specifically, the model was trained to predict the next token in addition to the binary vulnerability label on the X1 with P3 dataset. The results are presented in Table 4.

Base model    Training approach        ROC AUC
WizardCoder   Next token prediction    0.75
CodeBERT      Binary classification    0.85
WizardCoder   Binary classification    0.86

Table 4: ROC AUC scores for finetuning with the next token prediction and classification objectives on the imbalanced X1 with P3 dataset.

The next token prediction approach achieved a ROC AUC score of 0.75 for WizardCoder on the X1 with P3 dataset. This result is inferior to the CodeBERT and WizardCoder models trained with classification-only objectives (0.85 and 0.86, respectively), but still surpasses random performance.

Batch packing properties. The batch packing approach introduces some unique properties:

• The actual batch size becomes dynamic during training as sequences are packed;
• Using the standard mean loss reduction gives different functions unequal influence on the gradient, which may harm the overall performance;
• Using the sum loss reduction scales training steps unevenly, which may also be harmful to performance.

This raises two research questions:

• RQ4: Does the dynamic batch size harm the model quality or not?
• RQ5: Does the mean or the sum loss reduction perform better?

RQ4: Does the dynamic batch size harm the model quality or not?

The batch packing approach introduces a dynamic batch size during training, which allows decreasing the training time by up to 13 times.
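A minimal sketch of the packing idea: tokenized functions are greedily appended to a context window until the token budget (2048 here) is exhausted, so the number of functions per optimizer step, i.e. the effective batch size, becomes dynamic. The helper and the token lengths are illustrative assumptions, not the authors' implementation; the optional `max_functions` cap mirrors the limiting variant examined below.

```python
def pack_functions(token_lengths, context_size=2048, max_functions=None):
    """Greedily pack tokenized functions into fixed-size context windows.

    Returns a list of windows; each window is a list of function indices
    whose total token count fits into one context. The number of functions
    per window (the effective batch size) is therefore dynamic.
    """
    batches, current, used = [], [], 0
    for idx, length in enumerate(token_lengths):
        too_long = used + length > context_size
        too_many = max_functions is not None and len(current) >= max_functions
        if current and (too_long or too_many):
            batches.append(current)
            current, used = [], 0
        current.append(idx)
        used += length
    if current:
        batches.append(current)
    return batches

# Hypothetical token lengths: many short functions pack into few windows.
lengths = [300, 800, 120, 1500, 60, 900, 2048]
packed = pack_functions(lengths)
```

Because several short functions share one forward pass, far fewer optimizer steps are needed than with one function per sequence, which is where the reported speedup comes from.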
Dynamic batch size is not a standard approach for training neural networks, so the following question arises: does it sacrifice model quality for the training speed improvement? In order to answer this question, we tried a variant of batch packing that limits the maximum number of functions in a batch. This limiting decreases the dispersion of the actual batch size distribution.

We tested two different values for the maximum number of functions per batch, 50 and 100, on the validation part of the X1 without P3 dataset. Our results are presented in Table 5. They imply that there is no significant influence of limiting the maximum number of functions per batch, indicating that dynamic batching may not have a detrimental effect on the model's performance.

Max. functions per batch    Validation ROC AUC
No limit                    0.72
50                          0.72
100                         0.72

Table 5: WizardCoder's results with limiting the maximum number of functions per batch on the validation part of the X1 without P3 dataset.

In summary, dynamic batch size works well empirically, and limiting it does not lead to any improvements.

RQ5: Does the mean or the sum loss reduction perform better?

Each kind of loss reduction has arguments against it:

• Using the standard mean loss reduction gives different functions unequal influence on the gradient, which may harm the overall performance;
• Using the sum loss reduction scales training steps unevenly, which may also be harmful to performance.

Thus, it becomes important to investigate whether the mean or the sum loss reduction is better suited for the batch packing approach. In order to answer this question, we performed testing on the validation part of the X1 without P3 dataset. The results are presented in Table 6. The mean loss reduction performed poorly, highlighting issues with the mean loss in comparison with the sum reduction.

The superior performance of the sum loss reduction indicates that it better handles the uneven sequence lengths arising from batch packing. By summing over the packed sequences, each function contributes equally to the gradient.
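The two reduction modes can be contrasted on a toy packed batch. With mean reduction, a function packed together with k − 1 neighbors receives only 1/k of the gradient weight of a function packed alone; with sum reduction every function enters the total with weight one. The per-function loss values below are made up for illustration.

```python
def batch_loss(per_function_losses, reduction="sum"):
    """Combine per-function classification losses inside one packed window."""
    total = sum(per_function_losses)
    if reduction == "mean":
        return total / len(per_function_losses)
    return total

# Two packed windows with different numbers of functions.
window_a = [0.9, 0.4, 0.2]   # three short functions packed together
window_b = [0.7]             # one long function alone

# Under mean reduction, each function in window_a is down-weighted by 1/3
# relative to the lone function in window_b; under sum reduction every
# function contributes to the total loss with the same weight of 1.
mean_total = batch_loss(window_a, "mean") + batch_loss(window_b, "mean")
sum_total = batch_loss(window_a, "sum") + batch_loss(window_b, "sum")
```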
Loss reduction    Validation ROC AUC
Mean              0.57
Sum               0.72

Table 6: WizardCoder's performance by loss reduction method on the validation part of the X1 without P3 dataset.

In summary, the sum loss reduction method outperforms the mean one by better handling uneven sequence lengths.

RQ6: Does an increased context size provide improvements in quality?

An open question was whether the quality improvements obtained by WizardCoder stemmed from the larger 2048-token context size versus the 512-token baseline, or from better code understanding by the model.

We conducted an experiment to isolate the impact of context size. The 2048 context size WizardCoder model was compared to the 512 context size version on the test part of the X1 without P3 dataset.

Context size    Test ROC AUC
512 tokens      0.69
2048 tokens     0.69

Table 7: The impact of WizardCoder's context size on performance.

Table 7 shows that reducing the context size from 2048 tokens to 512 tokens resulted in an identical test ROC AUC score of 0.69 for WizardCoder. This suggests the performance gains are mainly due to the model's learning of improved code representations, rather than the increased context size.

6.3 IMBALANCED CLASSIFICATION IMPROVEMENT

RQ7: Is the focal loss with sample weighting effective for tackling the vulnerability detection class imbalance problem?

The obtained gains of the StarCoder-based model over the CodeBERT-based one on the imbalanced X1 with P3 dataset are minor compared to the X1 without P3 dataset. A potential reason is underutilization of the more informative P1 and P2 parts on this dataset. In order to explore the influence of better utilization of the P1 and P2 parts on the model's quality, we conducted experiments that incorporate the focal loss and sample weighting techniques:

• Focal loss emphasizes hard and rare examples contained in the P1 and P2 parts;
• Sample weighting also focuses the model on the minority cases from the P1 and P2 parts.

Focal loss. Focal loss (Lin et al., 2017) applies a modulating factor (1 − p_t)^γ to the standard cross-entropy loss. This factor down-weights well-classified examples and focuses training on hard misclassified cases. The standard cross-entropy loss corresponds to γ = 0.

We tested γ values from 1 to 5 on the validation part of the X1 with P3 dataset. The results are presented in Table 8.

                   γ = 0    γ = 1    γ = 3    γ = 5
ROC AUC            0.858    0.873    0.868    0.852
F1                 0.277    0.265    0.272    0.250
Best val. epoch    12       12       17       11
Best threshold     0.087    0.288    0.265    0.4

Table 8: Resulting quality of WizardCoder for different γ values in focal loss on the imbalanced X1 with P3 dataset.

In summary, the focal loss with γ = 1 achieves a small improvement in ROC AUC over the standard cross-entropy loss (γ = 0). However, the gains are small.

The peak quality at γ = 1 followed by a decrease at higher γ values suggests the importance of balancing the emphasis on hard examples with retaining sufficient training signal from easier instances.

One potential reason for such small gains is that noisy or imperfect labels limit the ability of the focal loss to accurately identify the most important hard examples to prioritize. If the label of an example does not precisely reflect its difficulty, a heavy focus on hard cases may be less effective.

The inconsistent improvements demonstrate the need for further exploration of methods that leverage hard examples without detriment to learning on easier cases, while taking label quality into account.

An interesting observation is that the focal loss leads to better calibrated models, with thresholds closer to 0.5.
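A minimal sketch of the loss family explored in RQ7: the modulating factor (1 − p_t)^γ from Lin et al. (2017), combined with a per-sample weight that would be set above 1 for P1 and P2 examples. The function and the probabilities are illustrative assumptions, not the exact training code.

```python
import math

def focal_loss(p, y, gamma=1.0, weight=1.0):
    """Binary focal loss for one example.

    p: predicted probability of the positive class; y: label in {0, 1}.
    gamma = 0 recovers weighted cross-entropy; the modulating factor
    (1 - p_t)**gamma damps well-classified examples. The per-sample
    weight stands in for the P1/P2 up-weighting scheme (illustrative).
    """
    p_t = p if y == 1 else 1.0 - p
    return -weight * (1.0 - p_t) ** gamma * math.log(p_t)

# A confidently correct prediction is damped far more than a hard error.
easy = focal_loss(0.95, 1, gamma=1.0)        # well-classified positive
hard = focal_loss(0.10, 1, gamma=1.0)        # badly misclassified positive
plain_easy = focal_loss(0.95, 1, gamma=0.0)  # ordinary cross-entropy
```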
The thresholds closer to 0.5 indicate that the predicted class probabilities are closer to the true values under the focal loss. Such better calibration is a useful characteristic.

Adding sample weights. Sample weighting assigns higher importance weights to the P1 and P2 examples compared to P3 during training. Weights from 1.0 to 30.0 were tried. Larger weights place more emphasis on the minority informative data. A similar technique was reported to be effective in combination with the focal loss in the original article (Lin et al., 2017).

The applied technique is different from the usual class weighting scheme since it puts more weight on both the samples of the positive (P1) and negative (P2) parts. In previous experiments, we found that the class weighting technique was ineffective, so the sample weighting scheme provides a different perspective.

The numeric results are presented in Table 9.

Weight        Val. AUC    Best val. epoch    Test AUC    Test F1    Best threshold
30            0.876       6                  0.863       0.22       0.42
10.0          0.878       13                 0.86        0.24       0.44
3.0           0.88        12                 0.875       0.265      0.07
1.0 (base)    0.87        12                 0.858       0.277      0.08

Table 9: Resulting quality of WizardCoder for different values of P1+P2 weights on the imbalanced X1 with P3 dataset.

From Table 9 we conclude that adding sample weights for the P1 and P2 partitions can provide small improvements in ROC AUC and F1 score over the baseline. However, large weight values degrade performance.

The best weighting scheme of 3x attains marginal gains in both ROC AUC and F1 over unweighted training. However, excessive weighting, like 30x, hurts the validation and test metrics.

This indicates there is an optimal moderate weighting that slightly emphasizes the hard and rare P1 and P2 examples without skewing the distribution too heavily. Further tuning of the weighting factor may yield additional gains.

Combination of sample weights with focal loss. Both the focal loss and additional weights do similar work. In the original work (Lin et al., 2017), it was stated that adding weights to the focal loss leads to some improvements.

Therefore, we combined the focal loss with the sample weighting technique. The results are presented in Table 10.
Weight    γ    Val. AUC    Test AUC    Test F1
3         1    0.88        0.877       0.273
10        3    0.872       0.853       0.226
10        1    0.877       0.863       0.243
3         3    0.874       0.865       0.286
None      1    0.87        0.873       0.265
None      3    0.87        0.868       0.272

Table 10: The results of WizardCoder using the focal loss and the sample weighting on the imbalanced X1 with P3 dataset.

Experiments with the focal loss and sample weighting demonstrated minor improvements over the baseline training, with the best model achieving a 0.877 ROC AUC versus 0.86 for the baseline. However, neither technique provided substantial gains. More advanced methods that can explicitly account for a severe class imbalance are needed.

7 THREATS TO VALIDITY

There are some potential threats to validity that could limit the objectiveness of our study:

• The dataset is not split by projects, and splitting by projects might have resulted in a significant decrease in quality;
• When finetuning a model in the classification regime with batch packing, the model is able to see the functions that precede the current one in the input. This could limit the training effectiveness of the resulting model. However, the negative impact on training should be mitigated since these functions are placed randomly, so it would be hard for the model to learn irrelevant features. Also, during inference on test data, only one function was considered in any single input sequence, so there is no bias in the test scores;
• The function-level granularity of our dataset precludes vulnerabilities that span across multiple functions;
• The dataset size is relatively small, which might limit its representativeness of the true vulnerability distribution.

8 CONCLUSION

Our work demonstrates the effectiveness of finetuning large language models for the vulnerability detection problem in source code. The WizardCoder model achieved state-of-the-art results, with a ROC AUC score of 0.69 on the dataset without easy negative examples. This improves over previous CodeBERT-based models, likely due to WizardCoder's larger model capacity and pretraining corpus.

Several key contributions are made:

• An efficient batch packing strategy is developed to mitigate small sequence lengths, providing an over 13x speedup in training time. This enables faster iteration and tuning;
• An improvement in ROC AUC from 0.66 to 0.69 was obtained over the state-of-the-art non-LLM model ContraBERT on the X1 dataset without the P3 part. An improvement of the F1 score was obtained for the dataset with P3 (0.27 vs. 0.22);
• For the highly imbalanced dataset, it was shown that the focal loss with sample weighting improves ROC AUC from 0.86 to 0.878. Despite these improvements, more advanced methods are needed to properly emphasize the minority positive examples;
• A new, more precise vulnerability benchmark dataset is collected. It is smaller than most of the available datasets, but possesses quality labels that are free from the errors stemming from irrelevant code and cleanup changes (Croft et al., 2023).

Opportunities remain for further improvements in this approach, mainly related to training on the dataset with the P3 part included. Future work should explore techniques like curriculum learning, active sampling, and data augmentation to better leverage the scarce minority data. The insights gained can guide research on broader tasks involving code analysis and understanding.

We show an improvement in the quality of vulnerability detection by finetuning the WizardCoder LLM for this task.
Considering the quality needed for business tasks, there is still a gap between the achieved and required quality. This leaves a big room for improvement, particularly in choosing the optimal representation of vulnerabilities and the usage of broader project-level context information. These are very exciting prospects for future research.

REFERENCES

Guru Bhandari, Amara Naseer, and Leon Moonen. CVEfixes: automated collection of vulnerabilities and their fixes from open-source software. In Proceedings of the 17th International Conference on Predictive Models and Data Analytics in Software Engineering, PROMISE 2021, pp. 30–39, New York, NY, USA, 2021. Association for Computing Machinery. doi: 10.1145/3475960.3475985.

S. Chakraborty, R. Krishna, Y. Ding, and B. Ray. Deep learning based vulnerability detection: Are we there yet? IEEE Transactions on Software Engineering, 48(09):3280–3296, Sep 2022. doi: 10.1109/TSE.2021.3087402.

Anton Cheshkov, Pavel Zadorozhny, and Rodion Levichev. Evaluation of ChatGPT model for vulnerability detection, 2023.

Roland Croft, M. Ali Babar, and M. Mehdi Kholoosi. Data quality for software vulnerability datasets. In Proceedings of the 45th International Conference on Software Engineering, ICSE '23, pp. 121–133. IEEE Press, 2023. doi: 10.1109/ICSE48619.2023.00022.

Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. CodeBERT: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 1536–1547, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.139.

Michael Fu and Chakkrit Tantithamthavorn. LineVul: A transformer-based line-level vulnerability prediction.
In 2022 IEEE/ACM 19th International Conference on Mining Software Repositories (MSR), pp. 608–620, 2022. doi: 10.1145/3524842.3528452.

Michael Fu, Chakkrit Kla Tantithamthavorn, Van Nguyen, and Trung Le. ChatGPT for vulnerability detection, classification, and repair: How far are we? In 2023 30th Asia-Pacific Software Engineering Conference (APSEC), pp. 632–636, 2023. doi: 10.1109/APSEC60848.2023.00085.

Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=nZeVKeeFYf9.

Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra
Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. StarCoder: may the source be with you!, 2023.

Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2999–3007, 2017. doi: 10.1109/ICCV.2017.324.

Shangqing Liu, Bozhi Wu, Xiaofei Xie, Guozhu Meng, and Yang Liu. ContraBERT: Enhancing code pre-trained models via contrastive learning. In Proceedings of the 45th International Conference on Software Engineering, ICSE '23, pp. 2476–2487. IEEE Press, 2023a. doi: 10.1109/ICSE48619.2023.00207.

Zhihong Liu, Qing Liao, Wenchao Gu, and Cuiyun Gao. Software vulnerability detection with GPT and in-context learning. In 2023 8th International Conference on Data Science in Cyberspace (DSC), pp. 229–236, 2023b. doi: 10.1109/DSC59305.2023.00041.

Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. WizardCoder: Empowering code large language models with Evol-Instruct, 2023.

Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, Sayak Paul, and Benjamin Bossan. PEFT: State-of-the-art parameter-efficient fine-tuning methods. https://github.com/huggingface/peft, 2022.

Erik Nijkamp, Hiroaki Hayashi, Caiming Xiong, Silvio Savarese, and Yingbo Zhou. CodeGen2: Lessons for training LLMs on programming and natural languages, 2023.

Serena E.
Ponta, Henrik Plate, Antonino Sabetta, Michele Bezzi, and Cédric Dangremont. A manually-curated dataset of fixes to vulnerabilities of open-source software. In Proceedings of the 16th International Conference on Mining Software Repositories, MSR '19, pp. 383–387. IEEE Press, 2019. doi: 10.1109/MSR.2019.00064.

Moumita Das Purba, Arpita Ghosh, Benjamin J. Radford, and Bill Chu. Software vulnerability detection using large language models. In 2023 IEEE 34th International Symposium on Software Reliability Engineering Workshops (ISSREW), pp. 112–119, 2023. doi: 10.1109/ISSREW60843.2023.00058.

Shichao Wang, Yun Zhang, Lingfeng Bao, Xin Xia, and Minghui Wu. VCMatch: A ranking-based approach for automatic security patches localization for OSS vulnerabilities. In 2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), pp. 589–600, 2022. doi: 10.1109/SANER53432.2022.00076.

J. Zhang, Z. Liu, X. Hu, X. Xia, and S. Li. Vulnerability detection by learning from syntax-based execution paths of code. IEEE Transactions on Software Engineering, 49(08):4196–4212, Aug 2023. doi: 10.1109/TSE.2023.3286586.

Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Lei Shen, Zihan Wang, Andi Wang, Yang Li, Teng Su, Zhilin Yang, and Jie Tang. CodeGeeX: A pre-trained model for code generation with multilingual benchmarking on HumanEval-X. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '23, pp. 5673–5684, New York, NY, USA, 2023. Association for Computing Machinery. doi: 10.1145/3580305.3599790.

Xin Zhou, Sicong Cao, Xiaobing Sun, and David Lo. Large language model for vulnerability detection and repair: Literature review and the road ahead. ArXiv, abs/2404.02525, 2024. URL https://api.semanticscholar.org/CorpusID:268876353.
A DISTRIBUTION OF CWES

Figure 1: Distribution of the top 19 CWE categories and others in the test set with P3. (Bar chart; after the "Other" category, the most frequent categories are CWE-862, CWE-200, CWE-863, CWE-352, CWE-89, CWE-611, CWE-79, CWE-22, CWE-776, CWE-264, CWE-20, CWE-835, CWE-295, CWE-287, CWE-203, CWE-755, CWE-444 and CWE-119.)

Figure 2: Distribution of the top 19 CWE categories and others in the test set without P3. (Bar chart; after the "Other" category, the most frequent categories are CWE-79, CWE-611, CWE-20, CWE-22, CWE-264, CWE-502, CWE-200, CWE-94, CWE-863, CWE-287, CWE-74, CWE-78, CWE-284, CWE-276, CWE-755, CWE-89, CWE-269 and CWE-352.)
2402.00657 Pre-training by Predicting Program Dependencies for Vulnerability Analysis Tasks ZhongxinLiu ZhijieTang JunweiZhang TheStateKeyLaboratoryof ZhejiangUniversity ZhejiangUniversity BlockchainandDataSecurity, China China ZhejiangUniversity tangzhijie@zju.edu.cn jw.zhang@zju.edu.cn China liu_zx@zju.edu.cn XinXia∗ XiaohuYang Huawei TheStateKeyLaboratoryof China BlockchainandDataSecurity, xin.xia@acm.org ZhejiangUniversity China yangxh@zju.edu.cn ABSTRACT dependenceanalysis.ExperimentalresultsshowthatPDBERTben- Vulnerabilityanalysisiscrucialforsoftwaresecurity.Inspiredby efitsfromCDPandDDP,leadingtostate-of-the-artperformance thesuccessofpre-trainedmodelsonsoftwareengineeringtasks, onthethreedownstreamtasks.Also,PDBERTachievesF1-scores thisworkfocusesonusingpre-trainingtechniquestoenhancethe ofover99%and94%forpredictingcontrolanddatadependencies, understandingofvulnerablecodeandboostvulnerabilityanalysis. respectively,inpartialandcompletefunctions. Thecodeunderstandingabilityofapre-trainedmodelishighly CCSCONCEPTS relatedtoitspre-trainingobjectives.Thesemanticstructure,e.g., controlanddatadependencies,ofcodeisimportantforvulnerabil- •Softwareanditsengineering→Languagefeatures;•Secu- ityanalysis.However,existingpre-trainingobjectiveseitherignore rityandprivacy→Vulnerabilitymanagement;•Computing suchstructureorfocusonlearningtouseit.Thefeasibilityand methodologies→Knowledgerepresentationandreasoning. 
benefitsoflearningtheknowledgeofanalyzingsemanticstructure havenotbeeninvestigated.Tothisend,thisworkproposestwo KEYWORDS novelpre-trainingobjectives,namelyControlDependencyPredic- SourceCodePre-training,ProgramDependenceAnalysis,Vulnera- tion(CDP)andDataDependencyPrediction(DDP),whichaimto bilityDetection,VulnerabilityClassification,VulnerabilityAssess- predictthestatement-levelcontroldependenciesandtoken-level ment datadependencies,respectively,inacodesnippetonlybasedon itssourcecode.Duringpre-training,CDPandDDPcanguidethe ACMReferenceFormat: modeltolearntheknowledgerequiredforanalyzingfine-grained ZhongxinLiu,ZhijieTang,JunweiZhang,XinXia,andXiaohuYang.2024. dependenciesincode.Afterpre-training,thepre-trainedmodel Pre-trainingbyPredictingProgramDependenciesforVulnerabilityAnal- ysisTasks.In2024IEEE/ACM46thInternationalConferenceonSoftware canboosttheunderstandingofvulnerablecodeduringfine-tuning Engineering(ICSE’24),April14–20,2024,Lisbon,Portugal.ACM,NewYork, andcandirectlybeusedtoperformdependenceanalysisforboth NY,USA,13pages.https://doi.org/10.1145/3597503.3639142 partialandcompletefunctions.Todemonstratethebenefitsofour pre-trainingobjectives,wepre-trainaTransformermodelnamed 1 INTRODUCTION PDBERTwithCDPandDDP,fine-tuneitonthreevulnerability analysistasks,i.e.,vulnerabilitydetection,vulnerabilityclassifica- tion,andvulnerabilityassessment,andalsoevaluateitonprogram Softwarevulnerabilitiesareflawsorweaknessesthatcouldbe exploitedto violatesecuritypolicies[54]. Itiscrucial todetect, categorizeandassessvulnerabilities.Duetotherapidincreaseinthe ∗Correspondingauthor. 
numberofsoftwarevulnerabilitiesandthesuccessofdeeplearning techniques,researchershaveproposeddiversedeep-learning-based Permissiontomakedigitalorhardcopiesofallorpartofthisworkforpersonalor approachestoautomatevulnerabilityanalysis,suchasvulnerability classroomuseisgrantedwithoutfeeprovidedthatcopiesarenotmadeordistributed forprofitorcommercialadvantageandthatcopiesbearthisnoticeandthefullcitation detection[14,68],classification[8,70],patchidentification[66,69] onthefirstpage.Copyrightsforcomponentsofthisworkownedbyothersthanthe andassessment[37,38],andachievedpromisingresults. author(s)mustbehonored.Abstractingwithcreditispermitted.Tocopyotherwise,or Recently,inspiredbytheimpressiveeffectivenessofpre-training republish,topostonserversortoredistributetolists,requirespriorspecificpermission and/orafee.Requestpermissionsfrompermissions@acm.org. largemodelsinthenaturallanguageprocessing(NLP)field[10, ICSE’24,April14–20,2024,Lisbon,Portugal 17,50],researchersproposedtopre-trainlargemodelsonlarge- ©2024Copyrightheldbytheowner/author(s).PublicationrightslicensedtoACM. scalecode-relatedcorporaforcapturingthecommonknowledge ACMISBN979-8-4007-0217-4/24/04...$15.00 https://doi.org/10.1145/3597503.3639142 ofprogramminglanguages.Werefertosuchmodelsaspre-trained 4202 beF 1 ]ES.sc[ 1v75600.2042:viXraICSE’24,April14–20,2024,Lisbon,Portugal ZhongxinLiu,ZhijieTang,JunweiZhang,XinXia,andXiaohuYang codemodels.Pre-trainedcodemodelshaveshownconsistentper- tokenpairsincode,i.e.,DDPsuffersfromseveredataimbalance. formance improvements over the neural models without being Toaddressthisproblem,weleverageamaskingstrategytofilter pre-trained(forshort,non-pre-trainedmodels)onvarioussoftware impossibletokenpairsandenableDDPtofunctioneffectively. |
engineering (SE) tasks, e.g., code search [20], code clone detec- To demonstrate the effectiveness of CDP and DDP, we pre- tion[26],andcodecompletion[57].Motivatedbythesestudies, trainedaTransformermodelnamedPDBERT(ProgramDependence thisworkaimstoboostvulnerabilityanalysisusingpre-training, BERT) on 1.9M C/C++ functions using CDP and DDP as well withafocusonhelpingneuralmodelsbetterunderstandvulnerable asMLM.CDPandDDPbringseveralbenefitstoPDBERT:First, codethroughpre-trainingtechniques. PDBERTonlyrequiressourcecodeasinputandiscapableofhan- The code understanding ability of a pre-trained code model dlingthecodesnippetsthatcannotbecorrectlyparsed,e.g.,par- largelyhingesonitspre-trainingobjectives.Effectiveobjectives tialcode.Second,PDBERTincorporatestheknowledgeofend-to- canguidethemodeltolearnthepriorknowledgethatishelpfulfor endprogramdependenceanalysis,whichismoregeneralthanthe downstreamtasks.Earlypre-trainedcodemodels[20,33]directly knowledgeofusingprogramdependencies,andcanbetterboost usethepre-trainingobjectivesdesignedfornaturallanguages(NL), diversedownstreamtasks.Third,PDBERTcanbedirectlyusedto e.g.,MaskedLanguageModel(MLM)[17],ignoringthesyntactic performprogramdependenceanalysisforpartialandcompletecode. and semantic structure of code. Recently, some researchers [18, Pleasenotethatexistingstaticanalysistools,e.g.,Joern[64],can- 62]specificallydesignseveralpre-trainingtasks,e.g.,predicting notcorrectlyderiveprogramdependenciesinpartialcodewithout AbstractSyntaxTree(AST)nodetypes,tocapturethesyntactic manualintervention.Thus,althoughprogramdependenceanalysis structureofcode. isonlyoneofPDBERT’susagescenarios,PDBERTcomplements Thesemanticstructureofcode,e.g.,controlanddatadependen- staticanalysistoolswiththeabilitytoanalyzepartialcode. 
cies [21], plays an important role in vulnerability analysis [12, 14, 68]. For example, to detect a use-after-free vulnerability, a model needs to identify whether the argument of a memory-releasing function call is used (which relies on data dependencies) in a statement that may be executed after this call (which is related to control dependencies). To the best of our knowledge, only two pre-trained code models, i.e., Code-MVP and GraphCodeBERT, consider the semantic structure of code. Code-MVP takes the control flow graph (CFG) as input during pre-training. GraphCodeBERT takes as input data flow edges during pre-training and inference. On the other hand, prior studies have proposed some non-pre-trained models that consider semantic structure for vulnerability analysis [14, 44, 68]. These approaches first extract control and data dependencies, i.e., program dependencies, using static analysis tools, e.g., Joern [64], and then train a neural model to learn from the extracted dependencies.

These pre-trained and non-pre-trained models have two main limitations. First, they require the input code to be correctly parsed to extract its semantic structure, and cannot handle partial code (e.g., incomplete functions). However, the ability to analyze partial code is useful and valuable for some real-world scenarios, e.g., detecting vulnerable code snippets shared on Stack Overflow [60]. Second, they target learning representation from the extracted semantic structure, or learning to use such structure. Considering that different downstream tasks can utilize the semantic structure in various ways, the generality of such representation can be limited.

To this end, this work introduces the idea of pre-training code models to incorporate the knowledge required for end-to-end program dependence analysis (i.e., from source code to program dependencies), and proposes two novel pre-training objectives, i.e., Control Dependency Prediction (CDP) and Data Dependency Prediction (DDP), to instantiate this idea. Specifically, CDP and DDP ask the model to predict all statement-level control dependencies and token-level data dependencies in a program relying solely on source code. Different from existing pre-training objectives, CDP and DDP target guiding the model to learn the representation that encodes the semantic structure of code only based on source code. However, it is not easy to train DDP, because the number of token-level data dependencies is much lower than the number of all possible token pairs (cf. Section 3.4.3).

We evaluate PDBERT in both intrinsic and extrinsic ways. For intrinsic evaluation, we directly apply PDBERT to analyze control and data dependencies for both partial and complete functions based only on their source code. Experimental results show that the F1-scores of PDBERT for identifying statement-level control dependencies and token-level data dependencies are over 99% and 95%, respectively, for partial functions, and over 99% and 94% for complete functions. These results indicate that PDBERT successfully learns the knowledge of program dependence and can effectively analyze both partial and complete functions. Moreover, the throughput of PDBERT is 23 times higher than that of the state-of-the-art program dependence analysis tool Joern, indicating that PDBERT is more suitable for the use cases where low levels of imprecision are tolerable and throughput matters more. For extrinsic evaluation, we fine-tune and evaluate PDBERT on three vulnerability analysis tasks, i.e., vulnerability detection, vulnerability classification, and vulnerability assessment. PDBERT benefits from CDP and DDP and outperforms the best-performing baselines by 5.9%-9.0% on the three tasks.

In summary, the contributions of this work are as follows:

• We introduce the idea of pre-training code models to incorporate the knowledge required for end-to-end program dependence analysis, and propose two novel pre-training objectives, i.e., CDP and DDP, to instantiate this idea.

• We have built a pre-trained model named PDBERT with CDP, DDP and MLM, which, to the best of our knowledge, is the first neural model that can analyze statement-level control dependencies and token-level data dependencies.

• We conduct both intrinsic and extrinsic evaluations, which show that PDBERT can accurately and efficiently identify program dependencies in both partial and complete functions, and can facilitate diverse vulnerability analysis tasks. Our replication package is available at [2, 3].
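To make the use-after-free reasoning from the introduction concrete, the following toy Python sketch checks whether a statement reuses a freed definition (a data dependency) after the memory-releasing call, and whether that reuse is guarded by a predicate (a control dependency). The statements, dependence edges, and helper function are all hypothetical illustrations, not part of PDBERT:

```python
# Hypothetical toy PDG for a use-after-free pattern. Statement numbers give
# program order; edges are hand-written, not produced by a real analyzer.
stmts = {
    1: "char *p = malloc(8);",
    2: "free(p);",
    3: "if (cond)",
    4: "use(p);",             # data-depends on stmt 1, control-depends on stmt 3
}
data_dep = {2: {1}, 4: {1}}   # stmt -> set of defining stmts it uses
control_dep = {4: {3}}        # stmt -> predicates it is control-dependent on

def use_after_free(free_stmt):
    """Statements that reuse the freed definition after the free call,
    paired with whether the reuse sits under a predicate (control dep)."""
    freed = data_dep[free_stmt]
    return sorted((s, bool(control_dep.get(s)))
                  for s, uses in data_dep.items()
                  if s != free_stmt and (uses & freed) and s > free_stmt)

print(use_after_free(2))  # -> [(4, True)]
```

The check needs both edge types: the data dependency links `use(p)` back to the freed definition, and the control dependency tells whether the dangerous use is conditionally guarded, mirroring the model's need to learn both kinds of dependencies.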
2 PRELIMINARY

This section introduces the Program Dependence Graph (PDG) and describes why handling partial code is useful.

Pre-training by Predicting Program Dependencies for Vulnerability Analysis Tasks. ICSE '24, April 14-20, 2024, Lisbon, Portugal.

[Figure 1: A sample function and its PDG. The function is:
void foo(int x) {
  int y = source();
  if (y < MAX) {
    int z = y + x;
    sink(z);
  }
}
Dotted edges denote control dependencies; solid edges denote data dependencies, labeled with the variables x, y and z.]

2.1 Program Dependence Graph

Program dependencies, including control and data dependencies, reflect the semantic structure of code. A program dependence graph (PDG) [21] explicitly represents control and data dependencies with different types of edges and is frequently adopted by previous studies for vulnerability analysis [12, 14, 68]. Figure 1 presents a sample function and its PDG. In a PDG, each node denotes a statement or a predicate, and there are two types of edges: control dependency edges and data dependency edges. Control dependency edges (dotted green lines in Figure 1) present the control flow relationships between predicates and statements, and can be used to infer the control conditions on which a statement depends. Data dependency edges (solid red lines in Figure 1) present the define-use relationships, and each of them is labeled with a variable that is defined in the source node and used in the target node. By predicting control and data dependencies in programs, the model can learn to strengthen the connections between computationally related parts of a program and better capture the semantic structure of code.

2.2 Usage Scenario of Handling Partial Code

Compared to prior work, one advantage of PDBERT is that it can handle partial code, including performing program dependence analysis and downstream tasks on partial code. One usage scenario of this feature is analyzing the code snippets on Stack Overflow. Prior work [5, 6] has shown that novices and even more senior developers copy code snippets from Stack Overflow into production software. If the copied snippets are vulnerable, their production software will be prone to attacks. Specifically, Fischer et al. [22] observed that insecure code snippets from Stack Overflow are copied into popular Android applications installed by millions of users. From 72K C++ code snippets used in at least one GitHub repository, Verdi et al. [60] found 99 vulnerable code snippets of 31 types, and these snippets had affected 2,859 GitHub projects. Considering these facts, it is essential and valuable to analyze the code snippets on Stack Overflow before they are scattered to other places. We argue that PDBERT can be used as a fundamental tool/model in this scenario.

3 APPROACH

This section first describes how PDBERT represents the input (§3.1) and how to construct ground truth for CDP and DDP (§3.2 and §3.3). Then, we elaborate on the three pre-training tasks used by PDBERT (§3.4) and the usages of PDBERT (§3.5).

ICSE '24, April 14-20, 2024, Lisbon, Portugal. Zhongxin Liu, Zhijie Tang, Junwei Zhang, Xin Xia, and Xiaohu Yang.

3.1 Input Representation

PDBERT only requires source code as input. Given the source code C of a program, PDBERT first uses a subword-based tokenizer, e.g., a Byte-Pair Encoding (BPE) tokenizer [53], to tokenize it into a sequence of tokens and prepends this sequence with a special token [CLS]. The resulting token sequence, denoted as T = [t_cls, t_1, ..., t_n], is then fed into a multi-layer Transformer model to obtain the contextual embedding of each token. We denote these contextual embeddings as H^t = [h^t_cls, h^t_1, ..., h^t_n]. For each token t_i, its char span is represented as S_{t_i} = [Il_{t_i}, Ir_{t_i}] and recorded, where Il_{t_i} and Ir_{t_i} denote the start and end character indices of t_i in the source code.

3.2 Program Dependency Extraction

To construct ground truth for CDP and DDP, we need to extract control and data dependencies in programs. Given a program, we first leverage a static analysis tool named Joern [64] to construct its PDG and AST based on its source code, and combine them into a joint graph. Then, we extract control and data dependencies from this graph.

3.2.1 Control Dependency Extraction. As described in Section 2.1, the control dependency edges in the PDG represent control dependencies. We encode them at the statement level into a matrix G^c in R^{m_c x m_c}, where m_c denotes the maximum number of nodes in the PDG and is set to 50 by default. G^c_{i,j} in {0, 1} denotes whether the j-th PDG node (statement) is control-dependent on the i-th PDG node (predicate).

3.2.2 Data Dependency Extraction. Similar to control dependencies, statement-level data dependencies can be extracted based on the data dependency edges in the PDG and encoded as a matrix. However, data dependencies are naturally defined on variables. Such a matrix is coarse-grained, ignoring the information of variables. Therefore, we extract and encode data dependencies at a finer level to consider variable information and capture finer-grained code structure. Recall that each data dependency is represented by a data dependency edge associated with a variable var in the PDG. For the i-th data dependency edge D_i, we first identify the two AST nodes that correspond to the var defined in D_i's source PDG node and the var used in D_i's target PDG node from the joint graph. We refer to them as the start node s_i and the end node e_i of D_i, respectively. Then, we represent D_i as the combination of s_i and e_i, i.e., D_i = [s_i, e_i]. Consequently, the data dependencies in the program are represented as G-hat^d = [D_1, D_2, ...].
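The statement-level encoding of Section 3.2.1 can be sketched in a few lines. The edge list below is a made-up example rather than real Joern output, and plain Python lists stand in for whatever tensor type the actual implementation uses:

```python
# Sketch of Section 3.2.1: control dependency edges (i, j) from a PDG,
# meaning node j is control-dependent on node i, are written into an
# m_c x m_c ground-truth matrix G^c.
m_c = 50                               # maximum number of PDG nodes (default)
cdg_edges = [(2, 3), (2, 4)]           # e.g. nodes 3 and 4 depend on predicate node 2

G_c = [[0] * m_c for _ in range(m_c)]  # initialized with zeros
for i, j in cdg_edges:
    G_c[i][j] = 1

print(G_c[2][3], G_c[3][2])  # -> 1 0 (control dependencies are directional)
```

Note the asymmetry: G_c[2][3] is set while G_c[3][2] stays zero, matching the directional definition of control dependence.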
3.3 Token-Level Data Dependency Construction

DDP aims to produce token-level data dependencies in the input program. In Section 3.2, data dependencies have been encoded as a set of AST node pairs G-hat^d. Each AST node corresponds to a code element. But each code element may be mapped into multiple tokens in T due to the subword-based tokenizer. Therefore, we need to refine G-hat^d to represent token-level data dependencies. As illustrated by Figure 2, we achieve this by converting G-hat^d into a matrix G^d, where G^d_{i,j} in {0, 1} denotes whether the j-th token is data-dependent on the i-th token in T, as follows.

[Figure 2: Constructing the ground truth G^d of DDP. The example source code is `int temp_flag = flag;\n printf("%d", temp_flag);`. Joern reports the char spans of the variables (e.g., [4,12] and [36,44] for the two occurrences of temp_flag), the tokenizer reports the tokens with their char spans and indices, overlapping char spans link AST nodes to tokens (multi-to-multi), and the mapping is reduced to the token-level data dependency Ġtemp (token index 1) -> Ġtemp (token index 13).]

Mapping AST nodes to tokens. We retrieve the code element that corresponds to each AST node a_i in G-hat^d, and map the code element into its char span S_{a_i} = [Il_{a_i}, Ir_{a_i}], where Il_{a_i} and Ir_{a_i} denote the indices of a_i's start and end characters in the source code, respectively. For example, in Figure 2, the char span of the first temp_flag is [4,12]. Based on the char spans of a_i and the tokens in T, we map a_i into a subsequence of T, namely T_{a_i}. Specifically, we find the tokens in T of which the char spans are overlapped with a_i's char span, and gather these tokens in order to construct T_{a_i} = [t^{a_i}_1, t^{a_i}_2, ...]. For example, in Figure 2, each temp_flag is mapped to [Ġtemp, _, flag], where Ġ is used by the tokenizer to denote the space before a word. After this mapping, each data dependency edge D_i is represented as a pair of token sequences T_{s_i} and T_{e_i}, where s_i and e_i are the start and end nodes of D_i.

Generating token edges. Based on the new representations of data dependency edges, each token-level data dependency can be represented by the edges between the tokens in T_{s_i} and the tokens in T_{e_i}. Intuitively, we can connect each token in T_{s_i} with each token in T_{e_i}. However, this will produce plenty of token-level edges, and most of them are unnecessary. Therefore, for each data dependency edge D_i, we choose to connect only the first tokens in T_{s_i} and T_{e_i}, which effectively represents this data dependency and produces only one token-level edge, i.e., t^{s_i}_1 -> t^{e_i}_1. For example, in Figure 2, the first tokens of the two temp_flag are connected. After this step, each data dependency edge D_i is represented as a pair of tokens, i.e., D_i = [t^{s_i}_1, t^{e_i}_1].

Constructing G^d. We initialize G^d in R^{m_d x m_d} as a matrix filled with zeros, where m_d is the max sequence length of the model. Each element in G^d represents the data dependency between two tokens in the code. For each data dependency edge D_i = [t^{s_i}_1, t^{e_i}_1], we retrieve the token indices of t^{s_i}_1 and t^{e_i}_1 in T, assuming that they are x and y, and set G^d_{x,y} to 1. For example, in Figure 2, x = 1 and y = 13, so G^d_{1,13} is set to 1. G^d will be used as the ground truth of DDP.

3.4 Pre-training Tasks

PDBERT is pre-trained using three tasks: Masked Language Model (MLM), statement-level Control Dependency Prediction (CDP) and token-level Data Dependency Prediction (DDP), as shown in Figure 3. MLM is widely used by existing pre-trained models [20, 26, 46]. When used for programming languages (PL), it can guide the model to capture the naturalness and, to some extent, the syntactic structure of code [61]. CDP and DDP can help the model learn the knowledge about the semantic structure of code, strengthening the attention between computationally related code elements. The three tasks complement each other.

3.4.1 Masked Language Model (MLM). MLM requires the model to reconstruct randomly masked tokens from the corrupted input sequence. Before being encoded by the Transformer model, a certain proportion of tokens in T are sampled and replaced with a special token [MASK]. Based on the contextual embeddings H^t produced by the Transformer model, we use a two-layer MLP (Multi-Layer Perceptron) with the Softmax function to predict the original token of each [MASK], and calculate the loss of MLM, as follows:

P(t_i | h^t_i) = Softmax_{t_i in V}(MLP(h^t_i))    (1)

L_mlm = (1 / |M^t|) * sum_{i in M^t} -log P(t_i | h^t_i)    (2)

where V is the vocabulary of the model, Softmax_{t_i in V} denotes the probability of producing t_i, and M^t are the indices of the sampled tokens. Following prior work [20, 26, 46], we sample 15% of the tokens to mask. We also replace 10% of them with random tokens and unmask another 10% of them to alleviate the inconsistency between pre-training and fine-tuning.

3.4.2 Statement-Level Control Dependency Prediction (CDP). CDP aims to predict all the statement-level control dependencies in a program based on its source code. As described in Section 3.2.1, we encode the ground truth of CDP as a matrix G^c. To predict G^c, we encode each PDG node into a feature vector based on the contextual embeddings H^t, and predict whether there is a control dependency between two PDG nodes based on their feature vectors. In detail, for each PDG node q_k, we first gather the contextual embeddings of the tokens belonging to q_k and average them to obtain a vector h-hat^q_k. Next, we input h-hat^q_k into a one-layer MLP activated by the ReLU function for dimension reduction and obtain a vector h^q_k, as follows:

h-hat^q_k = (1 / |q_k|) * sum_{t_i in q_k} h^t_i    (3)

h^q_k = ReLU(MLP(h-hat^q_k))    (4)

where |q_k| is the number of tokens belonging to q_k. h^q_k is regarded as the feature vector of q_k. Then, we use a bilinear layer, a commonly used module for relation predictions [55], to predict the probability P^c_{i,j} that a PDG node q_j is control-dependent on another PDG node q_i, as follows:

P^c_{i,j} = sigma(h^q_i^T W_c h^q_j + b_c)    (5)

where W_c and b_c are trainable parameters and sigma is the Sigmoid function. Because the inputs of the bilinear layer are not commutative, the predicted control dependencies between two PDG nodes are asymmetric, i.e., P^c_{i,j} is not necessarily equal to P^c_{j,i}. This is in accord with the fact that control dependencies are directional. Finally, we calculate the cross entropy as the loss of CDP based on the predicted probabilities and the ground truth G^c of size |G^c|:

L_cdp = (1 / |G^c|) * sum_{i,j} [-G^c_{i,j} log P^c_{i,j} - (1 - G^c_{i,j}) log(1 - P^c_{i,j})]    (6)

[Figure 3: The pre-training tasks of PDBERT. The source code is fed into the Transformer, whose contextual token embeddings are consumed by three task-specific headers: an MLM header that predicts masked tokens, a CDP header that predicts statement-level control dependencies, and a DDP header that predicts token-level data dependencies. After pre-training, the model is migrated to downstream tasks.]
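Equations (3)-(5) can be sketched end-to-end in a few lines. The sketch below uses pure Python with tiny made-up sizes, random weights, and hand-picked token groupings; the real CDP header operates on PDBERT's 768-dimensional contextual embeddings with trained parameters:

```python
import math
import random

# Illustrative sizes and weights only; nothing here is trained.
random.seed(0)
d, dr = 8, 4                                   # hidden size, reduced size

def matvec(W, x):
    """Plain matrix-vector product."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

H = [[random.gauss(0, 1) for _ in range(d)] for _ in range(10)]    # token embeddings H^t
node_tokens = [[0, 1, 2], [3, 4], [5, 6, 7]]                       # tokens of each PDG node
W_mlp = [[random.gauss(0, 1) for _ in range(d)] for _ in range(dr)]
W_c = [[random.gauss(0, 1) for _ in range(dr)] for _ in range(dr)]
b_c = 0.0

def node_vec(tokens):
    # Eq. (3): average the token embeddings; Eq. (4): ReLU(MLP(.))
    avg = [sum(H[t][k] for t in tokens) / len(tokens) for k in range(d)]
    return [max(v, 0.0) for v in matvec(W_mlp, avg)]

h_q = [node_vec(ts) for ts in node_tokens]     # feature vector per PDG node

def p_ctrl(i, j):
    # Eq. (5): sigmoid(h_i^T W_c h_j + b_c); asymmetric in (i, j)
    s = sum(hi * wj for hi, wj in zip(h_q[i], matvec(W_c, h_q[j]))) + b_c
    return 1.0 / (1.0 + math.exp(-s))

print(0.0 <= p_ctrl(0, 1) <= 1.0)  # -> True
```

Because W_c multiplies h_q[j] but not h_q[i], swapping the arguments generally yields a different score, which is why the predicted dependencies can be directional.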
3.4.3 Token-Level Data Dependency Prediction (DDP). DDP targets predicting all the token-level data dependencies in a program only based on its source code. As described in Section 3.3, the ground truth of DDP is encoded as a matrix G^d. The contextual embeddings H^t are naturally the feature vectors of the tokens in T. To predict G^d, DDP also uses a one-layer MLP with the ReLU function for dimension reduction and a bilinear layer for prediction, as follows:

h-hat^t_i = ReLU(MLP(h^t_i))    (7)

P^d_{i,j} = sigma(h-hat^t_i^T W_d h-hat^t_j + b_d)    (8)

where W_d and b_d are trainable parameters. Nevertheless, compared to G^c, G^d focuses on token-level relations and there are usually a large number of elements in G^d. For example, if the length of T is 512, the size of G^d would be over 262,144. Even worse, considering that the number of data dependencies is much lower than that of all possible token pairs in a program, G^d is usually highly sparse. If DDP directly predicts all elements in G^d, the model will mostly learn from the large proportion of zeros in G^d and constantly predict that there is no data dependency between two tokens.

To alleviate this problem, we propose a token-type-based masking strategy to mask the values associated with one or more non-Identifiers in G^d. Specifically, we first use a lexer to tokenize the input program C into a sequence of code tokens T' = [t'_1, t'_2, ..., t'_{|T'|}] with their token types (e.g., Keyword and Identifier) labeled and char spans recorded. Note that T' is different from the token sequence T produced by the subword-based tokenizer, and each code token t'_i may be overlapped with multiple tokens in T. Next, for each code token t'_i, we identify its overlapping tokens in T based on their char spans and gather such tokens in order into a token sequence T_{t'_i}. The token type of t'_i is propagated to each token in T_{t'_i}. Then, we mask non-Identifier tokens in T and calculate the loss of DDP as follows:

L_ddp = sum_{i,j} m^d_i m^d_j [-G^d_{i,j} log P^d_{i,j} - (1 - G^d_{i,j}) log(1 - P^d_{i,j})] / (sum_{k=1}^{n} m^d_k)^2    (9)

where m^d_i in {0, 1} indicates whether t_i's token type is Identifier. This formula means that the prediction loss of G^d_{i,j} will be masked if the token type of either t_i or t_j is not Identifier. It is also possible to introduce more and stronger priors to tackle this problem, e.g., only predicting data dependencies between identical identifiers. We only consider token types, which can help the model learn more knowledge related to program dependence analysis, e.g., the knowledge of recognizing identical identifiers.

Besides this masking strategy, another way to consider token types during pre-training is to explicitly provide identifier pairs to the model for prediction. However, during inference, this way requires additional tools to gather identifier pairs, while the masking strategy can be easily disabled to enable end-to-end program dependence analysis. Also, the masking strategy implicitly guides the model to focus on identifier pairs, which can help the model learn more syntactic knowledge of code.

3.5 The Usages of PDBERT

PDBERT takes as input only a code snippet, which can be either a partial or a complete function. Like existing pre-trained models [17, 20, 26], PDBERT outputs a sequence of contextual embeddings by itself. Such embeddings are converted to task output, such as a classification label or a token sequence, by a header (i.e., an output layer). During pre-training, different headers are connected to PDBERT for different pre-training tasks, as shown in Figure 3. After pre-training, we can simply connect PDBERT with the headers of CDP and DDP to infer the statement-level control dependencies and the token-level data dependencies in a code snippet. For a downstream task, a new header (usually an MLP) is connected to PDBERT and fine-tuned with PDBERT on a task-specific dataset. Each header is task-specific and is trained to contain the information about producing task output based on the contextual embeddings output by PDBERT. Such information is usually unhelpful for other tasks [15]. As the headers of MLM, CDP and DDP are task-specific and are not used for downstream tasks, we argue they have no side effects. Please note that PDBERT only takes as input the source code and does not require parsing the input program or constructing its PDG during fine-tuning. We hypothesize that the knowledge of program dependence has been learned and absorbed by PDBERT during pre-training, and such knowledge can boost the downstream tasks that are sensitive to program dependencies.

4 PRE-TRAINING SETUP

This section describes the dataset and the configurations used to pre-train our model.

Table 1: Intrinsic evaluation results on partial code in terms of F1-score.

Dependency   K=5     K=10    K=15    K=20    K=25    K=30
Control      98.99   99.29   99.30   99.25   99.23   99.21
Data         97.38   96.61   96.05   95.63   95.44   95.08
Overall      97.69   97.51   97.29   97.07   97.00   96.84

4.1 Pre-training Dataset

In this work, we target C/C++ vulnerabilities. We use the dataset collected by Hanif et al. [29] as our pre-training dataset. It contains over 2.28M C/C++ functions either extracted from the top 1060 C/C++ open-source projects on GitHub or gathered from the Draper dataset [52]. We shuffle these functions and partition them into the training, validation and test sets, referred to as PT_train, PT_val and
PT_test, respectively. PT_val is used for hyperparameter tuning during pre-training. PT_test is leveraged to evaluate PDBERT on program dependence analysis. To construct ground truth for CDP and DDP, we leverage Joern to build the AST and PDG of each function. Some functions in this dataset cannot be analyzed by Joern in a reasonable time, and are filtered out by setting the timeout to 30 minutes. In addition, we deduplicate PT_val and PT_test and remove the functions in them that also appear in PT_train. Finally, PT_train, PT_val and PT_test contain about 1.9M, 155.0K and 60.4K C/C++ functions, respectively.

4.2 Pre-training Model Configurations

Following CodeBERT [20], PDBERT uses a 12-layer Transformer with 768-dimensional hidden states and sets the max sequence length to 512. During pre-training and fine-tuning, we truncate the code snippets with more than 512 tokens or m_c statements to fit the model. CDP and DDP only consider the control and data dependencies within the tokens input to the model. We initialize PDBERT with the parameters of CodeBERT to accelerate the training process, following GraphCodeBERT [26]. The pre-training objective of PDBERT is to jointly minimize the losses of the three pre-training tasks:

L = a_1 L_cdp + a_2 L_ddp + a_3 L_mlm    (10)

where a_1, a_2 and a_3 are the weights of the three losses. In our implementation, we set a_1 = 5, a_2 = 20 and a_3 = 1 based on grid search on PT_val. PDBERT is pre-trained on 4 Nvidia RTX 3090 GPUs for 10 epochs with batch size 128. We use the Adam optimizer [35] with an initial learning rate of 1e-4 and apply the polynomial decay scheduler (p = 2). It takes approximately 72 hours to finish pre-training.

Table 2: Intrinsic evaluation results on complete functions in terms of F1-score.

Dependency   (0,10]   (10,20]   (20,30]   (30,+inf)
Control      99.91    99.82     99.44     99.23
Data         97.59    96.35     95.28     94.31
Overall      98.07    97.56     96.93     96.46

*(0,10] denotes the subset where the LOC of each function is over 0 and no more than 10, and so forth.

5 EVALUATION

To demonstrate the effectiveness of PDBERT, we conduct both intrinsic and extrinsic evaluations. For the intrinsic evaluation, we directly use PDBERT to perform program dependence analysis for both partial and complete functions. For the extrinsic evaluation, we fine-tune and evaluate PDBERT on three function-level vulnerability analysis tasks, i.e., vulnerability detection, vulnerability classification, and vulnerability assessment. Please note that since some baselines cannot handle partial functions, the extrinsic evaluation is conducted on complete functions.

5.1 Intrinsic Evaluation

5.1.1 Motivation. To understand to what extent PDBERT has learned the knowledge of program dependence, we apply PDBERT to perform CDP and DDP without further training or fine-tuning.

5.1.2 Experimental Setup. We conduct the intrinsic evaluation on PT_test, which is never used during pre-training. PDBERT takes as input only a code snippet and predicts all the elements in its G^c and G^d. During evaluations, we do not assume that the types of code tokens are available and thus do not use the token-type-based masking strategy (cf. Section 3.4.3). F1-score is used as the evaluation metric, which is calculated by flattening and concatenating the predicted graphs of all test samples.

We first evaluate the feasibility of PDBERT in analyzing partial code, as this is a unique benefit of PDBERT. Specifically, we extract the first K consecutive statements of each function in PT_test to craft sub test sets. K is set to {5, 10, 15, 20, 25, 30}. For each sub test set, a unique K is used and the functions with fewer than K statements are ignored. PDBERT takes as input each partial code snippet and predicts the program dependencies among its K statements. To further understand the knowledge learned by PDBERT, we also evaluate PDBERT's effectiveness in analyzing complete functions. Following CodeBERT, the max sequence length of PDBERT is set to 512. Thus, we filter out the complete functions with over 512 tokens in PT_test, and over 83% of the functions in PT_test are kept. We partition the remaining functions into several non-overlapping subsets based on their Lines of Code (LOC) and apply PDBERT to them, aiming to show PDBERT's effectiveness across functions of varying lengths. Besides, to demonstrate the throughput of PDBERT, we measure the average time cost of PDBERT to perform CDP and DDP for a complete function and compare it with Joern, the state-of-the-art program dependence analysis tool. Specifically, PDBERT is deployed on one Nvidia RTX 3090 GPU to perform predictions with batch size 1. Joern is run on two Intel Xeon 6226R CPUs with 64 cores in total using the default configuration.
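The flattened F1 computation described in the setup above can be sketched as follows; the tiny prediction and ground-truth matrices are toy values, not real model output:

```python
# Sketch of the evaluation metric: the predicted dependency matrices of all
# test samples are flattened and concatenated, then a single F1 is computed
# over the pooled binary entries.
def f1_flattened(pred_mats, gold_mats):
    tp = fp = fn = 0
    for P, G in zip(pred_mats, gold_mats):
        for p_row, g_row in zip(P, G):
            for p, g in zip(p_row, g_row):
                tp += 1 if p and g else 0
                fp += 1 if p and not g else 0
                fn += 1 if not p and g else 0
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

pred = [[[1, 0], [1, 1]]]   # one sample's predicted 2x2 dependency matrix
gold = [[[1, 0], [0, 1]]]   # its ground truth
print(round(f1_flattened(pred, gold), 3))  # tp=2, fp=1, fn=0 -> 0.8
```

Pooling all entries before scoring means large functions contribute more instances than small ones, which matches computing a single F1 over the concatenated graphs rather than averaging per-sample scores.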
5.1.3 Results. Table 1 shows the evaluation results of PDBERT on partial code in terms of F1-score. The results presented in the "Overall" row are calculated on the combination of control and data dependencies. We can see that PDBERT performs very well on CDP, achieving F1-scores of about 99% for all Ks. As for DDP, PDBERT achieves F1-scores of over 95%, which is also impressive. Note that for each code snippet, DDP instances are highly imbalanced. Thus, DDP is more challenging than CDP. The overall F1-scores of PDBERT are over 96% for all Ks, indicating the feasibility and effectiveness of PDBERT in analyzing program dependencies for partial code. We also notice that the control, data and overall F1-scores of PDBERT slightly decrease as K increases. This trend is attributed to the increased difficulty of analyzing long code snippets compared to shorter ones. Please note that existing static analysis tools, e.g., Joern, lack the ability to automatically derive program dependencies in partial code, and thus are unsuitable for this scenario.

Table 2 presents the performance of PDBERT on complete functions, which closely aligns with that on partial code. Across all LOC ranges, PDBERT achieves control, data and overall F1-scores of over 99%, 94% and 96%, respectively, demonstrating the effectiveness and potential utility of PDBERT in performing program dependence analysis on complete functions. Similarly, there is a slight decrease in the control, data, and overall F1-scores as the LOC increases.

Regarding throughput, on average, Joern takes over 460ms to derive the PDG of a complete function, whereas PDBERT accomplishes the same task in just 19ms, making it 23 times faster than Joern. Please note that this comparison may not be entirely fair, as PDBERT is deployed on a GPU while Joern is run on two CPUs. The result only highlights the advantage of PDBERT in throughput and does not mean that PDBERT can replace Joern in all use cases. Nevertheless, it implies that PDBERT is more suitable than Joern for the use cases where low levels of imprecision are tolerable and high throughput matters more.

To conclude, PDBERT has learned the knowledge of program dependence and can accurately extract program dependencies for both partial and complete functions with high throughput.

Table 3: The statistics of datasets.

Dataset   #Func    #Vul    #Non-Vul   LOC Mean   LOC Median   CC Mean   CC Median
ReVeal    22.6K    2.2K    20.4K      37.2       15           7.1       3
Big-Vul   188.6K   10.9K   177.7K     25.0       12           5.9       3
Devign    27.2K    12.4K   14.8K      51.8       26           11.2      5
VC        7.6K     7.6K    N/A        80.5       32           16.7      6
VA        9.9K     9.9K    N/A        79.1       32           16.3      6

*#Func, #Vul and #Non-Vul refer to the numbers of all, vulnerable, and non-vulnerable functions. CC denotes cyclomatic complexity. VC and VA refer to the datasets used for vulnerability classification and vulnerability assessment, respectively.

5.2 Common Baselines and Variants for Extrinsic Evaluation

The extrinsic evaluation aims to investigate whether the pre-training of PDBERT is effective and can boost vulnerability analysis tasks. To this end, we compare PDBERT with pre-trained code models using other pre-training objectives and with non-pre-trained models. As a proof of concept, PDBERT is pre-trained based on an encoder-only Transformer model, because decoder-only models are sub-optimal for understanding tasks [17, 25] and encoder-decoder models require much more data and time to converge [7, 63]. For example, Ahmad et al. [7] spent 2,208 GPU hours and used 727 million functions/documents to pre-train PLBART. Following Ding et al. [18], to perform a fair comparison, we only consider encoder-only pre-trained models as baselines.

Specifically, for pre-trained code models, we use CodeBERT [20], GraphCodeBERT [26], VulBERTa [29] and DISCO [18] as baselines. CodeBERT is widely used in various software engineering tasks [30, 36, 66, 67], and PDBERT is initialized with the parameters of CodeBERT. GraphCodeBERT might be considered somewhat similar to our DDP task (cf. Section 7.1). Comparing PDBERT with GraphCodeBERT can help demonstrate the benefits of DDP and PDBERT. VulBERTa [29] is pre-trained by Hanif et al. with their collected C/C++ functions (cf. Section 4.1) using MLM. DISCO is the state-of-the-art encoder-only pre-trained code model, which uses MLM, node-type MLM (NT-MLM), and a contrastive learning objective for pre-training, and is also evaluated on vulnerability detection. Unfortunately, the pre-trained model of DISCO is not publicly available. Therefore, we can only compare PDBERT with DISCO on vulnerability detection based on the results reported in its paper [18]. There are also other encoder-only pre-trained code models, including CuBERT [33], ContraCode [31] and Code-MVP [62]. But they are pre-trained only on Python or JavaScript programs and hence are not suitable for C/C++ programs. In addition, for most of them, the pre-trained models are not released.

For non-pre-trained models, we adopt a bidirectional LSTM (Bi-LSTM) [24] and a multi-layer Transformer model [59] as baselines. They are frequently used as encoders to handle sequential inputs. To eliminate the Out-of-Vocabulary (OoV) problem [34], their vocabularies and tokenizers are built using the BPE algorithm [53].

We build four variants of PDBERT, namely MLM, MLM+CDP, MLM+DDP and MLM+CDP+DDP. They are pre-trained using the same settings but with different pre-training objectives, as reflected by their names. Comparing their performance can help understand the contributions of different objectives to different tasks.

5.3 Vulnerability Detection

5.3.1 Introduction. Vulnerability detection is a fundamental problem in software security. Following prior work [14, 18], this task aims to predict whether a function is vulnerable or not, i.e., function-level vulnerability detection.

5.3.2 Experimental Setup. We evaluate PDBERT on three widely-used C/C++ function-level vulnerability detection datasets, i.e., ReVeal [14], Big-Vul [19] and Devign [68]. Their statistics are presented in Table 3.

ReVeal is released by the ReVeal paper [14], and is collected from the Linux Debian Kernel and Chromium. This dataset is imbalanced and close to real-world scenarios. We randomly split 70%/10%/20% of the dataset for training, validation and testing, respectively, following prior work [14, 18]. Considering the data imbalance and following prior work [14, 18], we use F1-score as the evaluation metric.

Big-Vul is released by Fan et al. [19], and is collected from 348 GitHub projects. Considering the large scale of this dataset and following prior work [41], we randomly split it into training,
ForthevariantsofPDBERT,MLM+CDP+DDPoutperformsMLM Model ReVeal Big-Vul Devign onthethreedatasetsbysubstantialmargins,highlightingtheeffec- (F1-score) (F1-score) (Accuracy) tivenessofourpre-trainingobjectives.MLM+CDP+DDPperforms Bi-LSTM 34.25 33.11 61.24 betterthanMLM+CDPandMLM+DDP,indicatingthatCDPand Transformer 40.91 34.90 60.51 DDParebothhelpful.Inaddition,MLM+DDPoutperformsGraph- VulDeePecker 29.03 13.07 45.68 CodeBERT,indicatingthebenefitsofDDP. Devign 26.43 13.77 52.27 In summary, PDBERT is more effective in vulnerability SySeVR 33.73 24.80 48.87 detectionthanthebaselinesonthethreedatasets.BothCDP ReVeal 32.24 23.93 54.55 andDDPcontributetoPDBERT’seffectiveness. CodeBERT 44.27 54.48 62.08 GraphCodeBERT 45.03 54.06 64.02 5.4 VulnerabilityClassification VulBERTa 44.19 36.75 62.88 5.4.1 Introduction. Afteravulnerabilityisdetected,itwillusually DISCO† 46.4* - 63.8 belabeledwithacategorytohelppractitionersunderstanditsroot PDBERT cause,impactandpossiblemitigation[70].However,thislabeling MLM 44.65 55.99 65.52 processrequiresexpertstomanuallyanalyzethevulnerability.This MLM+CDP 45.93 56.28 66.29 taskaimstoautomatethisprocessbyautomaticallyclassifyinga MLM+DDP 47.42 58.51 65.85 vulnerablefunctionbasedonitssourcecode.Specifically,wefocus |
MLM+CDP+DDP 48.38 59.41 67.61 ontheCommonWeaknessEnumeration(CWE)categories,which †Thepre-trainedmodelofDISCOisnotpubliclyavailable.Itsresultsarecopied areadoptedbymanywell-knownsoftwarevulnerabilitydatabases, fromtheDISCOpaper[18]. suchasNVD[1]andVulDB[4],forvulnerabilityclassification. *SincetheReVealdatasetdoesnotprovideofficialsplits,thetestsetusedbyDISCO canbedifferentfromotherapproaches. Thistaskisamulti-classclassificationtask. 5.4.2 Experimental Setup. We construct a dataset for this task validationandtestingsetsby80%/10%/10%.Sincethisdatasetis basedontheBig-Vuldataset[19].Besidessourcecode,theBig- highlyimbalanced,wealsouseF1-scoreastheevaluationmetric. Vuldatasetalsocollectsothervulnerability-relatedinformation, DevignisreleasedbytheDevignpaper[68],containing27.2K suchastheCWEcategoryandtheCommonVulnerabilityScoring functionscollectedfromFFmpegandQemu.Thisdatasetisbal- System(CVSS)scores,foreachvulnerablefunction.Toconstruct ancedandlessrealisticthanReVealandBig-Vul.Weusethisdataset ourdataset,weremoveallthenon-vulnerablefunctions.Wealso becauseitisincludedinthefrequentlyusedCodeXGLUEbench- filteroutthevulnerablefunctionsbelongingtotheCWEcategories mark[47].Wealsousethetrain/valid/testsplitsfromCodeXGLUE. withlessthan100(about1%)instances,toavoidtheabsenceofsome FollowingthedesignofCodeXGLUEandpriorwork[18],weuse CWEcategoriesafterdatasplitting.Theresultingdatasetcontains Accuracyastheevaluationmetric. 7.6Kvulnerablefunctionsin14CWEcategories.Werandomlysplit ForPDBERTandthecommonbaselinesintroducedinSection5.2, 80%/10%/10%ofthisdatasetfortraining/validation/testing. 
anewMLPisappendedtoeachmodeltoperformprediction.Apart Duetothelong-taileddistributionofCWEcategories,weuse fromthecommonbaselines,wealsocomparePDBERTwiththe threemetrics,i.e.,MacroF1,WeightedF1andthemulti-classver- state-of-the-artnon-pre-trainedmodelsthatarespeciallydesigned sionofMatthewsCorrelationCoefficient(MCC)[23],forevalu- forvulnerabilitydetection,includingVulDeepecker[44],Devign ation.Thesemetricsarealsousedbyothervulnerability-related [68],SySeVR[43],andReVeal[14].ForDevign,itsimplementation studies[28,38].MacroF1istheunweightedmeanoftheF1-scores isnotreleased,soweusetheimplementationandsettingsprovided ofallcategories,whereasWeightedF1considersweightedmean. bytheauthorsofReVeal[14].Forotherbaselines,weusetheir MCCmeasuresthedifferencesbetweenactualvaluesandpredicted officialimplementationandsettingstoconductexperiments. values.NotethatMCCrangesfrom-1to1andisnotdirectlypropor- tionaltoF1-score.AnewMLPisusedbyPDBERTandeachofthe 5.3.3 Results. Table4presentsourevaluationresultsonthistask. commonbaselines,respectively,astheclassifier.Tothebestofour Wecanseethatthepre-trainedmodelsoutperformallthenon- knowledge,existingapproachesforCWEcategorypredictioneither pre-trainedmodelsonthethreedatasetsbysubstantialmargins, arebasedonvulnerabilitydescriptionsorrequireinter-procedural confirmingtheeffectivenessofpre-training.Itisalittlesurpris- analysis.Consideringthatthistaskonlytakesasinputvulnerable ingthatBi-LSTMandTransformerperformbetterthantheother functions,thereisnosuitabletask-specificbaseline. non-pre-trainedmodels.Onepossiblereasonisthattheybothuse BPEtokenizers,whichhavebeenshowntobeeffectiveinmodeling 5.4.3 Results. Table5presentstheevaluationresultsonthistask. codebypriorwork[34,58].Dingetal.[18]alsoreportedsimilar Wecanseethatthepre-trainedmodelsoutperformallthenon-pre- results.OntheReVeal,Big-VulandDevigndatasets,PDBERTim- trainedbaselines,i.e.,Bi-LSTMandTransformer,bylargemargins. 
proves the best-performing baselines, i.e., DISCO, CodeBERT, and GraphCodeBERT, by 4.3%, 9.0% and 5.6%, respectively, in F1-score or Accuracy. It is worth mentioning that among the baselines, the

For pre-trained models, VulBERTa achieves similar performance to CodeBERT. PDBERT improves CodeBERT in Macro F1, Weighted F1 and MCC by 11.7%, 9.1% and 12.2%, respectively. PDBERT also outperforms GraphCodeBERT in the three metrics by 5.9%, 4.6% and 5.9%, demonstrating its effectiveness in this task.

As for PDBERT's variants, MLM+CDP+DDP outperforms MLM in all the metrics by substantial margins, highlighting that our pre-training objectives are effective. MLM+CDP+DDP improves MLM+CDP and MLM+DDP, which indicates that CDP and DDP are both beneficial. Also, MLM+DDP outperforms GraphCodeBERT by substantial margins, confirming the benefits of DDP.

In summary, PDBERT can boost vulnerability classification and both CDP and DDP are effective and beneficial for this task.

Pre-training by Predicting Program Dependencies for Vulnerability Analysis Tasks    ICSE '24, April 14–20, 2024, Lisbon, Portugal

Table 5: Evaluation results on vulnerability classification.

Model           Macro F1 (%)   Weighted F1 (%)   MCC
Bi-LSTM         26.48          39.45             0.3001
Transformer     35.35          46.34             0.3737
CodeBERT        51.91          57.37             0.5030
GraphCodeBERT   54.74          59.82             0.5331
VulBERTa        50.11          57.36             0.5035
PDBERT
  MLM           54.49          60.42             0.5372
  MLM+CDP       56.51          59.93             0.5325
  MLM+DDP       56.93          62.07             0.5614
  MLM+CDP+DDP   57.96          62.60             0.5644

5.5 Vulnerability Assessment
5.5.1 Introduction. Vulnerability assessment is a process that de-

the official implementation of DeepCVA, and train and evaluate DeepCVA on our dataset. For each CVSS metric, we append a new MLP to PDBERT and each of the common baselines, respectively, as the classifier. Following DeepCVA, we use two evaluation metrics, i.e., Macro F1 and the multi-class version of MCC.

5.5.3 Results. Table 6 presents the evaluation results on the vulnerability assessment task. On average, the pre-trained models perform better or at least no worse than the best non-pre-trained model, i.e., DeepCVA, demonstrating the effectiveness of pre-training. Note that assessing CVSS metrics often requires more context and project-specific information, e.g., the system configuration required to exploit the vulnerability, than detecting or classifying vulnerabilities. Such information can hardly be found in the vulnerable function. Therefore, the performance improvements that can be achieved by improving code understanding are limited. Nevertheless, PDBERT outperforms the best-performing baseline, i.e., GraphCodeBERT, on each CVSS metric. On average, MLM+CDP+DDP improves GraphCodeBERT in Macro F1 and MCC by 2.8% and 3.3%. The performance improvements of the best-performing variant, i.e., MLM+CDP, over GraphCodeBERT in the two metrics are 3.2% and 6.0%, respectively. These results indicate that our pre-training technique is effective in assessing vulnerabilities. For the variants of PDBERT, MLM+CDP+DDP outperforms MLM on all the CVSS metrics and on average, highlighting the contribution of our pre-training objectives. The performance of MLM+CDP and MLM+CDP+DDP is very close when assessing Availability, Integrity, and Confidentiality. However, MLM+CDP performs better than MLM+CDP+DDP on Access Complexity. One possible reason is that Access Complexity cares more about the conditions or privileges required to execute the vulnerable statements, which are highly related to control dependencies.
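The per-metric head setup described for this task (one small classifier appended per CVSS metric on top of a shared function encoding) can be pictured with a minimal framework-free sketch. Everything here is an assumption for illustration: the weights, the encoding size, and the use of plain linear heads instead of trained MLPs; the value sets shown follow CVSS v2.

```python
# Hypothetical per-metric heads over a shared encoding (illustration only).
CVSS_METRICS = {
    "AccessComplexity": ["LOW", "MEDIUM", "HIGH"],
    "Availability":     ["NONE", "PARTIAL", "COMPLETE"],
    "Integrity":        ["NONE", "PARTIAL", "COMPLETE"],
    "Confidentiality":  ["NONE", "PARTIAL", "COMPLETE"],
}

def linear_head(encoding, weights, bias):
    # One score per class label: w . h + b
    return [sum(w_i * h_i for w_i, h_i in zip(row, encoding)) + b
            for row, b in zip(weights, bias)]

def assess(encoding, heads):
    """`heads` maps metric name -> (weights, bias); each head scores the
    shared encoding independently and the highest-scoring label wins."""
    preds = {}
    for metric, (weights, bias) in heads.items():
        scores = linear_head(encoding, weights, bias)
        labels = CVSS_METRICS[metric]
        preds[metric] = labels[max(range(len(scores)), key=scores.__getitem__)]
    return preds
```

The point of the shared-encoder, separate-head design is that all four metrics are predicted from one pass over the function, which is also how DeepCVA's multi-task setup is described above.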
This shows that for vulnerability assessment, DDP does not provide significant benefits. Besides, MLM+DDP still outperforms GraphCodeBERT, indicating the effectiveness of DDP.

In summary, PDBERT is effective in vulnerability assessment and CDP is more important than DDP for this task.

termines various characteristics of vulnerabilities and helps practitioners prioritize the remediation of critical vulnerabilities [37, 38]. CVSS is a commonly used expert-based vulnerability assessment framework. It defines a series of metrics, i.e., CVSS metrics, to measure the severity of a vulnerability relative to other vulnerabilities. However, quantifying these metrics for a new vulnerability requires manual efforts of security experts and there is usually a delay in such manual process [38]. To this end, the vulnerability assessment task aims to automatically assess the CVSS metrics for vulnerabilities. In this work, we focus on function-level vulnerability assessment, which takes as input a vulnerable function and outputs the values of CVSS metrics for it. Specifically, this task targets four metrics that are important and could be inferred based on source code, including Availability, Confidentiality, Integrity, and Access Complexity.

5.5.2 Experimental Setting. We also construct a dataset for this task based on the Big-Vul dataset [19]. Specifically, we remove the vulnerable functions with invalid CVSS scores (e.g. "???"), resulting in a dataset with 9.9K vulnerable functions and their CVSS scores. We randomly split 80%/10%/10% of them for training/validation/test. We use the state-of-the-art vulnerability assessment approach named DeepCVA [38] as the task-specific baseline. DeepCVA uses Convolutional Neural Networks (CNN) to extract features from vulnerable code and predict multiple CVSS metrics in parallel through multi-task learning. To perform a fair comparison, we reuse

ICSE '24, April 14–20, 2024, Lisbon, Portugal    Zhongxin Liu, Zhijie Tang, Junwei Zhang, Xin Xia, and Xiaohu Yang

Table 6: Evaluation results on vulnerability assessment.

                 AccessComplexity     Availability         Integrity            Confidentiality      Mean
Model            MacroF1(%)  MCC      MacroF1(%)  MCC      MacroF1(%)  MCC      MacroF1(%)  MCC      MacroF1(%)  MCC
Bi-LSTM          59.20   0.4302       66.07   0.5423       72.26   0.5802       71.29   0.5589       67.21   0.5279
Transformer      47.33   0.3590       63.52   0.5115       71.33   0.5595       69.56   0.5256       62.93   0.4889
DeepCVA          73.06   0.6093       74.50   0.6579       75.64   0.6394       75.30   0.6290       74.63   0.6337
CodeBERT         69.69   0.6054       76.98   0.6908       77.59   0.6580       76.24   0.6368       75.13   0.6477
GraphCodeBERT    74.78   0.6484       77.21   0.6895       76.72   0.6490       75.94   0.6305       76.16   0.6544
VulBERTa         68.97   0.6062       75.11   0.6615       77.46   0.6621       75.36   0.6184       74.23   0.6371
PDBERT
  MLM            75.08   0.6639       75.64   0.6782       78.15   0.6731       77.46   0.6482       76.58   0.6658
  MLM+CDP        79.75   0.7300       78.31   0.6998       78.85   0.6851       77.48   0.6605       78.60   0.6939
  MLM+DDP        76.58   0.6655       77.97   0.6993       79.37   0.6925       77.26   0.6474       77.59   0.6724
  MLM+CDP+DDP    78.06   0.6682       78.15   0.6965       79.38   0.6884       77.52   0.6519       78.28   0.6763
The best results are bold, and the second-best results are underlined.

6 DISCUSSION
This section discusses the limitations of our approach and the threats to validity of this work.

6.1 Limitations
Due to the limitations of computation resources, following CodeBERT, we set the max sequence length of PDBERT as 512 in this work. Consequently, when used for program dependence analysis, PDBERT cannot properly analyze code snippets with over 512 tokens. However, as shown in Section 5.1.2, over 83% of C/C++ functions in the test set constructed from open-source projects adhere to this length limit. Based on our intrinsic evaluation, we argue that PDBERT is effective and efficient for analyzing most functions in practice. When applied to downstream tasks, PDBERT truncates long code snippets before processing them. Although this may negatively affect the performance, our extrinsic evaluation shows that PDBERT still effectively boosts downstream tasks. On the other hand, this limitation comes from the Transformer architecture [59], not from our pre-training technique. It can be mitigated by simply increasing the max sequence length of PDBERT, which requires more computation resources and advanced hardware. Considering that our pre-training technique is orthogonal to the underlying model architecture, one can also adopt specialized model architectures for long code snippets, such as LongFormer [9] and LongCoder [27], to address this limitation. Since this work focuses on pre-training techniques, we choose the most widely used Transformer architecture, follow the settings of prior work [20, 26], and leave handling long code snippets as future work.

In addition, CDP focuses on statement-level control dependencies and does not handle the control flow within a single statement, e.g., ternary operators. Future work may improve CDP by considering such control flow.

6.2 Threats to Validity
Generalization of PDBERT. PDBERT is pre-trained on C/C++ programs and may not be suitable for other programming languages. However, our proposed pre-training objectives are language-agnostic. Practitioners can pre-train their models with CDP and DDP for other or even multiple programming languages.

The limitations of using Joern. To build ground truth for CDP and DDP, we use Joern to extract PDGs. Therefore, PDBERT is expected to predict the program dependencies analyzed by Joern, and may share the same limitations as Joern. However, Joern is the state-of-the-art source-code-based program dependence analysis tool. It is of high accuracy and has been widely used for vulnerability-related tasks [12, 14, 68]. In addition, our pre-training technique can be regarded as training the model to learn the program dependence analysis knowledge of Joern, which can still be beneficial (as shown by our extrinsic evaluation). Our intrinsic evaluation demonstrates the unique benefits of PDBERT in program dependence analysis and at least assesses how close PDBERT is to Joern in analyzing complete functions. Therefore, we argue that the limitations of Joern do not affect the conclusions of this work.

Data Contamination. It is possible that some functions in the pre-training dataset also appear in some fine-tuning datasets, as they are all collected from open-source repositories. However, the pre-training dataset contains both vulnerable and benign functions, and the task-specific labels of each function are unknown during pre-training. In addition, prior work [10] has shown that data contamination may have little impact on the performance of pre-trained models. Therefore, we argue that this threat is limited.

7 RELATED WORK
7.1 Pre-Trained Code Models
Existing pre-trained code models can generally be divided into encoder-only [18, 20, 26, 31, 33, 62], decoder-only [40, 45, 57] and encoder-decoder models [7, 13, 25, 32, 42, 48, 49, 63, 65]. As a proof of concept, this work focuses on pre-training an encoder-only model for vulnerability analysis. For existing encoder-only pre-trained models, some of them [20, 33] directly adopt the pre-training tasks designed for NL. For example, CodeBERT [20] adopts MLM [17] and Replaced Token Detection (RTD) [16]. These models can capture the naturalness of code. Recently, some pre-training techniques [18, 62] are proposed to help models learn the syntactic structure of code. For example, DISCO [18] adopts a pre-training objective to predict the masked AST types in the input. Some prior work [11, 18, 31, 62] also leverages contrastive learning (CL) objectives to help models learn high-level functional similarities between programs. Different from them, our pre-training objectives aim to help models learn to capture the semantic structure of code.

To the best of our knowledge, only two pre-trained code models, i.e., Code-MVP [62] and GraphCodeBERT [26], consider the semantic structure of code (e.g., control- and data-flow information). Code-MVP explicitly takes control flow graph (CFG) as input during pre-training and targets learning representation from CFG. GraphCodeBERT is pre-trained by predicting a few masked data flow edges with other unmasked ones and a variable sequence as input. During inference, the variable sequence and all data flow edges of a program need to be extracted as input. This implies that GraphCodeBERT also targets learning representation from data flow, as mentioned in its paper [26]. Compared to them, first, PDBERT considers control and data dependencies simultaneously, thus it can learn more comprehensive knowledge about program dependence. More importantly, PDBERT targets learning the representation that encodes the semantic structure of code and is designed to "absorb" the knowledge required for end-to-end program dependence analysis, which has not been investigated. As discussed in Section 1 and demonstrated in Section 5, this design brings several unique and significant benefits: (1) PDBERT only takes source code as input and can properly process "unparsable" code. (2) The knowledge learned by PDBERT is more general and can better boost downstream tasks. (3) PDBERT can directly be used to analyze statement-level control dependencies and token-level data dependencies, which to the best of our knowledge, cannot be achieved by existing neural models. Therefore, we believe PDBERT provides significant contributions compared to existing pre-trained code models.

also fine-tune and evaluate PDBERT on three vulnerability analysis tasks, i.e., vulnerability detection, vulnerability classification, and vulnerability assessment. Experimental results indicate PDBERT benefits from CDP and DDP and it is more effective than the state-of-the-art baselines on all these tasks.

In the future, we would like to investigate how to pre-train encoder-decoder models with CDP and DDP. We plan to investigate the effectiveness of PDBERT on more downstream tasks and pre-train PDBERT with a multilingual corpus. In addition, for the downstream tasks where static analyzers can successfully extract the program dependencies in the input code, it would be interesting to investigate whether combining our pre-training technique with explicitly provided program dependencies can further improve performance.
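The two kinds of labels that CDP and DDP predict (statement-level control dependencies and token-level data dependencies) can be pictured with a toy example. This sketch is schematic only: in the paper the ground truth comes from Joern-extracted PDGs, whereas here the edges are written by hand and the statement and token indices are hypothetical.

```python
# Toy label construction (not the paper's pipeline).
# Statements are numbered; tokens are (statement_id, token_index) positions.

def cdp_labels(control_edges, num_statements):
    """Statement-level control-dependence matrix: m[i][j] = 1 iff
    statement j is control-dependent on statement i."""
    m = [[0] * num_statements for _ in range(num_statements)]
    for src, dst in control_edges:
        m[src][dst] = 1
    return m

def ddp_labels(def_use_pairs):
    """Token-level data-dependence set: (use_token, def_token) pairs,
    i.e., each use points back to the definition it depends on."""
    return {(use, d) for d, use in def_use_pairs}

# Hypothetical snippet:  0: "p = buf;"  1: "if (p)"  2: "n = len(p);"
ctrl = cdp_labels([(1, 2)], num_statements=3)    # stmt 2 guarded by the if
flows = ddp_labels([((0, 0), (1, 2)),            # def of p reaches its uses
                    ((0, 0), (2, 4))])           # (token indices assumed)
```

Framing the targets this way makes both objectives ordinary supervised prediction problems over pairs of positions, which is what lets a model be trained to emit them directly from source code.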
7.2 Deep Learning for Vulnerability Analysis
Many deep-learning-based approaches have been proposed for vulnerability analysis [14, 38, 44, 68, 70]. For vulnerability detection, the state-of-the-art approaches first leverage static analysis tools, e.g., Joern, to extract PDGs, and represent programs in different forms, such as code gadgets [44], syntax-based, semantics-based, and vector representations [43], or graph-based representations [14, 41, 68], based on their PDGs. Then, they leverage different neural models, such as Bi-LSTM [44], CNN [43, 68], and GNN [14, 41, 68], to extract the feature vector of each input program for vulnerability detection. Compared to these approaches, PDBERT does not require program dependencies as input and can handle partial code, providing unique advantages. Also, these approaches are trained from scratch to learn how to use program dependencies on a specific task, while PDBERT is pre-trained to learn the knowledge of program dependence and can be easily applied to multiple vulnerability analysis tasks. For vulnerability classification, most existing studies focus on predicting CWE categories based on expert-curated vulnerability descriptions [8, 51]. 𝜇VulDeePecker [70] is the only work that predicts CWE categories based on source code, but it uses inter-procedural analysis, while our work focuses on function-level vulnerabilities. For vulnerability assessment, most prior work predicts the CVSS metrics based on vulnerability descriptions instead of source code [39, 56]. DeepCVA [38] leverages CNN and multi-task learning for commit-level vulnerability assessment. Le et al. [36] proposed to predict the CVSS metrics based on vulnerable statements, which require significant manual efforts to identify or require vulnerability fixes to be known.

8 CONCLUSION AND FUTURE WORK
In this work, we propose two novel pre-training objectives, i.e., CDP and DDP, to help incorporate the knowledge of end-to-end program dependence analysis into neural models and boost vulnerability analysis. CDP and DDP aim to predict the statement-level control dependencies and the token-level data dependencies in a program only based on its source code. As a proof of concept, we build a pre-trained model named PDBERT with CDP and DDP, which achieves F1-scores of over 99% and 94% for predicting control and data dependencies, respectively, in partial and complete functions. We

ACKNOWLEDGMENTS
This research/project is supported by the National Natural Science Foundation of China (No. 62202420) and the Fundamental Research Funds for the Central Universities (No. 226-2022-00064). Zhongxin Liu gratefully acknowledges the support of Zhejiang University Education Foundation Qizhen Scholar Foundation.

REFERENCES
[1] 2023. National vulnerability database. https://nvd.nist/.
[2] 2023. Our GitHub repository. https://github.com/ZJU-CTAG/PDBERT.
[3] 2023. Our replication package. https://doi.org/10.5281/zenodo.8196552.
[4] 2023. VulDB. https://vuldb.com/?kb.sources.
[5] Rabe Abdalkareem, Emad Shihab, and Juergen Rilling. 2017. On code reuse from Stack Overflow: An exploratory study on Android apps. Inf. Softw. Technol. 88 (2017), 148–158. https://doi.org/10.1016/j.infsof.2017.04.005
[6] Yasemin Acar, Michael Backes, Sascha Fahl, Doowon Kim, Michelle L. Mazurek, and Christian Stransky. 2016. You Get Where You're Looking for: The Impact of Information Sources on Code Security. In Proceedings of the 2016 IEEE Symposium on Security and Privacy. 289–305. https://doi.org/10.1109/SP.2016.25
[7] Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Unified Pre-training for Program Understanding and Generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2655–2668.
[8] Masaki Aota, Hideaki Kanehara, Masaki Kubo, Noboru Murata, Bo Sun, and Takeshi Takahashi. 2020. Automation of Vulnerability Classification from Its Description Using Machine Learning. In Proceedings of the 2020 IEEE Symposium on Computers and Communications. 1–7.
[9] Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The Long-Document Transformer. CoRR abs/2004.05150 (2020). arXiv:2004.05150 https://arxiv.org/abs/2004.05150
[10] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models Are Few-Shot Learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020.
[11] Nghi D. Q. Bui, Yijun Yu, and Lingxiao Jiang. 2021. Self-Supervised Contrastive Learning for Code Retrieval and Summarization via Semantic-Preserving Transformations. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 511–521.
[12] Sicong Cao, Xiaobing Sun, Lili Bo, Rongxin Wu, Bin Li, and Chuanqi Tao. 2022. MVD: Memory-Related Vulnerability Detection Based on Flow-Sensitive Graph Neural Networks. In Proceedings of the 44th IEEE/ACM International Conference on Software Engineering. 1456–1468.
[13] Saikat Chakraborty, Toufique Ahmed, Yangruibo Ding, Premkumar T. Devanbu, and Baishakhi Ray. 2022. NatGen: Generative Pre-Training by "Naturalizing" Source Code. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 18–30.
[14] Saikat Chakraborty, Rahul Krishna, Yangruibo Ding, and Baishakhi Ray. 2022. Deep Learning Based Vulnerability Detection: Are We There Yet? IEEE Trans. Software Eng. 48, 9 (2022), 3280–3296.
[15] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning. PMLR, 1597–1607.
[16] Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. In Proceedings of the 8th International Conference on Learning Representations.
[17] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 4171–4186.
[18] Yangruibo Ding, Luca Buratti, Saurabh Pujar, Alessandro Morari, Baishakhi Ray, and Saikat Chakraborty. 2022. Towards Learning (Dis)-Similarity of Source Code from Program Contrasts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 6300–6312.
[19] Jiahao Fan, Yi Li, Shaohua Wang, and Tien N. Nguyen. 2020. A C/C++ Code Vulnerability Dataset with Code Changes and CVE Summaries. In Proceedings of the 17th International Conference on Mining Software Repositories. 508–512.
[20] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. In Findings of the Association for Computational Linguistics: EMNLP 2020 (Findings of ACL, Vol. EMNLP 2020). 1536–1547.
[21] Jeanne Ferrante, Karl J. Ottenstein, and Joe D. Warren. 1987. The Program Dependence Graph and Its Use in Optimization. ACM Transactions on Programming Languages and Systems 9, 3 (1987), 319–349.
[22] Felix Fischer, Konstantin Böttinger, Huang Xiao, Christian Stransky, Yasemin Acar, Michael Backes, and Sascha Fahl. 2017. Stack overflow considered harmful? the impact of copy&paste on android application security. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP). 121–136.
[23] Jan Gorodkin. 2004. Comparing Two K-category Assignments by a K-category Correlation Coefficient. Computational biology and chemistry 28, 5-6 (2004), 367–374.
[24] Alex Graves, Abdel-rahman Mohamed, and Geoffrey E. Hinton. 2013. Speech recognition with deep recurrent neural networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. 6645–6649.
[25] Daya Guo, Shuai Lu, Nan Duan, Yanlin Wang, Ming Zhou, and Jian Yin. 2022. UniXcoder: Unified Cross-Modal Pre-training for Code Representation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 7212–7225.
[26] Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin B. Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, and Ming Zhou. 2021. GraphCodeBERT: Pre-training Code Representations with Data Flow. In Proceedings of the 9th International Conference on Learning Representations.
[27] Daya Guo, Canwen Xu, Nan Duan, Jian Yin, and Julian J. McAuley. 2023. LongCoder: A Long-Range Pre-trained Language Model for Code Completion. In International Conference on Machine Learning, ICML 2023 (Proceedings of Machine Learning Research, Vol. 202). PMLR, 12098–12107. https://proceedings.mlr.press/v202/guo23j.html
[28] Zhuobing Han, Xiaohong Li, Zhenchang Xing, Hongtao Liu, and Zhiyong Feng. 2017. Learning to Predict Severity of Software Vulnerability Using Only Vulnerability Description. In Proceedings of the 2017 IEEE International Conference on Software Maintenance and Evolution. 125–136.
[29] Hazim Hanif and Sergio Maffeis. 2022. VulBERTa: Simplified Source Code Pre-Training for Vulnerability Detection. In Proceedings of the 2022 International Joint Conference on Neural Networks. 1–8.
[30] Qing Huang, Zhiqiang Yuan, Zhenchang Xing, Xiwei Xu, Liming Zhu, and Qinghua Lu. 2022. Prompt-tuned code language model as a neural knowledge base for type inference in statically-typed partial code. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering. 1–13.
[31] Paras Jain, Ajay Jain, Tianjun Zhang, Pieter Abbeel, Joseph Gonzalez, and Ion Stoica. 2021. Contrastive Code Representation Learning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 5954–5971.
[32] Xue Jiang, Zhuoran Zheng, Chen Lyu, Liang Li, and Lei Lyu. 2021. TreeBERT: A Tree-Based Pre-Trained Model for Programming Language. In Uncertainty in Artificial Intelligence. 54–63.
[33] Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, and Kensen Shi. 2020. Learning and Evaluating Contextual Embedding of Source Code. In International Conference on Machine Learning. 5110–5121.
[34] Rafael-Michael Karampatsis, Hlib Babii, Romain Robbes, Charles Sutton, and Andrea Janes. 2020. Big code != big vocabulary: open-vocabulary models for source code. In Proceedings of the 42nd International Conference on Software Engineering. 1073–1085.
[35] Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations, Yoshua Bengio and Yann LeCun (Eds.).
[36] Triet Huynh Minh Le and M. Ali Babar. 2022. On the Use of Fine-Grained Vulnerable Code Statements for Software Vulnerability Assessment Models. In Proceedings of the 19th International Conference on Mining Software Repositories. 621–633.
[37] Triet Huynh Minh Le, Huaming Chen, and Muhammad Ali Babar. 2023. A Survey on Data-driven Software Vulnerability Assessment and Prioritization. ACM Comput. Surv. 55, 5 (2023), 100:1–100:39.
[38] Triet Huynh Minh Le, David Hin, Roland Croft, and Muhammad Ali Babar. 2021. DeepCVA: Automated Commit-level Vulnerability Assessment with Deep Multi-task Learning. In Proceedings of the 36th IEEE/ACM International Conference on Automated Software Engineering. 717–729.
[39] Triet Huynh Minh Le, Bushra Sabir, and Muhammad Ali Babar. 2019. Automated Software Vulnerability Assessment with Concept Drift. In Proceedings of the 2019 IEEE/ACM 16th International Conference on Mining Software Repositories. 371–382.
[40] Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. 2022. Competition-level code generation with AlphaCode. Science 378, 6624 (2022), 1092–1097.
[41] Yi Li, Shaohua Wang, and Tien N. Nguyen. 2021. Vulnerability detection with fine-grained interpretations. In Proceedings of the 29th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 292–303.
[42] Zhiyu Li, Shuai Lu, Daya Guo, Nan Duan, Shailesh Jannu, Grant Jenks, Deep Majumder, Jared Green, Alexey Svyatkovskiy, and Shengyu Fu. 2022. Automating Code Review Activities by Large-Scale Pre-Training. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 1035–1047.
[43] Zhen Li, Deqing Zou, Shouhuai Xu, Hai Jin, Yawei Zhu, and Zhaoxuan Chen. 2021. SySeVR: A Framework for Using Deep Learning to Detect Software Vulnerabilities. IEEE Transactions on Dependable and Secure Computing 19, 4 (2021), 2244–2258.
[44] Zhen Li, Deqing Zou, Shouhuai Xu, Xinyu Ou, Hai Jin, Sujuan Wang, Zhijun Deng, and Yuyi Zhong. 2018. VulDeePecker: A Deep Learning-Based System for Vulnerability Detection. In Proceedings of the 25th Annual Network and Distributed System Security Symposium.
[45] Fang Liu, Ge Li, Yunfei Zhao, and Zhi Jin. 2020. Multi-Task Learning Based Pre-Trained Language Model for Code Completion. In Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering. 473–485.
[46] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR abs/1907.11692 (2019). arXiv:1907.11692
[47] Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin B. Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, Ming Gong, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, and Shujie Liu. 2021. CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021.
[48] Antonio Mastropaolo, Simone Scalabrino, Nathan Cooper, David Nader Palacio, Denys Poshyvanyk, Rocco Oliveto, and Gabriele Bavota. 2021. Studying the Usage of Text-to-Text Transfer Transformer to Support Code-Related Tasks. In Proceedings of the 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). 336–347.
[49] Changan Niu, Chuanyi Li, Vincent Ng, Jidong Ge, Liguo Huang, and Bin Luo. 2022. SPT-Code: Sequence-to-Sequence Pre-Training for Learning Source Code Representations. In Proceedings of the 44th IEEE/ACM International Conference on Software Engineering. 1–13.
[50] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. J. Mach. Learn. Res. 21, 140 (2020), 1–67.
[51] Jukka Ruohonen and Ville Leppänen. 2018. Toward Validation of Textual Information Retrieval Techniques for Software Weaknesses. In Database and Expert Systems Applications - DEXA 2018 International Workshops, BDMICS, BIOKDD, and TIR (Communications in Computer and Information Science, Vol. 903). 265–277.
[52] Rebecca L. Russell, Louis Y. Kim, Lei H. Hamilton, Tomo Lazovich, Jacob Harer, Onur Ozdemir, Paul M. Ellingwood, and Marc W. McConley. 2018. Automated Vulnerability Detection in Source Code Using Deep Representation Learning. In Proceedings of the 17th IEEE International Conference on Machine Learning and Applications. 757–762. https://doi.org/10.1109/ICMLA.2018.00120
[53] Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
[54] Robert Shirey. 2000. Internet security glossary. Technical Report.
[55] Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning With Neural Tensor Networks for Knowledge Base Completion. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. 926–934.
[56] Georgios Spanos and Lefteris Angelis. 2018. A Multi-Target Approach to Estimate Software Vulnerability Characteristics and Severity Scores. Journal of Systems and Software 146 (2018), 152–166.
[57] Alexey Svyatkovskiy, Shao Kun Deng, Shengyu Fu, and Neel Sundaresan. 2020. IntelliCode Compose: Code Generation Using Transformer. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 1433–1443.
[58] Patanamon Thongtanunam, Chanathip Pornprasit, and Chakkrit Tantithamthavorn. 2022. AutoTransform: automated code transformation to support modern code review process. In Proceedings of the 44th International Conference on Software Engineering. 237–248.
[59] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017. 5998–6008.
[60] Morteza Verdi, Ashkan Sami, Jafar Akhondali, Foutse Khomh, Gias Uddin, and Alireza Karami Motlagh. 2020. An empirical study of C++ vulnerabilities in crowd-sourced code examples. IEEE Transactions on Software Engineering 48, 5 (2020), 1497–1514.
[61] Yao Wan, Wei Zhao, Hongyu Zhang, Yulei Sui, Guandong Xu, and Hai Jin. 2022. What do they capture? a structural analysis of pre-trained language models for source code. In Proceedings of the 44th International Conference on Software Engineering. 2377–2388.
[62] Xin Wang, Yasheng Wang, Yao Wan, Jiawei Wang, Pingyi Zhou, Li Li, Hao Wu, and Jin Liu. 2022. CODE-MVP: Learning to Represent Source Code from Multiple Views with Contrastive Pre-Training. In Findings of the Association for Computational Linguistics: NAACL 2022. 1066–1077.
[63] Yue Wang, Weishi Wang, Shafiq R. Joty, and Steven C. H. Hoi. 2021. CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 8696–8708.
[64] Fabian Yamaguchi, Nico Golde, Daniel Arp, and Konrad Rieck. 2014. Modeling and Discovering Vulnerabilities with Code Property Graphs. In Proceedings of the 2014 IEEE Symposium on Security and Privacy. 590–604.
[65] Jiyang Zhang, Sheena Panthaplackel, Pengyu Nie, Junyi Jessy Li, and Milos Gligoric. 2022. CoditT5: Pretraining for Source Code and Natural Language Editing. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering. 1–12.
[66] Jiayuan Zhou, Michael Pacheco, Zhiyuan Wan, Xin Xia, David Lo, Yuan Wang, and Ahmed E. Hassan. 2021. Finding a Needle in a Haystack: Automated Mining of Silent Vulnerability Fixes. In Proceedings of the 36th IEEE/ACM International Conference on Automated Software Engineering. 705–716.
[67] Xin Zhou, DongGyun Han, and David Lo. 2021. Assessing generalizability of CodeBERT. In Proceedings of the 2021 IEEE International Conference on Software Maintenance and Evolution. IEEE, 425–436.
[68] Yaqin Zhou, Shangqing Liu, Jing Kai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019. 10197–10207.
[69] Yaqin Zhou, Jing Kai Siow, Chenyu Wang, Shangqing Liu, and Yang Liu. 2021. SPI: Automated Identification of Security Patches via Commits. ACM Transactions on Software Engineering and Methodology (TOSEM) 31, 1 (2021), 1–27.
[70] Deqing Zou, Sujuan Wang, Shouhuai Xu, Zhen Li, and Hai Jin. 2019. 𝜇VulDeePecker: A Deep Learning-Based System for Multiclass Vulnerability Detection. IEEE Transactions on Dependable and Secure Computing 18, 5 (2019), 2224–2236.
arXiv:2402.02973v1 [cs.SE] 5 Feb 2024

Are We There Yet? Unraveling the State-of-the-Art Smart Contract Fuzzers

Shuohan Wu, Zihao Li, Luyi Yan, Weimin Chen, Muhui Jiang, Hao Zhou, and Xiapu Luo* (The Hong Kong Polytechnic University, Hong Kong, China): csswu@comp.polyu.edu.hk, cszhli@comp.polyu.edu.hk, cslyan@comp.polyu.edu.hk, cswchen@comp.polyu.edu.hk, jiangmuhui@gmail.com, cshaoz@comp.polyu.edu.hk, csxluo@comp.polyu.edu.hk
Chenxu Wang (Xi'an Jiaotong University, Xi'an, China): cxwang@mail.xjtu.edu.cn
*The corresponding author.

ABSTRACT
Given the growing importance of smart contracts in various applications, ensuring their security and reliability is critical. Fuzzing, an effective vulnerability detection technique, has recently been widely applied to smart contracts. Despite numerous studies, a systematic investigation of smart contract fuzzing techniques remains lacking. In this paper, we fill this gap by: 1) providing a comprehensive review of current research in contract fuzzing, and 2) conducting an in-depth empirical study to evaluate state-of-the-art contract fuzzers' usability. To guarantee a fair evaluation, we employ a carefully-labeled benchmark and introduce a set of pragmatic performance metrics, evaluating fuzzers from five complementary perspectives. Based on our findings, we provide directions for the future research and development of contract fuzzers.

KEYWORDS
Smart Contract, Fuzzing, Evaluation

ACM Reference Format:
Shuohan Wu, Zihao Li, Luyi Yan, Weimin Chen, Muhui Jiang, Chenxu Wang, Xiapu Luo, and Hao Zhou. 2024. Are We There Yet? Unraveling the State-of-the-Art Smart Contract Fuzzers. In 2024 IEEE/ACM 46th International Conference on Software Engineering (ICSE '24), April 14-20, 2024, Lisbon, Portugal. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3597503.3639152

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
ICSE '24, April 14-20, 2024, Lisbon, Portugal
© 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 979-8-4007-0217-4/24/04...$15.00
https://doi.org/10.1145/3597503.3639152

1 INTRODUCTION
Smart contracts have witnessed rapid growth and have been widely adopted for various financial purposes, such as trading, investing, and borrowing [5, 63]. As of February 2023, there are more than 44 million contracts deployed on Ethereum, and the market cap of Ethereum has exceeded 210 billion USD [11, 79]. However, despite their widespread adoption, many smart contracts lack a thorough security audit, making them vulnerable to potential attacks [86]. In fact, a recent empirical study of 47,587 Ethereum smart contracts found potential vulnerabilities in a significant number of them [41]. This raises concerns, especially for contracts that handle financial assets of significant value, as they are a prime target for attackers [34, 39]. For instance, in June 2016, attackers exploited a reentrancy bug in Ethereum's decentralized autonomous organization (DAO), resulting in the theft of 3.6 million Ether [8]. Given that smart contracts cannot be modified after deployment [61], it is crucial to ensure their reliability before deployment.

Various approaches have been proposed to enhance the correctness and security of smart contracts, including formal verification [19, 24, 27], symbolic execution [49, 69, 73], and other static analysis methods [42, 45, 71, 90]. However, each method faces limitations. Formal verification relies on manually customized specifications, which makes it difficult to scale automatically [111]. Symbolic execution is hindered by path explosion issues when exploring complex contracts [31, 91]. Static analysis approaches analyze the control and data flow in the contract, but they can exhibit high false positives because they over-approximate contract behavior [20, 37].

Fuzzing has been successful in discovering vulnerabilities in traditional programs over the years [26, 60, 64]. Owing to its dynamic nature, it produces few false positives and can detect unexpected vulnerabilities without prior expert knowledge of vulnerable patterns [72]. Since 2018, fuzzing has been extensively employed in the realm of smart contracts [55]. With the emergence of various smart contract fuzzers [58], it is essential to systematically evaluate their usability and effectiveness. This can help pinpoint the strengths and weaknesses of each technique, guide researchers in selecting the most appropriate technique for their use cases, and inspire the development of new fuzzers based on the gained insights.

However, to our best knowledge, no study has systematically investigated existing contract fuzzers and comprehensively evaluated their effectiveness. Most related empirical works [32, 57, 78, 107] focus on comparing static analysis-based smart contract auditing tools, while two recent studies [47, 82] only evaluate a limited number of smart contract fuzzers (i.e., 3), which is far from comprehensive. Despite the fact that most studies that propose contract fuzzers experimentally validate their effectiveness, we identify a lack of uniform performance metrics among them (e.g., they employ different criteria like instruction and branch coverage to evaluate code coverage). Furthermore, the benchmarks used in these studies are not unified. These issues lead to inconsistent and biased results, making it difficult to accurately compare different fuzzers.

To fill this gap, in this paper, we first conduct a systematic review of all existing smart contract fuzzing techniques. Then, we perform a comparative study of state-of-the-art smart contract fuzzers. In order to perform a fair and accurate evaluation, we need to meet the following requirements:
• (R1) Comprehensive performance metrics: Existing contract fuzzing studies use limited performance metrics, leading to biased or inaccurate comparisons. To avoid this, we need to design and use comprehensive performance metrics that can be consistently applied across different fuzzers. Evaluating a fuzzer's performance from multiple perspectives can provide a more complete picture of its strengths and weaknesses.
• (R2) Unified benchmark suite: Prior smart contract fuzzing studies use distinct benchmarks, making it hard to compare the effectiveness of different fuzzers. To ensure reliability and reproducibility, we need to use a unified benchmark to test all fuzzers on the same set of contracts. This will create a level playing field for comparison, and ensure that our results are not skewed by the choice of benchmark contracts.
• (R3) Correct labels: While there are some available contract benchmarks, our experiments reveal that they are partially mislabeled. This results in false positives (non-existent vulnerabilities flagged as vulnerabilities) and false negatives (actual vulnerabilities not detected), making it difficult to draw valid conclusions about different fuzzers' performance. Hence, we need to use a correctly labeled benchmark to ensure the reliability of the insights and findings from our evaluation.

Based on these requirements, we build a benchmark consisting of 2,000 carefully-labeled smart contracts and design five categories of performance metrics. Using this benchmark and these metrics, we conduct extensive experiments to compare 11 state-of-the-art smart contract fuzzers. The experimental results show that state-of-the-art contract fuzzers are far from satisfactory in terms of vulnerability detection. We suggest that future fuzzers should consider enhancing their throughput, refining their test oracles, and optimizing the quality of initial seeds as potential areas of improvement. To further grasp the real-world needs for contract fuzzers, we conduct surveys with 16 auditors from the industry. The survey results reveal that auditors favor fuzzers that provide convenience and flexibility in creating customized test oracles. This broadens the scope of vulnerability detection, enabling auditors to identify logic-related bugs within their contracts. Our study provides insights into the current state of smart contract fuzzing and suggests possible directions for future fuzzers. In summary, we conduct the first systematic study of smart contract fuzzers:
• We review the current research advances in smart contract fuzzing based on related literature published in recent years.
• We create a benchmark with 2,000 carefully-labeled contracts and utilize it to perform a comprehensive evaluation of 11 state-of-the-art contract fuzzers. Our codebase and benchmark are released at https://github.com/SE2023Test/SCFuzzers.
• We unveil problems in existing fuzzers and explore potential directions for the future design of fuzzers.

2 BACKGROUND
2.1 Smart Contract
Smart contracts are programs running on blockchains (e.g., Ethereum [9, 36]). Smart contracts begin with compiling the source code into bytecode and deploying it onto the blockchain [7, 38]. Once deployed, each contract is assigned a unique address. Users can interact with the contract by encoding the function signature and actual parameters [35, 106] into a transaction according to the Application Binary Interface (ABI). The ABI is JSON data generated during compilation, which describes the methods to be invoked to execute smart contracts.

Smart contracts are stateful programs and maintain a persistent storage for global states. The term "state variable" in contracts refers to a global variable of a contract. For instance, as shown in Line 2 of Figure 1, the uBalance variable, which records the balances of user accounts, is a state variable to be permanently stored in the contract's storage. The contract execution can depend on the state variables. In the example provided, a user can withdraw funds only if the user has a sufficient balance (Line 8). A state variable can only be altered by transactions, such as sending a transaction to invoke the deposit() function to add funds to the account (Line 5).

1  contract Wallet{
2      mapping(address => uint) uBalance;
3
4      function deposit() public payable { //deposit money
5          uBalance[msg.sender] += msg.value; }
6
7      function withdraw(uint amount) public {
8          require(uBalance[msg.sender] >= amount, "Fail");
9          require(now > 30);
10         msg.sender.call.value(amount)(); //reentrancy bug
11         uBalance[msg.sender] -= amount;}}
Figure 1: A vulnerable contract.

2.2 Smart Contract Fuzzing
Fuzzing is a widely adopted testing technique to identify defects in traditional programs [1, 4, 28, 29, 33, 70]. It involves generating random or unexpected data as inputs and monitoring the effects during the program's execution [110]. Since 2018, fuzzing has been commonly applied to test smart contracts [55]. As shown in Figure 2, a typical process of smart contract fuzzing begins by constructing an initial corpus, where each seed represents a sequence of transactions. Next, the fuzzing loop selects seeds in the corpus to perform mutations, generating new inputs that are executed by the execution engine (e.g., EVM [3]). During the execution, the engine collects runtime information, such as coverage and execution results, as feedback, which can be used to evaluate the fitness of the input. If the input achieves better fitness (e.g., higher coverage), it is retained as a seed.

In contrast to traditional fuzzing, contract fuzzing must consider transaction sequences for executing specific functions. This is because the execution of a function in a transaction may depend on a contract state, which is determined by the previously executed transactions. As illustrated in Figure 1, to trigger the vulnerable path (i.e., Line 10), we need to first invoke the deposit() function to deposit some funds, then call the withdraw() function. Another difference is that most vulnerabilities in smart contracts are related to business logic and will not crash the program (i.e., the EVM) [92]. To detect vulnerabilities, contract fuzzers typically implement test oracles within the execution engine (e.g., EVM). Test oracles use pre-defined vulnerability patterns to match contract runtime behaviors.

3.1 Seed Generation
As highlighted by Herrera et al. [50], constructing the initial seeds is a critical step that greatly impacts fuzzing effectiveness, as a good seed can generate potential mutations to reach deeper paths and uncover vulnerabilities. In smart contract fuzzing, seed generation involves two primary components: generating transaction arguments and determining transaction sequences.
• Transaction arguments generation. When generating transaction arguments, fuzzers [66, 76, 96] built on general fuzzers (e.g., AFL [1]) treat each argument as a byte stream and generate random bytes as inputs. Hence, they often require additional time to hit argument types (e.g., string). In contrast, type-aware fuzzers (e.g., Echidna [2]) extract parameter types from the contract's ABI specification, enabling them to generate values accordingly. For fixed-length arguments (e.g., uint256), they produce random values based on the argument's type. For non-fixed-length arguments (e.g., string), they typically generate a corresponding number of random elements with a random length.
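The type-aware argument generation just described can be sketched in Python. This is a minimal illustration under stated assumptions: the ABI fragment is hypothetical, and gen_argument/gen_transaction are illustrative helpers rather than part of any fuzzer named above.

```python
import json
import random

# Hypothetical ABI fragment for illustration (not from a real contract).
ABI = json.loads("""
[{"type": "function", "name": "withdraw",
  "inputs": [{"name": "amount", "type": "uint256"},
             {"name": "note", "type": "string"}]}]
""")

def gen_argument(sol_type, rng):
    """Generate a random argument value for a (simplified) Solidity type."""
    if sol_type.startswith("uint"):
        bits = int(sol_type[4:] or 256)
        # Fixed-length argument: draw a random value from the type's range.
        return rng.randrange(0, 2 ** bits)
    if sol_type == "string":
        # Non-fixed-length argument: random length, then random elements.
        length = rng.randrange(0, 16)
        return "".join(chr(rng.randrange(32, 127)) for _ in range(length))
    raise NotImplementedError(sol_type)

def gen_transaction(abi, rng):
    """Pick a function from the ABI and generate type-aware arguments for it."""
    fn = rng.choice([e for e in abi if e["type"] == "function"])
    return fn["name"], [gen_argument(i["type"], rng) for i in fn["inputs"]]

rng = random.Random(0)
name, args = gen_transaction(ABI, rng)
print(name, args)
```

A real type-aware fuzzer derives this information from the compiler-generated ABI JSON and supports the full Solidity type grammar, including addresses, arrays, and tuples.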
In addition to randomly generating inputs, some fuzzers [55, 109] also utilize inputs from real-world transactions, as these inputs are more likely to trigger behaviors that mirror actual use cases. Moreover, ILF [48] utilizes values obtained from symbolic execution as its arguments, while ETHPLOIT [103] gathers the inputs and outputs of complex functions (e.g., cryptography functions), along with magic numbers in function bodies. The rationale behind these approaches is that such constants may have the potential to satisfy certain branch conditions. However, inputs from unrelated contracts may not be effective in satisfying branch conditions in the target contract. To address this issue, ConFuzzius [91] and Beak [104] leverage symbolic analysis to determine input values when fuzzing stalls at certain branches. Similarly, Smartian [40] leverages concolic testing, which combines concrete and symbolic execution to generate arguments that satisfy certain constraints.

2.3 Literature Search and Scope
This study primarily focuses on Ethereum smart contract fuzzers, given Ethereum's popularity as a go-to platform for developing and deploying contracts and the abundance of research centered on it. To identify relevant literature for our systematic survey, we set up the keywords as "smart contract fuzzing" and searched for related work published from 2016 to 2023 through seven academic databases, i.e., IEEE Xplore, ACM Digital Library, Google Scholar, SpringerLink, Web of Science, DBLP Bibliography, and EI Compendex. These databases maintain papers published in top conferences and journals in the field of computer science and engineering [51]. After the initial search, we retrieved a total of 75 papers. We then read the abstract of each paper to filter out those that do not propose a smart contract fuzzer. For example, we removed papers focusing on theoretical aspects of contract fuzzing. Eventually, we identified 28 papers meeting our inclusion criteria, with five focusing on three other platforms, and the remaining 23 on Ethereum. They are summarized and presented in Table 1.

3 SMART CONTRACT FUZZING APPROACH
Researchers employ various techniques to build effective fuzzers for triggering vulnerable code, including symbolic execution [40, 48, 91], machine learning [48, 65, 88, 102, 109], and static/dynamic data dependency analysis [40, 74, 93, 99, 100, 103]. This section offers a comprehensive review of the critical components in contract fuzzing approaches. Besides, we summarize the techniques employed by different fuzzers in Table 1, providing a clear overview of their methodologies and facilitating a thorough understanding of the current landscape.

• Transaction sequence generation. As previously mentioned, the execution of smart contracts often depends on specific states, which can only be achieved through a sequence of transactions. Therefore, it is crucial to create meaningful transaction sequences in seeds [97]. Many fuzzers [46, 55, 76, 93, 96, 99, 108] simply generate random transaction sequences as inputs. To ensure all desired functions are tested when generating sequences randomly, a common practice is to enumerate these functions in the initial seeds, as done by SynTest-Solidity [75], ConFuzzius [91], and sFuzz [74]. Although the random strategy can produce diverse sequences and may yield unexpected results, it is not effective in finding critical sequences due to the vast search space of all possible sequences.

Several fuzzers employ machine learning (ML) techniques to generate high-quality transaction sequences, e.g., RLF [88], xFuzz [102], and ILF [48]. ILF learns sequences from coverage-guided symbolic execution, inheriting the ability to produce high-coverage inputs with reduced overhead compared to pure symbolic execution [25]. xFuzz utilizes an ML model trained on contracts, where vulnerable functions are labeled using static analyzers (e.g., Slither [42]). The trained model retains suspicious functions, effectively reducing

Figure 2: Overview of smart contract fuzzing. (The pipeline shown comprises Seed Generation, a seed Corpus, Seed Selection and Seed Scheduling, Seed Mutation, a New Test Case executed by the Execution engine, and a Bug Detector.)

Table 1: Summary of technical details for existing smart contract fuzzers.
Tool (Name, Year, OC, G/M) | Arguments Generation (Type, Ran, Dict, Prev, ML, Sym) | Sequence Generation (Ran, Dyn, Static, ML) | Seed Mutation (Args, Env, Seq) | Seed Scheduling (Cov, Dis, Bug) | SE
ContractFuzzer[55] 2018 ✓ G ✓ ✓ ✓ ✓
Reguard[66] 2018 M ✓ ✓ ✓ ✓
ILF[48] 2019 ✓ G ✓ ✓ ✓ ✓ ✓ ✓
SoliAudit[65] 2019 ✓ G ✓ ✓ ✓ ✓ ✓
ContraMaster[93] 2019 ✓ M ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Harvey[99] 2020 M ✓ ✓ ✓ ✓ ✓ ✓ ✓
ETHPLOIT[103] 2020 ✓ M ✓ ✓ ✓ ✓ ✓ ✓ ✓
sFuzz[74] 2020 ✓ M ✓ ✓ ✓ ✓ ✓ ✓ ✓
Echidna[46] 2020 ✓ M ✓ ✓ ✓ ✓ ✓ ✓ ✓
GasFuzzer[22] 2020 M ✓ ✓ ✓ ✓ ✓
Targy[53] 2021 M ✓ ✓ ✓ ✓ ✓ ✓
Smartian[40] 2021 ✓ M ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
SmartGift[109] 2021 ✓ G ✓ ✓ ✓ ✓
ConFuzzius[91] 2021 ✓ M ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
Beak[104] 2022 M ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
xFuzz[102] 2022 ✓ M ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
EtherFuzz[95] 2022 M ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
SynTest-S[75] 2022 ✓ M ✓ ✓ ✓ ✓ ✓ ✓
RLF[88] 2022 ✓ G ✓ ✓ ✓ ✓ ✓ ✓ ✓
effuzz[54] 2023 M ✓ ✓ ✓ ✓ ✓ ✓ ✓
IR-fuzz[67] 2023 ✓ M ✓ ✓ ✓ ✓ ✓ ✓ ✓
EF\CF[83] 2023 ✓ M ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
ityfuzz[87] 2023 ✓ M ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
In the Tool column, OC: open-sourced. In the G/M column, G: generation-based, M: mutation-based. In the Arguments Generation column, Type: type-awareness, Ran: randomly generated arguments, Dict: arguments from a pre-prepared dictionary, Prev: values previously encountered or used, ML: arguments obtained through machine learning, Sym: arguments obtained via symbolic execution. In the Sequence Generation column, Ran: randomly generated transaction sequences, Dyn: sequences generated through analyzing dynamic data flow, Static: sequences generated by analyzing static data flow, ML: sequences generated through machine learning. In the Seed Mutation column, Args: mutating transaction arguments, Env: mutating environment properties, Seq: mutating transaction sequences. In the Seed Scheduling column, Cov: fitness by coverage, Dis: fitness by distance, Bug: fitness by vulnerability. The SE column indicates whether a standalone EVM is used.

the search space by excluding benign functions. RLF categorizes contract functions into clusters based on the functionalities they provide, simplifying the function space. To generate meaningful transaction sequences, it applies deep Q-learning [30], leveraging both previous experiences and the current contract state to determine the appropriate cluster from which the next function is randomly selected. Although these fuzzers may generate effective sequences, their results can be non-deterministic due to the challenges of generalizing to unseen contracts [40].

• Fitness by Coverage. Code coverage has long been a cornerstone metric in traditional fuzzing, and many smart contract fuzzers [40, 66, 74, 76, 88, 93, 95, 96, 102] have adopted it to prioritize seeds that can potentially uncover new code paths. Typically, these fuzzers calculate code coverage based on executed instructions and covered basic blocks, while some [40, 91] adopt a more fine-grained approach by calculating the covered branches. Covered branches provide a deeper understanding of a contract's behavior by examining its execution paths [17]. In addition to code coverage,
Smartian [40] also incorporates data-flow coverage. This metric evaluates the dynamic data flows between state variables during the fuzzing process, providing insight into how effectively a seed exercises the contract's state variables. ityfuzz [87] tracks the values of state variables, identifying them as "interesting" if they are written with a previously unseen value. Both Smartian and ityfuzz adopt these strategies as they deem a smart contract's unique states vital for future exploration.

Moreover, some fuzzers incorporate data-flow analysis to predict feasible transaction sequences. For instance, ETHPLOIT employs static taint analysis. It labels the read state variables and function arguments as taint sources, and the written state variables and external calls as taint sinks. When generating transaction sequences, it selects candidate functions that can extend the taint propagation. This approach allows it to generate seeds that reflect the data flow of real-world attack scenarios potentially exploitable by attackers. Similarly, IR-Fuzz [67] and Beak examine read-after-write (RAW) dependencies of global variables between functions through static analysis, while ConFuzzius tracks dynamic data flow on state variables to identify RAW dependencies for arranging transaction orders. Besides RAW dependencies, Smartian uncovers use-def relationships of state variables using static analysis, helping explore relevant execution paths in contracts.

• Fitness by Distance. Distance is also a popular fitness metric, which helps identify seeds that are more likely to reach unexplored areas. Branch distance [53, 67, 74, 75, 87, 99, 104] and code distance [104] are the two commonly used distance metrics in smart contract fuzzing. Branch distance evaluates how close a seed is to satisfying a missed branch by assessing the proximity of the seed to the branch condition. For example, in the code below, if the value of msg.value in a seed is 5, its branch distance for the then-branch is 95 (100 - 5).
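The branch-distance computation just described can be sketched as follows. This is a minimal Python illustration of the standard formulation for a `>=` comparison; pick_seed is a hypothetical helper, not a specific fuzzer's scheduler.

```python
def branch_distance(value, threshold):
    """Distance of a seed to satisfying `value >= threshold` (0 if satisfied).

    Real fuzzers normalize this quantity and define analogous formulas
    for the other comparison operators.
    """
    return max(threshold - value, 0)

def pick_seed(seeds, threshold):
    """Prefer the seed closest to satisfying the missed branch."""
    return min(seeds, key=lambda v: branch_distance(v, threshold))

# For `if (msg.value >= 100)`: msg.value = 5 has distance 95, while
# msg.value = 99 has distance 1, so the latter is scheduled first.
print(branch_distance(5, 100))   # 95
print(branch_distance(99, 100))  # 1
print(pick_seed([5, 99], 100))   # 99
```

Plain code coverage would treat both seeds identically, since neither reaches the then-branch; the distance makes the nearly-satisfying seed preferable.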
3.2 Seed Scheduling
Effective seed scheduling is essential for smart contract fuzzing, as it determines which inputs will be used to guide the exploration of the contract's state space. To assess the quality of seeds, fitness metrics such as code coverage are employed. Seeds with higher fitness values are deemed more valuable, warranting a higher allocation of the fuzzing budget. Next, we will delve into the different fitness metrics employed by smart contract fuzzers to optimize seed scheduling.

1  function test() public {
2      if (msg.value >= 100) { /**then*/ }}

This example highlights how branch distance provides more fine-grained feedback. When using code coverage, the seeds with msg.value = 99 and msg.value = 5 are considered equivalent. However, with branch distance, fuzzers prioritize the seed with msg.value = 99, as it is more likely to satisfy the condition after mutation. Instead of branch distance, Beak [104] evaluates seeds using code distance, which calculates the distance of a seed to uncovered code in the control-flow graph. Hence, a shorter code distance indicates that new paths can be reached more easily through mutation.

Once distance metrics are obtained, fuzzers need to select the next seed for mutation based on distance fitness. Some fuzzers like IR-Fuzz [67] prioritize seeds by taking the minimum distances, while others use more sophisticated methods that consider different objectives or optimization algorithms. For example, SynTest-Solidity [75] employs a many-objective optimization algorithm, DynaMOSA [77], to promote inputs that get closer to the yet-uncovered branches and lines. Similarly, sFuzz [74] combines two complementary strategies: keeping seeds that increase code coverage and selecting the best seed for each just-missed branch with the closest distance. Beak [104] sorts seeds by their distances and allocates energy using a simulated annealing algorithm [84].

• Fitness by Vulnerability. RLF [88] prioritizes seeds based on their potential to uncover vulnerabilities. It counts the number of detected vulnerabilities to prioritize seeds that contain vulnerable function-call sequences. However, relying solely on vulnerability fitness might lead to local optima traps due to the sparsity of bugs. Therefore, RLF combines vulnerability fitness with code coverage when scheduling seeds.

• Other Fitness. Smart contract fuzzers have adopted various other fitness metrics beyond those previously discussed. For example, Harvey [99] employs a Markov Chain-based scheduling strategy, which assigns more fuzzing budget to seeds that traverse rare paths. The rationale is that rare paths require more resources because they are harder to reach than easily explored paths. In ConFuzzius, the number of storage writes performed by a seed is also taken into account when calculating its fitness. The intuition is that, in the subsequent crossover, a seed with more write operations may exhibit a higher probability of forming useful transaction sequences (i.e., sequences with RAW dependencies).

3.3 Seed Mutation
Seed mutation also plays a crucial role in fuzzing, enabling the generation of extensive test inputs by modifying existing seeds randomly or heuristically. Smart contract fuzzers can perform seed mutation at three levels: the transaction argument level, the environment properties level, and the transaction sequence level.

• Transaction argument mutation. This level of seed mutation focuses on modifying the arguments of each transaction (i.e., of the invoked function). Generation-based fuzzers [2, 48, 55, 65, 85, 89, 109] have no seed mutation process as they do not use feedback to improve their seeds. Some fuzzers [22, 74, 76, 93, 95, 108] utilize mutation operators found in general fuzzers (e.g., AFL), such as bit flips and additions, to generate new test inputs (for variable-length arguments, these fuzzers mutate them by pruning or padding bits). However, many fuzzers mutate all arguments in each mutation, which may lose previously satisfied conditions [68, 81, 101]. For example, the function below has multiple conditional statements.

1  function test(int256 a, int256 b, int256 c) public {
2      if (a < 5) {
3          require(b > 10);
4          /**bug*/ }}

If the current input is (4, 5, 0), the execution gets stuck at Line 3. Mutating all three arguments (e.g., to (11, 22, 0)) causes previously satisfied conditions (e.g., Line 2) to become unsatisfied, preventing the fuzzer from exploring deeper branches. Moreover, mutating all arguments at once makes it hard to identify which argument change led to a new path. To address this issue, Harvey [99] chooses to mutate only a single argument at a time. Similarly, Targy [53] and effuzz [54] use taint analysis to find the arguments that are relevant to the target conditional branch and mutate only those arguments (e.g., argument b in the example).

Traditional mutation operators lack insight into the input's semantics, resulting in syntactically correct but semantically invalid mutations [80, 94]. Hence, some fuzzers [2, 93, 103] opportunistically replace some arguments with values from a mutation pool. The pool contains interesting or effective inputs that have been discovered during the fuzzing process, which can be reused to enhance the effectiveness of fuzzing.

• Environment properties mutation. The execution of a smart contract can also be affected by environmental properties. For example, in Line 9 of Figure 1, the bug path can only be exercised when the timestamp is greater than 30. Therefore, many fuzzers mutate environmental properties to trigger more contract behaviors. The sender address is commonly mutated by fuzzers, as contract usage rights can vary according to account permissions. Smartian [40] employs bit-flipping to mutate the sender address. Most fuzzers [65, 74, 83] select the sender address from a predefined account set, including roles like creator, administrator, user, and attacker. Gas limits and call return values are also targeted for mutation by some fuzzers [91, 93, 103]. Modifying gas limits enables the exploration of out-of-gas exception-related behaviors, while changing contract call return values simulates diverse outcomes of the called contracts, which helps identify unhandled exceptions within the contract under test. Overall, ConFuzzius [91] stands out by mutating a more comprehensive set of environmental properties compared to the others, including the sender address, amounts, gas limits, timestamps, block numbers, the size of external code, and contract call return values. Similar to transaction argument mutation, the mutation of environment properties is mainly achieved by replacing them with a random value or reusing a previously observed value. Some tools, such as sFuzz, treat them as byte streams and modify the individual bits.

• Transaction sequence mutation. This mutation generates more diverse transaction sequences to change the contract's states, potentially unlocking branches guarded by them. Smartian defines three mutation operations: adding a random function, pruning a function, and swapping two functions. In addition, fuzzers like [53, 74, 75] also employ a crossover operation to evolve transaction sequences. Crossover merges parts of two existing inputs to produce a new input, allowing the new input to inherit "good genes".

Beak and ConFuzzius append transactions that read from a specific storage slot to transactions that write to the same slot. This approach preserves read-after-write (RAW) dependencies and generates more reliable sequences. Similarly, ContraMaster switches the order of two functions if they operate on the same state variable, potentially satisfying RAW dependencies. Harvey proposes a unique approach. It first fuzzes the persistent states to find the functions whose coverage is affected by them. Then, it prepends transactions that can modify the persistent state to those functions. By focusing on the functions that are most sensitive to changes of the contract's states, Harvey enables a more thorough exploration. Pivoting from these methods, ityfuzz notes the high cost of re-executing the previous transactions (in a seed) to build up previous states. To address this, it directly snapshots the state and then explores it using random transactions, thus enhancing the efficiency.

3.4 Engineering Optimization

stack, and the program counter. This collected data is used to check the relevant rules. Echidna and Harvey require developers to write the oracles into the contract's source code. Therefore, their detection capability largely relies on the quality of the oracles written by developers.
• General-purpose oracle. General-purpose oracles rely on high-level invariants to check the correctness of contract operations. These invariants are properties that hold true for all valid executions of the contract. For example, ContraMaster [93] establishes
invariants over the contract balance and the sum of the user balances to check the correctness of transfer-related operations. If a transaction that transfers ether fails but the exception is not handled, an inconsistency will emerge between the recorded amount and the actual amount in the contract, thus detecting the exception disorder bug. However, Zhang et al. [105] point out that many modern DeFi projects use complex business models that go beyond the invariants proposed in ContraMaster. For instance, many lending projects typically involve multiple assets, and the total balance of a single asset can be volatile.

Smart contract fuzzers have made various engineering efforts to improve overall fuzzing performance. One approach is contract transformation, where fuzzers [54, 66, 76, 96] transform the smart contract into native C++/Go code. This offers several optimizations. First, the C++ compiler inlines opcode handlers in the produced native code, eliminating the interpreter loop and minimizing call overhead. Second, the transformation reduces EVM stack operations, and it reduces the overhead caused by bounds and stack-overflow checking. Furthermore, by converting the contracts to C++, fuzzers can leverage the mature ecosystem of fuzzing and program analysis tools. This not only saves engineering effort but also allows for the reuse of these well-refined and efficient tools (e.g., AFL, LibFuzzer [4]). However, it is important to note that, except for EF\CF [83], which pairs the translated C++ program with a C++ variant of the EVM, these fuzzers do not use the EVM runtime as the execution environment. Consequently, they may not accurately reflect contract behavior, potentially leading to FPs or FNs.

Moreover, the majority of fuzzers [40, 74, 91] simulate only the necessary components of the blockchain environment that are relevant to running smart contracts. This choice reduces the cost of expensive operations, such as mining new blocks and validating transactions. Taking it a step further, some fuzzers [83, 87] adopt standalone EVMs, which focus solely on executing contract code and do not need to handle other tasks. Another optimization is the shift towards offline program analysis or vulnerability detection, e.g., sFuzz [74], which conducts vulnerability detection offline once every 500 test cases. Finally, some fuzzers employ more efficient programming languages like C/C++ to improve their execution speed [54, 74, 76, 102].

Table 2: Common smart contract vulnerabilities.
Dangerous Delegatecall (DD): The contract uses delegatecall() to execute untrusted code.
Block State Dependency (BD): The contract uses block states (e.g., timestamp, number) to decide a critical operation (e.g., an ether transfer).
Freezing Ether (FE): The contract has no function for sending Ether, or it allows unauthorized use of contract self-destruction.
Ether Leak (EL): The contract allows arbitrary users to retrieve ether from the contract.
Gasless Send (GS): The contract mishandles out-of-gas exceptions when transferring ether; the attacker may keep the untransferred assets.
Unhandled Exception (UE): The contract doesn't check for exceptions after calling external functions.
Reentrancy (RE): The contract doesn't update states (e.g., balances) before making an external call; a malicious callee re-enters it, leading to a race condition on the state.
Suicidal (SC): The contract can be destroyed by an arbitrary user through the selfdestruct interface because of missing access controls.
Integer Bug (IB): An integer operation exceeds the integer range; it can be harmful when modifying the contract's state variables.

4 VULNERABILITY DETECTION

4.2 Vulnerability Types
Our study targets contract-layer vulnerabilities, which are the focus of most fuzzers [40, 55]. In this context, we identify nine common vulnerability types, which represent the range of vulnerabilities that the current fuzzers (in Table 1) can detect. Table 2 illustrates each of them.

4.1 Test Oracles
Test oracles are critical components in detecting vulnerabilities during smart contract fuzzing. As smart contracts do not crash in case of vulnerabilities, and many vulnerabilities are related to business logic, their detection can be challenging. Existing work relies on test oracles to detect smart contract vulnerabilities. There are two main types of test oracles used in smart contract fuzzing: rule-based oracles and general-purpose oracles.
• Rule-based oracle. Rule-based oracles are widely used by smart contract fuzzers [40, 55, 75, 89, 91, 93, 102, 103]. These oracles use pre-defined rules and patterns to analyze the execution trace of a contract (e.g., opcode sequences) to identify potential vulnerabilities. For example, to detect "Integer Overflow", rule-based oracles analyze the trace to look for a data flow from the result of an arithmetic overflow to the CALL instruction.
To implement rule-based oracles, most fuzzers hook into the EVM to log the necessary information, like the executed opcodes, the values of the

5 EVALUATION
In this section, we present a thorough evaluation of state-of-the-art smart contract fuzzers. Given the distinct design of each fuzzer, it is challenging to evaluate their respective techniques and components in a fine-grained manner. Therefore, we turn to widely accepted metrics to assess their performance across key dimensions. To provide a comprehensive assessment, we first study the metrics adopted by previous traditional fuzzing [56, 62] and smart contract fuzzing studies [91], and then design five performance metrics that are tailored to smart contract fuzzers, ensuring our metrics encompass all dimensions assessed in existing studies. The five
analyzethetracetolookforthedataflowfromtheresultofan performancemetricsweproposeforevaluationareasfollows: arithmeticoverflowtotheCALLinstruction. • Throughput:measuresthenumberoftransactionsafuzzercan Toimplementrule-basedoracles,mostfuzzershookintoEVM generateandexecutepersecond,reflectingitsspeedandeffi- tolognecessaryinformation,likeexecutedopcodes,valuesofthe ciencyingeneratingtestscases.AreWeThereYet?UnravelingtheState-of-the-ArtSmartContractFuzzers ICSE’24,April14–20,2024,Lisbon,Portugal Table3:Thenumberofvulnerabilitiesdetectedbyfuzzers. Flaw ContractFuzzer ILF RLF ConFuzzius sFuzz xFuzz Smartian SmartGift DD(29) TP:8,FP:8 TP:18,FP:1 TP:20,FP:1 TP:22,FP:0 TP:25,FP:2 TP:25,FP:2 - TP:6,FP:8 BD(317) TP:39,FP:28 TP:81,FP:45 TP:93,FP:57 TP:175,FP:27 TP:223,FP:32 TP:184,FP:27 TP:102,FP:9 TP:38,FP:31 FE(80) TP:12,FP:4 TP:48,FP:0 TP:47,FP:0 TP:48,FP:0 TP:2,FP:20 TP:1,FP:14 - TP:12,FP:4 EL(43) - TP:32,FP:67 TP:38,FP:79 TP:37,FP:56 - - TP:26,FP:84 GS(122) TP:17,FP:5 - - - TP:48,FP:227 TP:55,FP:247 - TP:13,FP:4 UE(188) TP:14,FP:10 TP:13,FP:3 TP:13,FP:3 TP:76,FP:27 TP:41,FP:19 TP:38,FP:19 TP:76,FP:23 TP:13,FP:6 RE(121) TP:6,FP:11 TP:48,FP:9 TP:54,FP:11 TP:42,FP:21 TP:8,FP:8 TP:8,FP:7 TP:12,FP:3 TP:6,FP:9 SC(22) - TP:18,FP:20 TP:20,FP:22 TP:10,FP:4 - - TP:8,FP:0 - IB(581) - - - TP:566,FP:169 TP:129,FP:106 TP:112,FP:80 TP:413,FP:182 - InFlawcolumn,thenumber(e.g.,29)indicatesthecountofcontractswitheachvulnerabilitytypeinourbenchmark. -:suchbugoracleisnotsupportedbycorrespondingfuzzers.TP:truepositives,FP:falsepositives. • DetectedBugs:measurestheaveragenumberofvulnerabili- Bug",butwasfoundtohavea"BlockStateDependency"bugin tiesdetectedbythefuzzers,reflectingtheirabilitytoidentify Line76.Thisissueaffectsnotonlytheaccuracyofourevaluation vulnerabilitiesinthesmartcontract. 
• Effectiveness: measures the speed of fuzzers in finding bugs, indicating how quickly a fuzzer can detect vulnerabilities.
• Coverage: measures the contract's code executed during fuzzing, indicating its thoroughness in exploring the contract.
• Overhead: measures the system resources consumed by the fuzzer during fuzzing, indicating its resource efficiency. This metric is instructive when users have limited resources.

5.1 Experiment Setup
Tool Selection. We select the tools in Table 1 based on the availability of their source code. Five open-source tools are excluded from our selection. Echidna is excluded due to its property-based nature, which requires testers to manually write property tests within the contracts, demanding a good understanding of contract logic. Additionally, we are unable to execute four tools in our environment. SoliAudit [65] does not offer instructions for execution. IR-fuzz cannot be installed using the provided script. SynTest-S [75] and ContraMaster [93] have runtime exceptions. Notably, the same exception persists for SynTest-S even after running the Docker image provided by the authors. Despite our efforts to report these exceptions to the authors via email, we have unfortunately not received their responses. Therefore, our experiments focus on the remaining 11 fuzzers (see the tools in Figure 3).
Environments. All experiments were executed on a server equipped with an AMD EPYC 7H12 CPU (64 cores and 128 threads) running at 2.60GHz and 512GB of memory, operating on Ubuntu 16.04 LTS. To ensure a controlled and reproducible environment, each fuzzer was deployed and executed within an individual Docker container. The containers were allocated 8 dedicated CPU cores, 16GB RAM, and 2GB of swap space.
Benchmark. We assembled a benchmark of 2,000 smart contracts from previous research datasets [40, 44, 55, 82, 91] and third-party repositories [43], ensuring a diverse and representative sample. The average number of lines of code was 224, while the average number of functions was 22. To achieve a reliable and impartial evaluation, each fuzzer was subjected to 20 repetitions on the benchmark. Each smart contract was subjected to 30 minutes of testing by each fuzzer.
While building our benchmark, we observed that some contracts in previous datasets were incorrectly labeled. For example, the buggy_41 contract in the SolidiFI dataset [6] was labeled as "Integer Bug", but was found to have a "Block State Dependency" bug in Line 76. This issue affects not only the accuracy of our evaluation but also the validity of our conclusions. To ensure a precise evaluation, it was necessary to relabel the contracts in our benchmark. First, we preserved their original labels. Then, we applied the 11 fuzzers to the benchmark. We performed manual inspection and relabeled two groups of contracts: those reported with a vulnerability different from their original label by two or more fuzzers, and those whose originally labeled vulnerability type was not identified by any of the fuzzers in our experiment. This approach was informed by the work of Durieux et al. [41], which suggests that aggregating the results of multiple analysis tools can produce more accurate outcomes.
We recruited three volunteers to manually inspect the contracts requiring re-labeling. Each volunteer inspected the contracts independently and documented their findings. In cases where inconsistencies arose in the volunteers' results, they engaged in a discussion to reconcile their findings and reach a consensus. This collaborative approach ensured that the relabeled benchmark accurately reflected the ground truth.

5.2 Throughput
We first evaluate the throughput of the fuzzers by measuring the average number of transactions executed per second, as shown in Figure 3. ContractFuzzer and SmartGift exhibit low throughput, generating and executing only 0.09 and 0.07 transactions per second, respectively. This is due to their simulation of the entire blockchain network, a process that continuously appends newly mined blocks (including verification, execution of each transaction, etc.). SmartGift uses an NLP model for transaction argument generation, which further lowers its throughput. ETHPLOIT also shows limited throughput, primarily due to its reliance on runtime program analysis, inducing heavy overhead. RLF, which is built upon ILF, is more efficient in test generation since it produces transaction arguments randomly, while ILF uses complex neural networks to generate the arguments. sFuzz and xFuzz can generate and execute tests at a more rapid pace, achieved through offline vulnerability detection (i.e., in batches, once every 500 transactions) and their avoidance of program analysis to assist fuzzing. Additionally, their integration of the well-optimized traditional fuzzer AFL may also help. Both EF\CF and ityfuzz demonstrate impressive throughput. This is largely due to their adoption of standalone EVMs (e.g., revm [15]), which provide a lightweight and simplified execution environment. Furthermore, they are coded in more efficient languages such as Rust and C++, and their superior code quality may also contribute to this performance.

Figure 3: Fuzzers' throughput (y-axis: number of transactions).
Shuohan Wu, Zihao Li, Luyi Yan, Weimin Chen, Muhui Jiang, Chenxu Wang, Xiapu Luo, and Hao Zhou

Finding: This experiment shows that using standalone EVMs can significantly improve the fuzzers' throughput.
5.3 Vulnerabilities Detection Comparison
Table 3 displays the average number of vulnerabilities detected by each tool, with each tool running for 30 minutes per contract, over 20 repetitions. In this experiment, we exclude three tools: ETHPLOIT, ityfuzz, and EF\CF. ETHPLOIT offers only vague oracles, such as "Bad Access Control", which cover several vulnerability types in Table 2. Ityfuzz requires testers to manually write test oracles, while EF\CF only has a general oracle that checks whether the sender's balance has increased, without specifying the exact vulnerability type. We compare the output of each tool against the labeled dataset (§5.1) to measure their performance. From Table 3, we observe that most fuzzers generate both high false positives and high false negatives, indicating that the state-of-the-art fuzzers are far from satisfactory in vulnerability detection.
False negatives: False negatives occur for several reasons. Firstly, some vulnerable paths remain unexplored, protected by hard-to-satisfy branch constraints. Second, many fuzzers' test oracles are problematic (see Table 4). For instance, when detecting "Freezing Ether," ConFuzzius doesn't filter out CALL instructions introduced by swarm source contract bytecode [10], causing it to miss vulnerabilities in contracts that have no ether transfer function. Some fuzzers' oracles are overly strict and fail to cover all vulnerable contract behaviors. One example is Smartian, which only reports "Freezing Ether" if a contract has no ether transfer instructions but contains a delegatecall that can destroy it. Moreover, we find that some contracts cannot be executed by the fuzzers due to various issues: 1) coding errors in the tools (e.g., xFuzz), 2) compatibility problems with outdated Truffle (which is used to deploy contracts), 3) outdated EVM versions that don't support new Solidity instructions, and 4) outdated Solc compilers unable to process contracts written in newer Solidity versions.
False positives: The main cause of false positives is problematic test oracles (see Table 4). Some test oracles are too broad, capturing non-vulnerable cases. For instance, when detecting "Reentrancy", ILF and RLF only examine the existence of an external call followed by a storage operation in the traces, neglecting the data dependency between them. "Integer Under/Overflow" is often falsely reported by most fuzzers, as their oracles only syntactically check for data flow between arithmetic operations and store instructions, ignoring the actual effects on the contracts (as there can be some arithmetic operations introduced by the masking operations from the compiler or the compiler's optimizations). Furthermore, all these fuzzers fail to consider the context of contracts (i.e., their actual semantics), such as the access control mechanisms already in place. This may produce false positives, since they might flag a behavior as vulnerable even if it is not exploitable in practice. Additionally, the lack of context prevents fuzzers from understanding contracts' higher-level intentions or purpose. Consequently, they struggle to differentiate between the contract's intended behavior and actual malicious behavior. For example, fuzzers may incorrectly label lottery contracts [23] as "EtherLeak".

Finding: Existing contract fuzzers are far from satisfactory in terms of vulnerability detection. Their adoption of overly generalized or overly specific test oracles contributes to high false positives and negatives.

Call to action: Refine test oracles. To improve vulnerability detection, fuzzers should implement precise, comprehensive rules in oracles, rooted in the nature of vulnerabilities. This involves discerning vulnerability indicators and impacts. The incorporation of data flow analysis and machine learning (trained on vulnerability datasets) can further enhance oracles. Furthermore, ongoing effort is needed to expand the understanding of vulnerabilities, uncover new attack vectors, and advance test oracles.

5.4 Speed of Vulnerability Detection
We evaluate the average time taken by each tool to detect a vulnerability (true positive). Figure 4 (boxplot on the left Y-axis) presents the results: most fuzzers can detect a true vulnerability within 1 second. ConFuzzius, with its smaller upper outliers, benefits from symbolic taint analysis and data dependency analysis, expediting the discovery of vulnerable paths in certain cases.
To fairly compare the fuzzing strategies of different fuzzers, irrespective of their throughput, we compute the number of transactions each fuzzer requires to find a vulnerability. This is achieved by multiplying the median time from the boxplot in Figure 4 by the throughput in Figure 3. We use the median rather than the average, as it is typically more representative, reducing the impact of extreme outliers. The result is shown by the line graph in Figure 4 (on the right Y-axis). The last three, sFuzz, xFuzz, and Smartian, require more transactions, because sFuzz and xFuzz batch the vulnerability detection (roughly every 500 transactions; this count may vary due to the sensitive instructions present). Similarly, Smartian also experiences a detection delay. For example, in the case of "Reentrancy", its oracle only triggers a check for CALL instructions in previous transactions after the SSTORE (in the current transaction) is executed. Among the first five fuzzers, ContractFuzzer and SmartGift, which require notably longer times to detect a vulnerability, maintain transaction counts for actual detection comparable to the other three. This suggests that the speed of vulnerability detection is largely dependent on fuzzers' throughput.

Finding: In our experiment, the speed of vulnerability detection is largely influenced by fuzzers' throughput.

Call to action: Improve throughput. To speed up vulnerability detection, fuzzers should take action to increase throughput. This could be achieved by adopting lightweight, standalone EVM frameworks, conducting code optimizations, using efficient languages such as C, and incorporating optimized libraries like libAFL.
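The throughput-normalized comparison described in §5.4 can be sketched as follows. This is illustrative Python, and the sample numbers are hypothetical rather than values read from Figures 3 and 4.

```python
# Sketch of the throughput-normalized metric from Section 5.4:
# transactions needed to find a bug =
#     median detection time (s) * throughput (transactions/s).
from statistics import median

def transactions_to_bug(detection_times, tx_per_second):
    """Median time-to-detection scaled by throughput.

    The median is used instead of the mean to reduce the impact of
    extreme outliers, matching the methodology in the paper.
    """
    return median(detection_times) * tx_per_second

# Hypothetical fuzzer: detection times in seconds over several runs,
# executing 100 transactions per second.
times = [0.4, 0.5, 0.6, 8.0, 0.5]   # one extreme outlier (8.0 s)
print(transactions_to_bug(times, 100.0))  # median 0.5 s * 100 tx/s = 50.0
```

Had the mean been used instead, the single 8-second outlier would have quadrupled the score, which is exactly the distortion the median avoids.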
Table 4: Problematic test oracles in smart contract fuzzers.
Oracle | Issue Description | Fuzzers | Fault
RE | Only check for the presence of an external call followed by a storage operation, ignoring data dependency. | ILF, RLF | FP
RE | Only validate whether a method can be re-entered but ignore the storage access. | ContractFuzzer, SmartGift, sFuzz, xFuzz | FP
RE | Taint analysis over-approximates the possible execution paths. | ConFuzzius | FP
RE | Check for the use of any state variables in cyclic calls, regardless of whether there are any write operations performed on them. | Smartian | FP
BD | Only check for ether transfer and block-state-related instructions (e.g., TIMESTAMP), ignoring the dependency between them. | ContractFuzzer, SmartGift, sFuzz, xFuzz | FP
BD | Consider all block-related information that flows into condition statements as vulnerable. | ConFuzzius | FP
BD | Fail to consider indirect taint flow propagation; only look for direct taint flow. | ILF, RLF | FN
BD | Fail to check whether the block-related information affects the conditional branch. | Smartian | FN
GS | Roughly classify all external calls with a 2300 gas limit as vulnerable, without checking the occurrence of an out-of-gas exception. | sFuzz, xFuzz | FP
GS | Fail to recognize that transfer() automatically reverts the program state when there is not enough gas. | ContractFuzzer, SmartGift | FP
DD | Do not check whether the delegatecall parameters can be controlled by user input. | ContractFuzzer, SmartGift | FP
IB | Only check the flow of arithmetic operations, ignoring their actual effects on the contract. | Smartian, ConFuzzius, sFuzz, xFuzz | FP
IB | Fail to consider the MUL instruction. | sFuzz, xFuzz | FN
EL | Fail to consider the case where the ether transfer can be reverted when the account has no balance. | ConFuzzius | FP
UE | Only check for 'INVALID' in the execution trace, without considering exceptions thrown with a REVERT instruction. | sFuzz, xFuzz | FN
UE | Only consider the case where the exception is handled immediately. | ILF, RLF | FP
UE | Z3 [18] inaccurately solves tainted stack items, which symbolically represent exception-related variables. | ConFuzzius | FP
UE | Roughly check for all unused return values, even though not every return value indicates a failed call. | Smartian | FP
FE | Fail to consider cases where a contract can receive ether but has no functions to send ether. | sFuzz, xFuzz | FN
FE | Fail to consider cases where a contract with a self-destruct operation can lock the ether. | ILF, RLF | FN
FE | Fail to filter out CALL instructions introduced by the swarm source of the bytecode, which are wrongly treated as ether transfers. | ConFuzzius | FN
SC | Incorrectly treat the owner account as an attacker account. | ILF, RLF | FP

Figure 4: Time to find vulnerability (left y-axis: time to detect a bug; right y-axis: number of transactions to detect a bug).

5.5 Code Coverage
We next evaluate the code coverage achieved by the 11 contract fuzzers. To ensure a consistent comparison, we instrument the EVM uniformly to measure instruction coverage. Figure 5 presents the results. Notably, ityfuzz, Smartian, ILF, and ConFuzzius achieve higher code coverage compared to the other tools. This is largely due to their use of coverage-guided approaches in test case generation. In addition, they all mutate the environmental properties during seed mutation, which crucially simulates the diverse conditions that trigger different contract behaviors. ILF shows the fastest coverage increase in the initial stages, albeit with a limited increment in later stages. This is likely due to its high-quality initial seed: it pre-trains a model from a coverage-guided symbolic execution expert for generating transaction sequences and arguments. ityfuzz also exhibits rapid initial coverage growth, due to its impressive throughput. EF\CF starts with a slow increase in coverage, as it begins with a colorization stage [21]. In contrast, ContractFuzzer and SmartGift exhibit the lowest coverage due to two main factors. First, their throughput is markedly slow (§5.2). Second, as black-box fuzzers, they do not use feedback to guide test generation. Although SmartGift utilizes practical inputs from similar functions in the real world, its ML approach suffers from generalization issues (i.e., many test functions are not well represented in the training set).
The code coverage of some fuzzers is lowered by their focus only on potentially vulnerable functions during test generation. For example, sFuzz, ETHPLOIT, and RLF exclude 'View' and 'Pure' functions in contracts, which do not modify state variables. The rationale behind this decision is that these functions are unlikely to be exploited by attackers, since they have no impact on the contract states [59]. Similarly, xFuzz filters out benign functions using an ML model to reduce the search space of function call combinations.
In some cases, all these fuzzers struggle to achieve effective coverage. Hard constraints in complex contracts, like those relying on a 'magic' number, are difficult to satisfy. Even fuzzers incorporating symbolic execution cannot solve these constraints, due to path explosion. Additionally, reaching contract states that require long transaction sequences remains challenging. While some fuzzers analyze the data flow between transactions (e.g., RAW) to arrange their sequences, they have difficulty determining transaction dependencies due to potential circular dependencies between state variables.

Finding: The quality of initial seeds and fuzzers' throughput are pivotal for quickly reaching high coverage.

Call to action:
• Enhance initial seeds. To enhance coverage effectiveness, fuzzers should use techniques such as program analysis and machine learning to build high-quality initial seeds, which, in turn, generate more effective and meaningful test inputs.
• Optimize mutation scheduling. Currently, all existing fuzzers randomly choose mutations from a list of mutational operators, which may waste fuzzing iterations on ineffective mutations. Recent research in traditional fuzzing [52, 98] highlights the pivotal role mutation scheduling plays in enhancing coverage effectiveness. Therefore, future fuzzers should consider adopting advanced methods such as genetic algorithms or differential evolution algorithms to improve mutation scheduling.

5.6 Overhead
We next assess the overhead of each fuzzer. We use the "docker stats" command to monitor the resource usage of the Docker containers.
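A minimal sketch of this measurement loop in Python is shown below; the container name and the helper itself are hypothetical, and the parsing assumes the standard docker stats Go-template fields (.Name, .CPUPerc, .MemUsage).

```python
# Sketch: sample per-container CPU and memory usage via `docker stats`,
# as done for the Overhead metric. The container names are hypothetical.
import subprocess

def parse_stats_line(line):
    """Parse one line of `docker stats --format '{{.Name}} {{.CPUPerc}} {{.MemUsage}}'`."""
    name, cpu, mem = line.split(maxsplit=2)
    cpu_percent = float(cpu.rstrip("%"))
    mem_used = mem.split("/")[0].strip()   # "5.83GiB / 16GiB" -> "5.83GiB"
    return name, cpu_percent, mem_used

def sample_stats():
    # One non-streaming snapshot of all running containers.
    out = subprocess.run(
        ["docker", "stats", "--no-stream", "--format",
         "{{.Name}} {{.CPUPerc}} {{.MemUsage}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [parse_stats_line(l) for l in out.splitlines() if l.strip()]

# Parsing example with a canned line (no Docker daemon needed):
print(parse_stats_line("ilf-run1 312.45% 5.83GiB / 16GiB"))
```

Repeating such snapshots at a fixed interval and averaging gives per-fuzzer CPU and memory curves like those in Figure 6.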
Figure 5: Instruction coverage over time (legend: ConFuzzius, sFuzz, Smartian, SmartGift, EF/CF, ILF, xFuzz, ContractFuzzer, ETHPLOIT, ityfuzz, RLF; x-axis: time in seconds).

Figure 6: Overhead. (a) CPU consumption. (b) Memory consumption.

Figure 6 reports the results. Overall, ConFuzzius exhibits the lowest overhead, as it uses PyEVM, which is a lightweight implementation of the EVM. SmartGift and ContractFuzzer exhibit high CPU usage, primarily because they simulate the entire blockchain network, which can be very CPU-intensive. ILF employs complex neural network models for both transaction argument and sequence generation, which require significant computational resources. Since the neural network used in ILF is more complex than the DNN used in RLF, more processing power is needed. The use of neural networks also results in both RLF and ILF consuming more memory, with an average memory consumption of 5.83GB. This is due to the large number of parameters involved in neural networks that need to be stored in memory.

Conclusion: ConFuzzius stands out in most key metrics. It identifies the most vulnerabilities and does so at the highest speed, achieving top-tier coverage and maintaining manageable overhead. However, there is still room for ConFuzzius to improve. For example, using a standalone EVM could boost its throughput, and refining its test oracles could elevate its detection accuracy.

6 INDUSTRIAL PERSPECTIVES
6.1 Fuzzing Tools Used by Auditing Companies
Despite recent research [32, 105] suggesting that academic fuzzers fall short in detecting real-world contract vulnerabilities, we find that the industry continues to rely on fuzzing as an important component of contract auditing [13]. To better understand the practical usage of existing smart contract fuzzers in industry, we explore their use by various auditing companies. Specifically, we collect public audit reports from their respective GitHub repositories. The companies considered are drawn from a list of notable blockchain security auditing companies [14]. By scanning these reports, we identify the fuzzers employed in their auditing processes. We also manually read the auditing methodology sections on their official websites to extend our knowledge of the employed fuzzers. The result is reported in Table 5. Echidna (a product of Trail of Bits [16]) is the most favored tool among these companies. Conversely, most of the fuzzers listed in Table 1 from academic research are not actively employed in real-world security audits. This highlights the gap between theoretical research advancements and the actual needs of security auditing in the industry.

Table 5: Fuzzing tools used by different auditing companies.
Echidna [2] | ImmuneBytes, Halborn, Trail of Bits, QuillAudits, Solidified, Pessimistic, ChainSafe, yAcademy, yAudit, Truscova, Zellic, Zokyo, Cyfrin, ABDK.
sFuzz [74] | ImmuneBytes.

6.2 Survey on Auditors
To further understand the practical needs for fuzzers, we conducted an anonymous online survey. We recruited 16 participants via our professional contacts within the community, who averaged 1.68 years of experience in smart contract review. Of these, 11 were solo auditors, with the rest working in three auditing companies. Our survey primarily questioned the participants about the fuzzers they employ during auditing, their reasons for these choices, and the essential factors they consider when using fuzzers. Due to space constraints, the complete survey questionnaire is accessible in our GitHub repository. The survey results echoed our previous findings: 14 participants use Echidna to aid their audits, two use Foundry [12], which supports fuzzing in its unit tests, and one also uses ityfuzz.
Regarding their preferences, all Echidna users praise its highly conducive support for custom oracles. As Echidna uses property-based testing [46], auditors can write invariants (constraints that should perpetually hold true, see §4.1) as customized oracles. This allows them to examine the business logic in their contracts, thereby making Echidna applicable to logic-related vulnerabilities. Upon reviewing the fuzzers listed in Table 1, it is evident that, besides Echidna, only ityfuzz offers substantial support for custom oracles. In contrast, the rest require developers to have a deep understanding of the source code and the ability to modify it. In addition, three participants underscore Echidna's straightforward integration into the audit workflow. This implies that auditors can use it with other tools in their toolkit to extract crucial information prior to and during the fuzzing campaign. For instance, it can be readily paired with Slither [42], a highly recognized static analysis tool for Solidity contracts (a product from the same company). One participant, a former Echidna user, shifted to ityFuzz due to its support for online fuzzing, a feature currently exclusive to it. This feature is important in detecting real-world vulnerabilities, given its capacity to test smart contracts in a realistic setting, compared to offline fuzzing, which starts from a blank contract state. This participant also notes another advantage of online fuzzing: the ability to detect cross-contract vulnerabilities, as it involves interacting with other deployed contracts, a complexity that offline fuzzing fails to mimic. As for the essential factors in selecting fuzzers, most auditors value ease of use, quality of documentation, oracle customization flexibility, and continuous code maintenance. Detailed results can be seen in Figure 7.

Figure 7: Essential factors in fuzzer selection by auditors; the x-axis represents the number of participants. (Factors surveyed: error report quality, code coverage, online fuzzing, active community maintenance, low false positives/negatives, replay for validation, customized oracles, ease of use, and documentation quality.)

Call to action:
• Promote custom oracle creation. Fuzzers should facilitate users in creating customized test oracles in a straightforward and flexible manner. This feature allows for the capture of complex business rules, broadening the fuzzer's applicability and leading to more accurate audits.
• Support online fuzzing. Fuzzers should include online fuzzing features for testing in real settings, facilitating cross-contract vulnerability detection and increasing the chances of discovering real exploits.
• Emphasize usability. Fuzzers should uphold the quality of their documentation and provide clear tutorials for quick user onboarding. Furthermore, an intuitive interface that provides comprehensive information is also crucial. It can aid users in monitoring the fuzzing process and better interpreting the results.
• Build a strong community. Establishing a strong community can create a valuable space for users to ask questions, share knowledge, and address encountered issues.
• Implement replay features. Fuzzers should incorporate exploit replay, as seen in ityFuzz, currently the only tool supporting replay. The replay feature can reproduce issues, eliminate false positives, and validate fixes after bug repairs.

7 THREATS TO VALIDITY
The main threat to external validity in this study is related to the benchmark. Firstly, the process of re-labeling contracts involves manual effort, which may introduce subjectivity and bias. Additionally, in the re-labeling process, some bugs in contracts may not be identified by all fuzzers and thus could be missed by our volunteers during manual inspection. Second, the benchmark we used is based on contracts collected from previous works, which may not reflect the diversity and complexity of real-world contracts. Therefore, the generalizability of our findings may be constrained. However, the results can help researchers gain a good understanding of the limitations of existing works and motivate researchers to develop more advanced tools. In our future work, we will expand our benchmark to include contracts used in production. Additionally, we will use more diverse sources to collect contracts, such as bug bounty programs and security audits.
Another threat to external validity is that, while different fuzzers adopt different techniques, we only evaluated their overall effectiveness, without evaluating the importance of their main components at a more fine-grained level. This could lead to a lack of understanding of how the unique features of each fuzzer contribute to their overall performance. In our future work, we will conduct ablation experiments to demonstrate the effects of the different unique features of the fuzzers.
Lastly, we recognize that our literature review may not be exhaustive. However, our study represents the most comprehensive investigation of smart contract fuzzers to date. Our future work will systematically compare newly released fuzzers on different blockchain platforms and contract languages.

8 CONCLUSION
In this paper, we conduct a thorough investigation of existing smart contract fuzzing techniques through a literature review and an empirical evaluation. We assess the usability of 11 state-of-the-art smart contract fuzzers using a carefully labeled benchmark and comprehensive performance metrics. Our evaluation identifies some notable issues and suggests possible directions for future fuzzers. We believe our research offers the community valuable insights, inspiring future breakthroughs in this field.

ACKNOWLEDGMENT
We thank the anonymous reviewers for their helpful comments. This research is partially supported by the Hong Kong RGC Projects (No. PolyU15219319, PolyU15222320, PolyU15224121) and the National Natural Science Foundation of China (No. 62272379).

REFERENCES
[1] 2022. American fuzzy lop. https://lcamtuf.coredump.cx/afl/
[2] 2022. Ethereum smart contract fuzzer. https://github.com/crytic/echidna
[3] 2022. Go Ethereum. https://geth.ethereum.org/
[4] 2022. libFuzzer – a library for coverage-guided fuzz testing. https://llvm.org/docs/LibFuzzer.html
[5] 2022. Real-World Use Cases for Smart Contracts and dApps. https://www.gemini.com/cryptopedia/smart-contract-examples-smart-contract-use-cases/
[6] 2022. SolidiFI Benchmark. https://github.com/DependableSystemsLab/SolidiFI-benchmark
[7] 2022. Solidity Programming Language. https://soliditylang.org/
[8] 2022. The DAO Attacked: Code Issue Leads to $60 Million Ether Theft. https://www.coindesk.com/markets/2016/06/17/the-dao-attacked-code-issue-leads-to-60-million-ether-theft/
[9] 2022. Welcome to Ethereum. https://ethereum.org/en/
[10] 2022. What is the cryptic part at the end of a solidity contract bytecode? https://ethereum.stackexchange.com/questions/23525/what-is-the-cryptic-part-at-the-end-of-a-solidity-contract-bytecode
[11] 2023. Ethereum Market Cap. https://ycharts.com/indicators/ethereum_market_cap
[12] 2023. Foundry is a blazing fast, portable and modular toolkit for Ethereum application development written in Rust. https://github.com/foundry-rs/foundry
[13] 2023. Fuzzit. https://consensys.io/diligence/fuzzing/
[14] 2023. A list of notable Blockchain Security audit companies and location of public audits. https://github.com/0xNazgul/Blockchain-Security-Audit-List
[15] 2023. Rust Ethereum virtual machine (revm) is EVM written in Rust that is focused on speed and simplicity. https://github.com/bluealloy/revm
[16] 2023. Trail of Bits. https://www.trailofbits.com/
[17] 2023. What Is Branch Coverage and What Does It Really Tell You? https://linearb.io/blog/what-is-branch-coverage/
[18] 2023. Z3 Theorem Prover. https://en.wikipedia.org/wiki/Z3_Theorem_Prover
[19] Tesnim Abdellatif and Kei-Léo Brousmiche. 2018. Formal verification of smart contracts based on users and blockchain behaviors models. In 2018 9th IFIP International Conference on New Technologies, Mobility and Security (NTMS). IEEE, 1–5.
[20] Kapil Anand, Matthew Smithson, Khaled Elwazeer, Aparna Kotha, Jim Gruen, Nathan Giles, and Rajeev Barua. 2013. A compiler-level intermediate representation based binary analysis and rewriting system. In Proceedings of the 8th ACM European Conference on Computer Systems.
[21] Cornelius Aschermann, Sergej Schumilo, Tim Blazytko, Robert Gawlik, and Thorsten Holz. 2019. REDQUEEN: Fuzzing with Input-to-State Correspondence. In NDSS, Vol. 19. 1–15.
[22] Imran Ashraf, Xiaoxue Ma, Bo Jiang, and Wing Kwong Chan. 2020. GasFuzzer: Fuzzing ethereum smart contract binaries to expose gas-oriented exception security vulnerabilities. IEEE Access 8 (2020), 99552–99564.
[23] Nicola Atzei, Massimo Bartoletti, and Tiziana Cimoli. 2017. A survey of attacks on ethereum smart contracts (SoK). In Principles of Security and Trust: 6th International Conference, POST 2017, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2017, Uppsala, Sweden, April 22–29, 2017, Proceedings 6. Springer, 164–186.
[24] Xiaomin Bai, Zijing Cheng, Zhangbo Duan, and Kai Hu. 2018. Formal modeling and verification of smart contracts. In Proceedings of the 2018 7th international conference on software and computer applications. 322–326.
[25] Roberto Baldoni, Emilio Coppa, Daniele Cono D'Elia, Camil Demetrescu, and Irene Finocchi. [n.d.]. A Survey of Symbolic Execution Techniques. ACM Comput. Surv. ([n.d.]).
[26] Sofia Bekrar, Chaouki Bekrar, Roland Groz, and Laurent Mounier. 2011. Finding software vulnerabilities by smart fuzzing. In 2011 Fourth IEEE International Conference on Software Testing, Verification and Validation.
[27] Karthikeyan Bhargavan, Antoine Delignat-Lavaud, Cédric Fournet, Anitha Gollamudi, Georges Gonthier, Nadim Kobeissi, Natalia Kulatova, Aseem Rastogi, Thomas Sibut-Pinote, Nikhil Swamy, et al. 2016. Formal verification of smart contracts: Short paper. In Proceedings of the 2016 ACM workshop on programming languages and analysis for security. 91–96.
[28] Marcel Böhme, Van-Thuan Pham, Manh-Dung Nguyen, and Abhik Roychoudhury. 2017. Directed greybox fuzzing. In Proceedings of the 2017 ACM SIGSAC conference on computer and communications security. 2329–2344.
[29] Marcel Böhme, Van-Thuan Pham, and Abhik Roychoudhury. 2017. Coverage-based greybox fuzzing as markov chain. TSE 45, 5 (2017), 489–506.
[30] Konstantin Böttinger, Patrice Godefroid, and Rishabh Singh. [n.d.]. Deep reinforcement fuzzing. In IEEE SPW.
[31] Lexi Brent, Neville Grech, Sifis Lagouvardos, Bernhard Scholz, and Yannis Smaragdakis. 2020. Ethainter: a smart contract security analyzer for composite vulnerabilities. In PLDI.
[32] Stefanos Chaliasos, Marcos Antonios Charalambous, Liyi Zhou, Rafaila Galanopoulou, Arthur Gervais, Dimitris Mitropoulos, and Ben Livshits. 2023. Smart contract and defi security: Insights from tool evaluations and practitioner surveys. arXiv preprint arXiv:2304.02981 (2023).
[33] Peng Chen and Hao Chen. 2018. Angora: Efficient fuzzing by principled search. In 2018 IEEE Symposium on Security and Privacy (SP). IEEE, 711–725.
[34] Ting Chen, Xiaoqi Li, Xiapu Luo, and Xiaosong Zhang. 2017. Under-optimized smart contracts devour your money. In 2017 IEEE 24th international conference on software analysis, evolution and reengineering (SANER). IEEE, 442–446.
[35] Ting Chen, Zihao Li, Xiapu Luo, Xiaofeng Wang, Ting Wang, Zheyuan He, Kezhao Fang, Yufei Zhang, Hang Zhu, Hongwei Li, et al. 2021. SigRec: Automatic recovery of function signatures in smart contracts. TSE (2021).
[36] Ting Chen, Zihao Li, Yufei Zhang, Xiapu Luo, Ang Chen, Kun Yang, Bin Hu, Tong Zhu, Shifang Deng, Teng Hu, et al. 2019. Dataether: Data exploration framework for ethereum. In 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). IEEE, 1369–1380.
[37] Ting Chen, Zihao Li, Yufei Zhang, Xiapu Luo, Ting Wang, Teng Hu, Xiuzhuo Xiao, Dong Wang, Jin Huang, and Xiaosong Zhang. 2019. A large-scale empirical study on control flow identification of smart contracts. In ESEM.
[47] Xiang Guo. 2022. Analysis between different types of smart contract fuzzing. In 2022 3rd International Conference on Computer Vision, Image and Deep Learning & International Conference on Computer Engineering and Applications (CVIDL & ICCEA). IEEE, 882–886.
[48] Jingxuan He, Mislav Balunović, Nodar Ambroladze, Petar Tsankov, and Martin Vechev. 2019. Learning to fuzz from symbolic execution with application to smart contracts. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 531–548.
[49] Ningyu He, Ruiyi Zhang, Haoyu Wang, Lei Wu, Xiapu Luo, Yao Guo, Ting Yu, and Xuxian Jiang. 2021. EOSAFE: security analysis of EOSIO smart contracts. In 30th USENIX Security Symposium (USENIX Security 21). 1271–1288.
[50] Adrian Herrera, Hendra Gunadi, Shane Magrath, Michael Norrish, Mathias Payer, and Antony L Hosking. 2021. Seed selection for successful fuzzing. In Proceedings of the 30th ACM SIGSOFT international symposium on software testing and analysis. 230–243.
[51] Tian-Yuan Hu, ZeCheng Li, BiXin Li, and QiHao Bao. [n.d.]. Contractual Security and Privacy Security of Smart Contract: A System Mapping Study. In Chinese Journal of Computers.
[52] Patrick Jauernig, Domagoj Jakobovic, Stjepan Picek, Emmanuel Stapf, and Ahmad-Reza Sadeghi. 2022. DARWIN: Survival of the Fittest Fuzzing Mutators. arXiv preprint arXiv:2210.11783 (2022).
[53] Songyan Ji, Jian Dong, Junfu Qiu, Bowen Gu, Ye Wang, and Tongqi Wang. 2021. Increasing fuzz testing coverage for smart contracts with dynamic taint analysis. In 2021 IEEE 21st International Conference on Software Quality, Reliability and Security (QRS). IEEE, 243–247.
[54] Songyan Ji, Jin Wu, Junfu Qiu, and Jian Dong. 2023. Effuzz: Efficient fuzzing by directed search for smart contracts. Information and Software Technology (2023), 107213.
[55] Bo Jiang, Ye Liu, and Wing Kwong Chan. 2018. Contractfuzzer: Fuzzing smart contracts for vulnerability detection. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. 259–269.
[56] George Klees, Andrew Ruef, Benji Cooper, Shiyi Wei, and Michael Hicks. 2018. Evaluating fuzz testing. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security. 2123–2138.
[57] Satpal Singh Kushwaha, Sandeep Joshi, Dilbag Singh, Manjit Kaur, and Heung-No Lee. 2022. Ethereum smart contract analysis tools: A systematic review. IEEE Access (2022).
[58] Chhagan Lal and Dusica Marijan. 2021. Blockchain testing: Challenges, techniques, and research directions. arXiv preprint arXiv:2103.10074 (2021).
[59] Bixin Li, Zhenyu Pan, and Tianyuan Hu. 2022. ReDefender: Detecting Reentrancy Vulnerabilities in Smart Contracts Automatically. IEEE Transactions on Reliability (2022).
[60] Jun Li, Bodong Zhao, and Chao Zhang. 2018. Fuzzing: a survey. Cybersecurity 1, 1 (2018), 1–13.
[61] Xiaoqi Li, Peng Jiang, Ting Chen, Xiapu Luo, and Qiaoyan Wen. 2020. A survey on the security of blockchain systems. Future generation computer systems 107 (2020), 841–853.
[38] TingChen,ZihaoLi,YuxiaoZhu,JiachiChen,XiapuLuo,JohnChi-ShingLui, [62] YuweiLi,ShoulingJi,YuanChen,SizhuangLiang,Wei-HanLee,YueyaoChen, XiaodongLin,andXiaosongZhang.2020.Understandingethereumviagraph ChenyangLyu,ChunmingWu,RaheemBeyah,PengCheng,etal.2021.UNI- analysis.ACMTransactionsonInternetTechnology(TOIT)20,2(2020),1–32. FUZZ:AHolisticandPragmaticMetrics-DrivenPlatformforEvaluatingFuzzers.. [39] TingChen,YufeiZhang,ZihaoLi,XiapuLuo,TingWang,RongCao,Xiuzhuo InUSENIXSecuritySymposium.2777–2794. Xiao,andXiaosongZhang.2019.Tokenscope:Automaticallydetectingincon- [63] ZihaoLi,JianfengLi,ZheyuanHe,XiapuLuo,TingWang,XiaozeNi,Wenwu sistentbehaviorsofcryptocurrencytokensinethereum.InACMCCS. Yang,ChenXi,andTingChen.2023.DemystifyingDeFiMEVActivitiesinFlash- [40] JaeseungChoi,DoyeonKim,SoominKim,GustavoGrieco,AlexGroce,and botsBundle.InProceedingsofthe2023ACMSIGSACConferenceonComputer SangKilCha.2021.Smartian:Enhancingsmartcontractfuzzingwithstaticand andCommunicationsSecurity(CCS). dynamicdata-flowanalyses.In202136thIEEE/ACMInternationalConferenceon [64] HongliangLiang,XiaoxiaoPei,XiaodongJia,WuweiShen,andJianZhang. AutomatedSoftwareEngineering(ASE).IEEE,227–239. 2018. Fuzzing:Stateoftheart. IEEETransactionsonReliability67,3(2018), [41] ThomasDurieux,JoãoFFerreira,RuiAbreu,andPedroCruz.2020.Empirical 1199–1218. reviewofautomatedanalysistoolson47,587Ethereumsmartcontracts.InPro- [65] Jian-WeiLiao,Tsung-TaTsai,Chia-KangHe,andChin-WeiTien.2019.Soliaudit: ceedingsoftheACM/IEEE42ndInternationalconferenceonsoftwareengineering. Smartcontractvulnerabilityassessmentbasedonmachinelearningandfuzz 530–541. testing.In2019SixthInternationalConferenceonInternetofThings:Systems, [42] JosselinFeist,GustavoGrieco,andAlexGroce.[n.d.].Slither:astaticanalysis ManagementandSecurity(IOTSMS).IEEE,458–465. frameworkforsmartcontracts.InProc.IEEE/ACMWETSEB. [66] ChaoLiu,HanLiu,ZhaoCao,ZhongChen,BangdaoChen,andBillRoscoe. 
[43] JoãoFFerreira,PedroCruz,ThomasDurieux,andRuiAbreu.2020.SmartBugs: 2018.Reguard:findingreentrancybugsinsmartcontracts.InProceedingsofthe Aframeworktoanalyzesoliditysmartcontracts.InProceedingsofthe35th 40thInternationalConferenceonSoftwareEngineering:CompanionProceeedings. IEEE/ACMInternationalConferenceonAutomatedSoftwareEngineering.1349– 65–68. 1352. [67] ZhenguangLiu,PengQian,JiaxuYang,LingfengLiu,XiaojunXu,Qinming [44] AsemGhalebandKarthikPattabiraman.2020.Howeffectivearesmartcontract He,andXiaosongZhang.2023.RethinkingSmartContractFuzzing:Fuzzing analysistools?evaluatingsmartcontractstaticanalysistoolsusingbuginjection. WithInvocationOrderingandImportantBranchRevisiting. arXivpreprint InProceedingsofthe29thACMSIGSOFTInternationalSymposiumonSoftware arXiv:2301.03943(2023). TestingandAnalysis.415–427. [68] Han-LinLu,GuanMingLin,andShih-KunHuang.2022.Guan-fuzz:Argument |
[45] NevilleGrech,MichaelKong,AntonJurisevic,LexiBrent,BernhardScholz, SelectionWithMeanShiftClusteringforMulti-argumentFuzzing.In20229th andYannisSmaragdakis.2018. Madmax:Survivingout-of-gasconditionsin InternationalConferenceonDependableSystemsandTheirApplications(DSA). ethereumsmartcontracts.OOPSLA(2018). IEEE,421–430. [46] GustavoGrieco,WillSong,ArturCygan,JosselinFeist,andAlexGroce.2020. [69] LoiLuu,Duc-HiepChu,HrishiOlickel,PrateekSaxena,andAquinasHobor. Echidna:effective,usable,andfastfuzzingforsmartcontracts.InProceedingsof [n.d.].Makingsmartcontractssmarter.InACMCCS. the29thACMSIGSOFTInternationalSymposiumonSoftwareTestingandAnalysis.AreWeThereYet?UnravelingtheState-of-the-ArtSmartContractFuzzers ICSE’24,April14–20,2024,Lisbon,Portugal [70] ChenyangLyu,ShoulingJi,ChaoZhang,YuweiLi,Wei-HanLee,YuSong,and [94] JunjieWang,BihuanChen,LeiWei,andYangLiu.2019.Superion:Grammar- RaheemBeyah.2019.{MOPT}:Optimizedmutationschedulingforfuzzers.In awaregreyboxfuzzing.In2019IEEE/ACM41stInternationalConferenceon 28thUSENIXSecuritySymposium(USENIXSecurity19).1949–1966. SoftwareEngineering(ICSE).IEEE,724–735. [71] FuchenMa,ZhenyangXu,MengRen,ZijingYin,YuanliangChen,LeiQiao,Bin [95] XiaoyinWang,JiazeSun,ChunyangHu,PanpanYu,BinZhang,andDonghai Gu,HuizhongLi,YuJiang,andJiaguangSun.2021.Pluto:Exposingvulnerabili- Hou.2022.EtherFuzz:MutationFuzzingSmartContractsforTODVulnerability tiesininter-contractscenarios.IEEETransactionsonSoftwareEngineering48, Detection.WirelessCommunicationsandMobileComputing2022(2022). 11(2021),4380–4396. [96] Scott Wesley, Maria Christakis, Jorge A Navas, Richard Trefler, Valentin [72] ValentinJMManès,HyungSeokHan,ChoongwooHan,SangKilCha,Manuel Wüstholz,andArieGurfinkel.2022. Verifyingsoliditysmartcontractsvia Egele,EdwardJSchwartz,andMaverickWoo.[n.d.]. Theart,science,and communicationabstractioninSmartACE.InVerification,ModelChecking,and engineeringoffuzzing:Asurvey.TSE([n.d.]). 
AbstractInterpretation:23rdInternationalConference,VMCAI2022,Philadelphia, [73] MarkMossberg,FelipeManzano,EricHennenfent,AlexGroce,GustavoGrieco, PA,USA,January16–18,2022,Proceedings23.Springer,425–449. JosselinFeist,TrentBrunson,andArtemDinaburg.2019.Manticore:Auser- [97] CanghaiWu,JieXiong,HuanliangXiong,YingdingZhao,andWenlongYi.2022. friendlysymbolicexecutionframeworkforbinariesandsmartcontracts.In AReviewonRecentProgressofSmartContractinBlockchain.IEEEAccess10 ASE. (2022),50839–50863. [74] TaiDNguyen,LongHPham,JunSun,YunLin,andQuangTranMinh.2020. [98] MingyuanWu,LingJiang,JiahongXiang,YanweiHuang,HemingCui,Lingming sfuzz:Anefficientadaptivefuzzerforsoliditysmartcontracts.InProceedingsof Zhang,andYuqunZhang.2022. Onefuzzingstrategytorulethemall.In theACM/IEEE42ndInternationalConferenceonSoftwareEngineering.778–788. Proceedingsofthe44thInternationalConferenceonSoftwareEngineering.1634– [75] MitchellOlsthoorn, DimitriStallenberg,ArieVanDeursen,andAnnibale 1645. Panichella.2022.Syntest-solidity:Automatedtestcasegenerationandfuzzing [99] ValentinWüstholzandMariaChristakis.2020.Harvey:Agreyboxfuzzerfor forsmartcontracts.InProceedingsoftheACM/IEEE44thInternationalConference smartcontracts.InProceedingsofthe28thACMJointMeetingonEuropean onSoftwareEngineering:CompanionProceedings.202–206. SoftwareEngineeringConferenceandSymposiumontheFoundationsofSoftware [76] SiddhasagarPani,HarshitaVaniNallagonda,SaumyaPrakash,RVigneswaran, Engineering.1398–1409. RaveendraKumarMedicherla,andMARajan.2022.Smartcontractfuzzingfor [100] ValentinWüstholzandMariaChristakis.2020.Targetedgreyboxfuzzingwith enterprises:thelanguageagnosticway.In202214thInternationalConferenceon staticlookaheadanalysis.In2020IEEE/ACM42ndInternationalConferenceon COMmunicationSystems&NETworkS(COMSNETS).IEEE,1–6. SoftwareEngineering(ICSE). [77] AnnibalePanichella,FitsumMesheshaKifetew,andPaoloTonella.[n.d.].Au- [101] WenXu,HyungonMoon,SanidhyaKashyap,Po-NingTseng,andTaesooKim. 
tomatedtestcasegenerationasamany-objectiveoptimisationproblemwith 2019.Fuzzingfilesystemsviatwo-dimensionalinputspaceexploration.In2019 dynamicselectionofthetargets.TSE([n.d.]). IEEESymposiumonSecurityandPrivacy(SP).IEEE,818–834. [78] PurathaniPraitheeshan,LeiPan,JiangshanYu,JosephLiu,andRobinDoss. [102] YinxingXue,JiamingYe,WeiZhang,JunSun,LeiMa,HaijunWang,andJianjun 2019.Securityanalysismethodsonethereumsmartcontractvulnerabilities:a Zhao.2022. xfuzz:Machinelearningguidedcross-contractfuzzing. IEEE survey.arXiv(2019). TransactionsonDependableandSecureComputing(2022). [79] IlhamQasse,MohammadHamdaqa,andBjörnÞórJónsson.2023.SmartContract [103] QingzhaoZhang,YizhuoWang,JuanruLi,andSiqiMa.2020.Ethploit:From |
UpgradeabilityontheEthereumBlockchainPlatform:AnExploratoryStudy. fuzzingtoefficientexploitgenerationagainstsmartcontracts.In2020IEEE arXivpreprintarXiv:2304.06568(2023). 27thInternationalConferenceonSoftwareAnalysis,EvolutionandReengineering [80] MohitRajpal,WilliamBlum,andRishabhSingh.2017.Notallbytesareequal: (SANER).IEEE,116–126. Neuralbytesieveforfuzzing.arXivpreprintarXiv:1711.04596(2017). [104] WeiZhang.2022.Beak:ADirectedHybridFuzzerforSmartContracts.Interna- [81] SanjayRawat,VivekJain,AshishKumar,LucianCojocar,CristianoGiuffrida, tionalCoreJournalofEngineering8,4(2022),480–498. andHerbertBos.2017.VUzzer:Application-awareEvolutionaryFuzzing..In [105] ZhuoZhang,BrianZhang,WenXu,andZhiqiangLin.2023. Demystifying NDSS,Vol.17.1–14. ExploitableBugsinSmartContracts.ICSE. [82] MengRen,ZijingYin,FuchenMa,ZhenyangXu,YuJiang,ChengnianSun, [106] KunsongZhao,ZihaoLi,JianfengLi,HeYe,XiapuLuo,andTingChen.2023. HuizhongLi,andYanCai.2021.Empiricalevaluationofsmartcontracttesting: DeepInfer:DeepTypeInferencefromSmartContractBytecode.InESEC/FSE. Whatisthebestchoice?.InProceedingsofthe30thACMSIGSOFTInternational [107] ZibinZheng,NengZhang,JianzhongSu,ZhijieZhong,MingxiYe,andJiachi SymposiumonSoftwareTestingandAnalysis.566–579. Chen.2023. TurntheRudder:ABeaconofReentrancyDetectionforSmart [83] MichaelRodler,DavidPaaßen,WentingLi,LukasBernhard,ThorstenHolz, ContractsonEthereum.arXivpreprintarXiv:2303.13770(2023). GhassanKarame,andLucasDavi.2023.EF/CF:HighPerformanceSmartCon- [108] JianfeiZhou,TianxingJiang,ShuweiSong,andTingChen.2022.AntFuzzer: tractFuzzingforExploitGeneration.arXivpreprintarXiv:2304.06341(2023). AGrey-BoxFuzzingFrameworkforEOSIOSmartContracts. arXivpreprint [84] RobARutenbar.[n.d.].Simulatedannealingalgorithms:Anoverview.Circuits arXiv:2211.02652(2022). andDevicesmagazine([n.d.]). [109] TengZhou,KuiLiu,LiLi,ZheLiu,JacquesKlein,andTegawendéFBissyandé. [85] NoamaFatimaSamreenandManarHAlalfi.2020. 
Reentrancyvulnerability [n.d.].SmartGift:Learningtogeneratepracticalinputsfortestingsmartcon- identificationinethereumsmartcontracts.In2020IEEEInternationalWorkshop tracts.InProc.IEEEICSME. onBlockchainOrientedSoftwareEngineering(IWBOSE).IEEE,22–29. [110] XiaogangZhu,ShengWen,SeyitCamtepe,andYangXiang.2022.Fuzzing:a [86] SarwarSayeed,HectorMarco-Gisbert,andTomCaira.2020.Smartcontract: surveyforroadmap.CSUR(2022). Attacksandprotections.IEEEAccess8(2020),24416–24427. [111] YuanZhuang,ZhenguangLiu,PengQian,QiLiu,XiangWang,andQinming [87] ChaofanShou,ShangyinTan,andKoushikSen.2023.ItyFuzz:Snapshot-Based He.2020.SmartContractVulnerabilityDetectionusingGraphNeuralNetwork.. FuzzerforSmartContract.InProceedingsofthe32ndACMSIGSOFTInternational InIJCAI.3283–3290. SymposiumonSoftwareTestingandAnalysis.322–333. [88] JianzhongSu,Hong-NingDai,LingjunZhao,ZibinZheng,andXiapuLuo.2022. EffectivelyGeneratingVulnerableTransactionSequencesinSmartContracts withReinforcementLearning-guidedFuzzing.In37thIEEE/ACMInternational ConferenceonAutomatedSoftwareEngineering.1–12. [89] JianzhongSu,Hong-NingDai,LingjunZhao,ZibinZheng,andXiapuLuo.2022. Effectivelygeneratingvulnerabletransactionsequencesinsmartcontractswith reinforcementlearning-guidedfuzzing.InProceedingsofthe37thIEEE/ACM InternationalConferenceonAutomatedSoftwareEngineering.1–12. [90] SergeiTikhomirov,EkaterinaVoskresenskaya,IvanIvanitskiy,RamilTakhaviev, EvgenyMarchenko,andYaroslavAlexandrov.2018.Smartcheck:Staticanalysis ofethereumsmartcontracts.InProceedingsofthe1stinternationalworkshopon emergingtrendsinsoftwareengineeringforblockchain.9–16. [91] ChristofFerreiraTorres,AntonioKenIannillo,ArthurGervais,etal.2021.CON- FUZZIUS:ADataDependency-AwareHybridFuzzerforSmartContracts.(2021). (2021). [92] HaijunWang,YiLi,Shang-WeiLin,LeiMa,andYangLiu.2019. Vultron: catchingvulnerablesmartcontractsonceandforall.In2019IEEE/ACM41st InternationalConferenceonSoftwareEngineering:NewIdeasandEmerging Results(ICSE-NIER).IEEE,1ś4(2019). 
[93] HaijunWang,YeLiu,YiLi,Shang-WeiLin,CyrilleArtho,LeiMa,andYangLiu. 2020.Oracle-supporteddynamicexploitgenerationforsmartcontracts.IEEE TransactionsonDependableandSecureComputing19,3(2020),1795–1809. |
arXiv:2402.09094v2 [cs.CR] 15 Feb 2024

Unity is Strength: Enhancing Precision in Reentrancy Vulnerability Detection of Smart Contract Analysis Tools

Zexu Wang, Jiachi Chen, Zibin Zheng, Fellow, IEEE, Peilin Zheng, Yu Zhang, Weizhe Zhang

Abstract—Reentrancy is one of the most notorious vulnerabilities in smart contracts, resulting in significant digital asset losses. However, many previous works indicate that current Reentrancy detection tools suffer from high false positive rates. Even worse, recent years have witnessed the emergence of new Reentrancy attack patterns fueled by intricate and diverse vulnerability exploit mechanisms. Unfortunately, current tools face a significant limitation in their capacity to adapt to and detect these evolving Reentrancy patterns. Consequently, ensuring precise and highly extensible Reentrancy vulnerability detection remains a critical challenge for existing tools. To address this issue, we propose a tool named ReEP, designed to reduce the false positives of Reentrancy vulnerability detection. Additionally, ReEP can integrate multiple tools, expanding its capacity for vulnerability detection. It evaluates results from existing tools to verify vulnerability likelihood and reduce false positives. ReEP also offers excellent extensibility, enabling the integration of different detection tools to enhance precision and cover different vulnerability attack patterns. We applied ReEP to eight existing state-of-the-art Reentrancy detection tools. The average precision of these eight tools increased from the original 0.5% to 73% without sacrificing recall. Furthermore, ReEP exhibits robust extensibility. By integrating multiple tools, the precision further improved to a maximum of 83.6%. These results demonstrate that ReEP effectively unites the strengths of existing works and enhances the precision of Reentrancy vulnerability detection tools.

Index Terms—Reentrancy Detection, Symbolic Execution, Path Pruning, Smart Contracts

1 INTRODUCTION

Smart contracts offer several unique features that distinguish them from traditional software programs, especially in finance and permission management scenarios [1], [2]. Numerous smart contract vulnerabilities have been discovered through real-world attacks or theoretical analysis, including the Reentrancy vulnerability [3], [4]. Since the DAO attack in 2016, which resulted in the theft of approximately 150 million dollars in digital assets, the Reentrancy vulnerability has caused significant asset losses. Reentrancy vulnerabilities arise when external (malicious) contracts exploit reentrant function characteristics to bypass permission control checks [5]. This allows external contracts to enter the same function multiple times, manipulate contract logic, and steal assets. A variety of technologies have been developed to detect Reentrancy vulnerabilities in smart contracts, which can be broadly divided into two categories, i.e., static analysis [6]–[8] and dynamic analysis [9]–[12].

Static analysis techniques often collect incomplete program state information, which can lead to false positives due to the loss of the state in contract interactions. Conversely, dynamic vulnerability detection models frequently struggle with deep-state search and comprehensive state analysis in cross-contract vulnerability scenarios. Zheng et al. [5] conducted a large-scale empirical study on existing popular Reentrancy vulnerability detection tools and found that these tools produced false positives as high as 99.8%, with 55% caused by incorrect permission control verification and 41% due to the lack of external contract function analysis. Accurate permission control checks and cross-contract state analysis continue to challenge existing detection tools. Consequently, reducing the false positives of Reentrancy vulnerability detection remains a major topic in smart contract security research.

Despite significant research efforts directed toward detecting Reentrancy vulnerabilities, the constant evolution of exploitation mechanisms has led to the emergence of new Reentrancy attack patterns in recent years. This complexity and variability necessitate a continuous expansion of vulnerability detection patterns within existing tools to ensure reliable detection of Reentrancy vulnerabilities. Unfortunately, many existing tools (such as Oyente [9], Osiris [13], Manticore [10], etc.) have long struggled to effectively detect Reentrancy vulnerabilities in real-world contracts due to their outdated detection patterns.

• Zexu Wang, Jiachi Chen, Zibin Zheng, and Peilin Zheng are with the School of Software Engineering, Sun Yat-sen University, China. E-mail: {wangzx97,zhengpl3}@mail2.sysu.edu.cn, {chenjch86,zhzibin}@mail.sysu.edu.cn
• Weizhe Zhang and Yu Zhang are with the School of Computer Science and Technology, Harbin Institute of Technology, China. E-mail: {yuzhang,wzzhang}@hit.edu.cn
• Zexu Wang, Yu Zhang, and Weizhe Zhang are also affiliated with Peng Cheng Laboratory, China.
• Zibin Zheng is the corresponding author.

To address the above challenges, we introduce ReEP, a tool designed to reduce false positives for Reentrancy vulnerability detection. ReEP evaluates results from existing tools and validates vulnerability likelihood to reduce false positives. When new vulnerability patterns emerge, ReEP integrates the corresponding detection tools to cover different vulnerability patterns. ReEP consists of two phases: target state search and symbolic execution verification. In the target state search phase, ReEP uses program analysis to assist in pruning paths. ReEP performs program dependency analysis on the vulnerable functions provided by Origin Tools, generating function sequences related to vulnerability triggering to guide the symbolic execution. Origin Tools refer to existing tools, such as Mythril [14] and Slither [15]. Meanwhile, ReEP utilizes the CFG Pruner to construct the SMC-CFG (State Maximal Correlation CFG), which can optimize path traversal. In the symbolic execution verification phase, ReEP implements program instrumentation to collect and analyze path constraints, enabling cross-contract symbolic execution analysis. By utilizing symbolic execution to verify the reachability of these paths, we can reduce the false positives and enhance the precision of vulnerability detection for the Origin Tools. Moreover, ReEP offers strong extensibility: by integrating new detection tools, it expands the coverage of vulnerability detection patterns, thereby enhancing its capability to detect Reentrancy vulnerabilities.

We evaluated the effectiveness of ReEP by examining its ability to improve the detection precision of Origin Tools, its capability to integrate multiple tools, and the impact of each stage within the ReEP framework. The experimental results showed that when integrated with ReEP, the average precision of Origin Tools increased from 0.5% to 73%, significantly improving precision without sacrificing recall. Furthermore, ReEP is able to merge multiple Reentrancy detection tools to enhance its capabilities. By integrating six tools, ReEP achieves a peak precision of 83.6%, while the best performance of the current state-of-the-art tools is only 31.8%, demonstrating its effectiveness and extensibility in improving detection precision. In addition, we conducted ablation experiments to understand how each stage within ReEP affects overall effectiveness. In general, ReEP provides a robust solution for improving the ability to detect Reentrancy vulnerabilities in smart contracts.

The main contributions of our work are as follows:
• We designed a tool called ReEP to reduce the false positives for Reentrancy vulnerability detection. At the same time, it has strong extensibility in merging multiple tools to expand its capacity for vulnerability detection.
• We propose an approach that uses symbolic execution for verifying vulnerability path reachability. It combines program dependency analysis to guide path pruning, achieving efficient Reentrancy vulnerability verification.
• We applied ReEP to eight state-of-the-art Reentrancy vulnerability detection tools; experimental results show that it can significantly reduce the false positive rates and improve the precision of existing tools.
• We publicize ReEP's source code and the experimental dataset at https://github.com/ReEP-SC/ReEP.

This paper is organized as follows. In Section 2, we describe some necessary background and explain the challenges faced in Reentrancy vulnerabilities through motivation examples. In Section 3, we introduce the workflow of our proposed method and delve into the technical details of ReEP. In Section 4, we evaluate the performance of ReEP. We discuss threats to validity in Section 5 and summarize the related work in Section 6. In Section 7, we conclude the paper and outline future work.

2 BACKGROUND AND MOTIVATION

In this section, we provide background knowledge and the motivation behind the design of the ReEP tool. Additionally, we summarize the challenges faced in Reentrancy vulnerability detection.

2.1 Reentrancy

Since the DAO attack in 2016, detecting Reentrancy vulnerabilities has been a critical research topic in smart contract security. Attackers often exploit Reentrancy attacks to illegally acquire substantial amounts of digital assets, especially in DeFi applications where smart contracts manage significant volumes of digital assets. A Reentrancy attack is a type of malicious behavior that exploits a vulnerability in smart contracts in which permission controls are inadequately checked when called by an external (malicious) contract. In such attacks, the attacker repeatedly enters the function through one function call to obtain considerable profit.

1 function withdraw(uint _amount) public {
2     require(balance[msg.sender] >= _amount);
3     (bool success, ) = msg.sender.call.value(_amount)("");
4     require(success);
5     balance[msg.sender] -= _amount;
6 }

Fig. 1. Simple example of Reentrancy

Figure 1 shows a function named withdraw that contains a Reentrancy vulnerability. In the withdraw function, the contract first checks whether the caller (represented by msg.sender) has a sufficient balance (in L2). It then transfers the requested ether to the caller (in L3) and deducts the transferred amount from the caller's balance recorded in the user balance variable (in L5). However, Solidity introduces a special mechanism called the fallback function, which can be used to execute code when the contract receives ether from other addresses. The fallback function provides an opportunity for exploiting the Reentrancy vulnerability. In Figure 1, the call.value() (in L3) automatically invokes the fallback function of the caller contract, allowing the caller to take control of the control flow. Attackers can deploy malicious code in the fallback function to repeatedly call the withdraw() function. Note that in the second invocation of withdraw(), the code in L5 has not been executed, since the invocation begins at the call.value() in L3, and thus the user balance has not been changed at this time. As a result, the condition check (in L2) of the second invocation passes, and the victim contract will repeatedly transfer ether to the caller until the contract's balance is drained.
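To make the re-entrant control flow of Figure 1 concrete, the following Python model is an illustrative sketch only: the `Victim`/`Attacker` classes and the 10/90 deposit amounts are hypothetical and not part of the paper's tooling. It mimics how the external call in L3 hands control back to the caller's fallback before the balance update in L5 runs.

```python
# Illustrative model of the Figure 1 bug: the victim pays out (here, a
# callback) BEFORE updating the caller's balance, so a malicious
# fallback can re-enter withdraw() while the stale check in L2 passes.
class Victim:
    def __init__(self, pot):
        self.pot = pot            # total ether held by the contract
        self.balance = {}         # per-caller accounting (Fig. 1, L2/L5)

    def deposit(self, who, amount):
        self.balance[who] = self.balance.get(who, 0) + amount
        self.pot += amount

    def withdraw(self, caller, amount):
        if self.balance.get(caller.addr, 0) < amount:  # check (L2)
            return
        self.pot -= amount
        caller.fallback(self, amount)                  # external call (L3)
        self.balance[caller.addr] -= amount            # update AFTER call (L5)

class Attacker:
    addr = "attacker"
    def __init__(self):
        self.stolen = 0
    def fallback(self, victim, amount):
        self.stolen += amount
        # Re-enter while the stale balance still passes the check in L2.
        if victim.pot >= amount:
            victim.withdraw(self, amount)

victim = Victim(pot=0)
victim.deposit("honest", 90)      # an honest user's funds
attacker = Attacker()
victim.deposit(attacker.addr, 10)
victim.withdraw(attacker, 10)
print(attacker.stolen)  # → 100 (the attacker deposited only 10)
```

Each re-entrant call passes the stale balance check, so the attacker drains the whole 100-unit pot against a 10-unit deposit, and the victim's bookkeeping goes negative, mirroring the drained-contract outcome described above.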
2.2 Motivation

We use the following example to illustrate practical applications and key contributions of ReEP to smart contract development.

Alice, a smart contract developer, rigorously assesses her contracts for potential Reentrancy vulnerabilities before deploying them on Ethereum. To ensure a comprehensive analysis, she employs a variety of detection tools. However, different tools often report different vulnerability locations, compelling Alice to engage in thorough manual verification (as studies indicate that existing tools produce false positives as high as 99.8% [5]). Furthermore, with the emergence of new Reentrancy attack patterns and corresponding tools, Alice needs to add them to her detection toolkit. However, the incorporation of these new tools may also generate new false positives, further increasing her workload.

Fig. 2. The use case of ReEP

At this stage, Alice can employ ReEP to efficiently reduce the false positives while consistently addressing the detection of new Reentrancy attack patterns, ultimately lightening her workload. ReEP is an automated verification tool designed to validate the results of Reentrancy vulnerability detections from existing tools. It evaluates findings from multiple tools (Origin Tools), verifying vulnerability likelihood to reduce false positives, thus enhancing precision. Moreover, ReEP has excellent extensibility. When new vulnerability patterns emerge, by incorporating the corresponding detection tools (New Tools), Alice can further enhance Reentrancy vulnerability detection capabilities, ensuring broader coverage.

ReEP alleviates the manual verification of false positives, significantly cutting down Alice's workload. This underscores ReEP's practicality in streamlining audits and enhancing precision, suitable for large-scale smart contract detection tasks.

2.3 Challenges

In this section, we investigate cases to uncover the main causes of false positives in current Reentrancy detection tools and outline the related challenges.
Fig.4.Unabletoanalyzeexecutionlogicofexternalcontractfunctions 2.3.1 Lackofpermissioncontrolcheck Motivation: Many detection tools struggle to understand modifier onlyOwner{ how external contract functions work, especially in cross- 1 2 require(msg.sender == owner); contractinteractions.Forinstance,inFigure4,thebalanceOf _; 3 function of the tokenAddr address is called to check the } 4 balanceofthewhoaddressandupdatethebalvariable(inL9). ... 5 function execute( address _to, uint _value, bytes However,thesetoolsoftencannotdetermineiftheexternal 6 _data) external onlyOwner { function involves transfers, so they rely on ”state changes ... 7 after external calls” to identify Reentrancy vulnerabilities, _to.call.value(_value)(data); 8 } leadingtofalsepositives. 9 Challenge: Cross-contract calls are challenging due to the Fig.3.Lackofpermissioncontrolcheck difficulties in analyzing external contract functions. This Figure3showsacodesnippetthatleadstofalsepositives oftenleadstoincompleteprogramstateanalysisandfalse inmanydetectiontools.Themainreasonisthatlackingof positives.4 3 METHODOLOGY 3.2.2 Step⃝2 FunctionSequenceGeneration In this section, we will introduce the workflow and delve Toguidesymbolicexecution,ReEPanalyzesfunctionsand intothetechnicaldetailsofReEP. variablesrelatedtovulnerablefunctionsthroughprogram dependencyanalysistogeneratethesequenceoffunctions. 3.1 Overview Therefore, it is necessary to collect information on the We propose ReEP to verify vulnerability information in functionsandvariablesrelatedtothevulnerablefunctions existing tools’ detection reports, enhancing precision in inthecontract.Thisinformationincludes: |
Reentrancy vulnerability detection. As shown in Figure 5, • Targetvariables:Variablesofthefunctionsfromvulner- the ReEP approach comprises two phases and four steps. abilitydetectionreports,denotedasV∗ . Target It takes smart contract source code as input and produces • Target-relatedvariables:Variablesthathaveaprogram detectionresultsasoutput.Inthefirstphase,OriginToolslike dependency relationship (including control depen- Mythril[14]areutilizedtoreportvulnerabilityinformation, denciesordatadependencies)withV∗ ,denoted Target which includes the vulnerability’s location and associated asV∗ . Target Related function. Program dependency analysis is then applied to • Target functions: Functions that read or assign vari- generatethesequenceoffunctionsrelatedtothevulnerability ablesthatcomefromTargetvariablesorTarget-related trigger,guidingtheprocessesofsymbolicexecution.Follow- variables,denotedasF∗ . Target ing this, the CFG Pruner constructs the SMC-CFG (State Maximum Correlation Control Flow Graph) to optimize Tocollectthefunctionsandvariablesrelatedtothevulner- pathtraversalforsymbolicexecution.Inthesecondphase, ablefunctioninthecontract,ReEPperformsasearchbased cross-contract interactions are monitored and analyzed to onwhethertheyhaveprogramdependencies,includingdata collectglobalpathconstraintsandverifythereachabilityof andcontroldependencies,withthevulnerablefunctions.The vulnerabilitypathsbyaccessingtheconstraintsolver(SMT). followingprocessisused: Thisenablesthedeterminationofwhetherthevulnerability • When V T∗ arget Related = ∅, search for the function exists. that operates on the variables in V∗ , write the Ingeneral,Step⃝1,Step⃝2,Step⃝3 contributetoimprove function into the set F∗ , andT war rg ie tt e the state the efficiency of search, and Step⃝4 aims to improve the variablesofthefunctionT toar tg he et setV∗ . Target Related accuracyofdetection. 
• When V*_Target_Related ≠ ∅, search for the functions that operate on the variables in V*_Target_Related, write each such function to the set F*_Target, write the state variables of that function to the set V*_Target_Related, and continue repeating until F*_Target or V*_Target_Related has no more new elements written to it, as V*_Target ∩ V*_Target_Related = V*_Target.

Algorithm 1: Generating Function Dependency Graph
Input: V*_Target_Related as V, F*_Target as F
Output: FDG
 1 Function Main(V, F):
 2     FDG ← empty graph;
 3     foreach f_i in F do
 4         foreach f_j in F do
 5             if modify(f_i, V) ∩ modify(f_j, V) ≠ ∅ then
 6                 FDG.add_edge(f_i, f_j, weight)
 7             end
 8         end
 9     end
10     return FDG;

Fig. 5. The workflow of ReEP. (Stage I: Target State Search — Step 1: Origin Tools produce vulnerability detection reports from the smart contracts; Step 2: function sequences are generated; Step 3: the CFG Pruner builds the SMC-CFG. Stage II: Symbolic Execution Verification — Step 4: cross-contract symbolic execution with the SMT solver produces the detection results.)

3.2 Stage I: Targeted State Search

In order to improve the efficiency of state search, path pruning is essential for symbolic execution. ReEP achieves this by conducting program dependency analysis on vulnerable functions and generating function sequences to guide symbolic execution. Additionally, ReEP constructs the SMC-CFG to expedite path traversal during symbolic execution.

3.2.1 Step 1: Reentrancy Report Collection

Initially, ReEP utilizes Origin Tools to detect smart contracts and generate vulnerability detection reports, providing coarse-grained information on Reentrancy vulnerabilities. The information includes the location and function of the vulnerability, which ReEP further analyzes to generate the function sequence. The selection of Origin Tools is based on the tool selection criteria outlined by Zheng et al.
[5], as these tools generate vulnerability detection reports containing the location and function information for Reentrancy vulnerabilities.

To facilitate symbolic execution guidance, ReEP utilizes the FDG (Function Dependency Graph) to analyze the dependencies between vulnerable functions and generate the sequence of functions. The algorithm for generating the FDG is presented in Algorithm 1, which takes V*_Target_Related and F*_Target as inputs and produces the FDG as output. The algorithm examines the dependency relationships between the contract's functions. Specifically, it determines whether two functions share the same variables to generate the contract's FDG, as depicted in L3–L9 of Algorithm 1. The edge weight is determined by the number of common variables shared by the two functions. By generating the FDG, ReEP can identify functions that are directly or indirectly related to the vulnerable functions, which enables the construction of a sequence of functions related to the execution of the vulnerable functions.

The sequence of functions is generated by sorting the functions in the FDG according to their relevance. In the FDG, nodes represent functions, edges represent the existence of shared state variables, and the weight of an edge represents the number of common state variables. The weight of a node (function) is the sum of the weights of all its connecting edges, which indicates the frequency of operations on the state variables. To avoid uninitialized states, the nodes are sorted in ascending order of weight, creating the corresponding function sequence. Combining symbolic execution with function-sequence guiding, ReEP achieves efficient access to functions related to the execution of the vulnerable functions, facilitating effective analysis of the critical path.

3.2.3 Step 3: Path Pruning

To alleviate the problem of path explosion in symbolic execution, ReEP employs function sequences and the CFG Pruner to prune paths. Function sequences assist in eliminating irrelevant function access, while the CFG Pruner generates the SMC-CFG (State Maximum Correlation CFG) to prune the CFG of the function, thus enhancing the efficiency of symbolic execution. The SMC-CFG retains the CFG branches pointing to the key blocks where state updates occur and prunes irrelevant paths to minimize the cost of unnecessary path forking. At each CFG branch, it is checked whether the succeeding block is a key block; if all the succeeding blocks are key blocks, then all branches are preserved. Specifically, the CFG Pruner assigns weights to the jump edges of each basic block in the contract, reflecting the block's relevance to state updates. Blocks containing instructions that write or read state variables (SSTORE, SLOAD, and CALL) are considered key blocks, while blocks without state operations or containing REVERT/INVALID instructions are deemed irrelevant. If the condition for the vulnerability's existence is met, the symbolic execution path traversal is halted. By using the SMC-CFG, the path searching is accelerated by prioritizing the exploration of key blocks.

The algorithm for generating the SMC-CFG is presented in Algorithm 2. Initially, in L3–L9, all basic blocks in the CFG are traversed to calculate the weight of each edge, which is then recorded in the two-dimensional array SMC-CFG. In the Count_weight function in L12–L20, the opcodes in each basic block are traversed. If an opcode is an instruction in Key_instrs, the weight of the corresponding connecting edge (JUMP_W) for that basic block is incremented by 1. Simultaneously, the PC value is updated to the position of the first opcode in that block. By utilizing the Count_weight function, each basic block establishes weighted connections to its successor blocks, ultimately generating the SMC-CFG.

Algorithm 2: Generating the SMC-CFG
Input: the CFG of the contract
Output: the SMC-CFG of the contract
 1 Function Main(CFG):
 2     SMC-CFG ← [][];
 3     Blocks = getAllBlocks(CFG);
 4     foreach block in Blocks do
 5         foreach blk in block.successors do
 6             JUMP_W, Fir_pc = Count_weight(blk);
 7             SMC-CFG[blk][Fir_pc] = JUMP_W;
 8         end
 9     end
10     return SMC-CFG
11 Function Count_weight(blk):
12     JUMP_W ← 0, PC ← 0;
13     Key_instrs = [SSTORE, SLOAD, CALL];
14     foreach opcode in blk.opcodes do
15         if opcode in Key_instrs then
16             JUMP_W++;
17             PC = blk.first_opcode.pc;
18         end
19     end
20     return JUMP_W, PC

Figure 6 illustrates a part of the SMC-CFG for the withdraw function. The basic blocks are represented by white rounded rectangles, while pruned blocks are shown as gray rectangles. The red solid lines indicate jump relationships between basic blocks, and the blue solid lines represent pruned jump relationships. The number on each edge denotes the weight value of the jump edge (JUMP_W). In Figure 6, for Block 4, the JUMPI opcode branch has jump edges with weight values of 0 and 2. During symbolic execution using the SMC-CFG, the jump edge with a weight value of 2 is explored, and the block with a weight value of 0 is omitted. If both succeeding blocks possess non-zero weight values, they are explored in the order of their weight values. By utilizing the jump edge weight values assigned to each block by the SMC-CFG, path branching is prioritized by selecting the jump edge with a greater weight value, enabling symbolic execution to efficiently access paths with more state operations.

Fig. 6. The SMC-CFG of the withdraw function. (Blocks 1–7 with SLOAD/CALL/SSTORE opcodes and JUMP_W-weighted jump edges; blocks ending in REVERT or INVALID are pruned.)
3.3 Stage II: Symbolic Execution Verification

To verify the existence of vulnerabilities, ReEP employs cross-contract symbolic execution to verify the reachability of the vulnerability path. It combines the SMC-CFG to accelerate path traversal. Additionally, it uses program instrumentation with function sequences to analyze the logic of functions and collect global path constraints. Moreover, the SMT solver is accessed to validate the reachability of the path and determine the presence of any vulnerabilities.

3.3.1 Step 4: Cross-contract Symbolic Execution

Analyzing the execution logic of functions and collecting global path constraints is of vital importance for cross-contract symbolic execution. Figure 7 illustrates the overall process of cross-contract symbolic execution in ReEP. To identify and ensure the correct switching of different contract contexts (including msg and storage), ReEP employs the Call-Return Monitor, a program instrumentation designed for cross-contract bytecode analysis, with function sequences to guide the switching of different contexts. The Global Storage ensures the accurate writing and reading of distinct contract data, preventing data confusion arising from different contracts sharing a single storage. The Symbolic State Propagation addresses the issue of symbolic parameters when calling across contracts. These modules work collectively to ensure the analysis of external functions and accurate collection of global path constraints.

Call has no return value or does not use the return value: the function called in the external contract (callee) does not return a value, or the return value is not used. This pattern is shown in L5 of the function Collect1 in Figure 8. The AddMessage function in the Log1 contract does not return a value, and there is no need to pass the value across contracts.

1 function Collect2(uint _am) public payable {
2     if (balances[msg.sender] >= _am) {
3         if (Log2.AddMessage(msg.sender, "Collect2")) {
4             if (msg.sender.call.value(_am)()) {
5                 balances[msg.sender] -= _am;
6 } } } }

Fig. 9. Call uses the return value but does not assign it

Call uses the return value but does not assign it: the function called in the external contract (callee) has a return value, which is used in the caller but not assigned. This pattern is shown in L3 of the function Collect2 in Figure 9. The AddMessage function in the Log2 contract is checked to see if it returns a value of True, but its return value is not assigned.

1 function Collect3(uint _am) public payable {
2     if (balances[msg.sender] >= _am) {
3         success = Log3.AddMessage(msg.sender, "Collect3");
4         if (success) {
5             if (msg.sender.call.value(_am)()) {
6                 balances[msg.sender] -= _am;
7 } } } }

Fig. 10. Call uses a return value and assigns it

Call uses a return value and assigns it: the function called in the external contract (callee) has a return value, which is used in the caller and assigned. This pattern is shown in L3–L4 of the function Collect3 in Figure 10. The return value of the AddMessage function in the Log3 contract is assigned to the success variable.
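The three Call-Return patterns above can be summarized in a small decision sketch. This is purely illustrative; the boolean inputs are assumed abstractions of what the Call-Return Monitor extracts from the bytecode, not ReEP's real interface.

```python
# Classify a cross-contract call by how the caller treats the callee's
# return value, mirroring the three patterns of Figures 8-10.
def classify_call_return(has_return, used_in_caller, assigned):
    if not has_return or not used_in_caller:
        # e.g. Collect1: Log1.AddMessage(...) returns nothing / unused
        return "no-return-or-unused"
    if not assigned:
        # e.g. Collect2: if (Log2.AddMessage(...)) { ... }
        return "used-not-assigned"
    # e.g. Collect3: success = Log3.AddMessage(...); if (success) ...
    return "used-and-assigned"
```

Only the latter two patterns require the monitor to propagate a value back across the contract boundary; the third additionally updates the caller's state.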
The Call-Return Monitor employs static analysis of the bytecode stream to extract essential information about cross-contract function calls, including start and end positions, as well as the return values of external function calls. The Call-Return Monitor switches different contracts' storage and msg information based on the start and end positions and the types of external calls. By analyzing the return value information of external function calls, it determines whether there is cross-contract parameter passing, ensuring correct cross-contract interaction and avoiding state loss caused by cross-contract interactions.

Fig. 7. The overall process of cross-contract symbolic execution. (The caller's and callee's code run on the EVM under the Call-Return Monitor instrumentation; the Global Storage with its GOT and the Symbolic State Propagation module maintain the symbolic runtime state across SLOAD/SSTORE operations.)

Call-Return Monitor. In order to ensure that the context of different contracts is switched correctly in cross-contract interaction, the instrumentation code needs to receive feedback on the start and end positions and the return value information of the cross-contract call. Smart contracts use the Call-Return paradigm, which can be summarized into three patterns based on return values (illustrated in Figures 8–10).

1 function Collect1(uint _am) public payable {
2     if (balances[msg.sender] >= _am) {
3         if (msg.sender.call.value(_am)()) {
4             balances[msg.sender] -= _am;
5             Log1.AddMessage(msg.sender, "Collect1");
6 } } } }

Fig. 8. Call has no return value or does not use a return value

Global Storage. To ensure the accurate writing and reading of different contract data and to prevent data confusion, ReEP implements a global storage mechanism. Many dynamic detection tools fail to save data after the function call, leading to data loss issues in subsequent transaction execution and global state analysis, as the triggering of vulnerabilities frequently stems from multiple transactions. Moreover, different types of cross-contract calls, such as CALL, DELEGATECALL, CALLCODE, and STATICCALL, necessitate different contexts for the msg and the storage [16]. Insufficient data or incorrect information can result in missed critical paths and inaccurate detection outcomes. To address this issue, ReEP combines the Global Storage with the GOT (Global Offset Table), which assists in locating and retrieving global data, thereby ensuring the correct storage and reading of various contract data.

The constraints of the input parameters (symbolic values) of a function are stored as symbolic constraint expressions and are saved and propagated, resolving dependency issues of the program on symbolic parameters during symbolic execution. Figure 13 illustrates a simple example of a caller invoking a callee function with symbolic parameters, where the primary logic involves storing the sum of the transferred ether amount (CALLVALUE) and 6. The bytecode block displayed in Figure 13 stores the caller_addr in slot_0 and the callee_addr in slot_1, and adds the transferred ether CALLVALUE (a symbolic value) to 6, which is then stored in slot_2. ReEP converts the calculation operations on CALLVALUE into constraint expressions for subsequent computation and storage. Since CALLVALUE is a symbolic input parameter, the storage in slot_2 contains a symbolic constraint expression:

(store (store (store (store STORAGE_callee_addr #x0 #caller_addr) #x1 #callee_addr) #x2 CALLVALUE) #x2 (bvadd #x6 CALLVALUE))

Fig. 11. Global storage for call context switch. (Each contract occupies its own partition of slots in the global storage; the GOT performs the context switch for CALL, DELEGATECALL, CALLCODE, and STATICCALL between caller and callee.)

Figure 11 illustrates the functioning of global storage in the call context switch. The instrumentation code dynamically switches the contract context according to the various types of function calls between the caller and callee (including CALL, DELEGATECALL, CALLCODE, and STATICCALL) to ensure the accuracy of the cross-contract analysis.
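The global-storage idea can be sketched as a single table partitioned per contract and addressed by (contract, slot), with the active context switched per call type. This is a hedged sketch under assumptions: the class and method names are illustrative, and the call-type semantics follow standard EVM behavior (DELEGATECALL/CALLCODE execute callee code in the caller's storage context, CALL/STATICCALL in the callee's), not ReEP's exact implementation.

```python
# Minimal model of Global Storage + GOT: values live at
# (contract_id, slot_id), i.e. L = GOT(Contract_ID@Slot_ID),
# and SLOAD/SSTORE hit the partition of the active context.
class GlobalStorage:
    def __init__(self):
        self.table = {}       # (contract_id, slot_id) -> value
        self.context = None   # contract whose storage partition is active

    def switch(self, call_type, caller, callee):
        # DELEGATECALL/CALLCODE keep the caller's storage context;
        # CALL/STATICCALL run in the callee's storage context.
        if call_type in ("DELEGATECALL", "CALLCODE"):
            self.context = caller
        else:
            self.context = callee

    def sstore(self, slot_id, value):
        self.table[(self.context, slot_id)] = value

    def sload(self, slot_id):
        return self.table.get((self.context, slot_id), 0)
```

Because each contract writes only into its own partition, two contracts sharing slot 0 no longer clobber each other's values across a cross-contract call.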
Additionally, the GOT is used to determine the Caller's contract PUSH1 0x0 Callee's Storage Return SSTORE position of different contract data in the global storage. Callee.Function() ADDRESS Slot0 caller_add The design of the GOT is depicted in Figure 12, which Call P SU STS OH R1 E0 x 1 Slot1 callee_addr CALLVALUE Slot2 (bvadd 0x6 CALLVALUE) aids in identifying the location of various contract data. PUSH1 0x2 |
SSTORE ... The storage location of the global variable is calculated P SU LOS AH D1 0 x 2 Slotn usingtheformulaL=GOT(Contract ID@Slot ID),where PUSH1 0x6 ADD Contract IDrepresentsthepartitionindexofthecontract PUSH1 0x2 SSTORE intheglobalstorage,andSlot ID correspondstotheslot STOP index in the storage of that contract. The instrumentation Fig.13.Storageandpropagationofsymbolicstates codemonitorstheSLOADandSTOREinstructionsandthe corresponding stack top value in the bytecode, which are Pathconstraintsareresolvedbyaccessingtheconstraint employedtocomputeSlot ID.ByusingGOT,truevalues solver (SMT). The completeness of the set of constraints canbeobtainedfromglobalstorage,avoidingissuessuchas during the path constraint search phase is essential for revertscausedbyincorrectorlostdata. verifying the reachability of path constraints to determine theexistenceofvulnerabilities. Global storage Caller Callee ... Contract_n 4 EVALUATION slot0 Value 1 slot0 Value 1 slot0 Value 1 slot0 Value 1 s.l.o.t1 Va.lu.e. 2 s.l.o.t1 Va.lu.e. 2 s.l.o.t1 Va.lu.e. 2 s.lo.t.1 Valu..e. 2 Inthissection,weevaluatedtheefficacyofReEPbyinvesti- gatingitsabilitytoenhancedetectionprecisionoftheOrigin Opcode:SLOAD/SSTORE Contract_ID: Contract_ID@Slot_ID Tools,itsabilitytointegratemultipletools,andunderstand contract's partition number the impact of each stage within the ReEP framework. We OPCODE Slot_ID:slot number GOT Before:Revert/Wrong Value address these considerations by answering the following After:True value researchquestions. Fig.12.GOT(GlobalOffsetTable) RQ1. How does ReEP improve the Reentrancy detection precisionoftheOriginTools? SymbolicStatePropagation.Toaddresstheissueofpass- RQ2. WhatistheimpactofReEPontherecallrate? ing symbolic parameters during cross-contract calls, ReEP RQ3. What is the extensibility of ReEP when merging returnstheassociatedconstraintsofsymbolicparametersas multipletools? return values and passes them to the caller. During cross- RQ4. 
What is the impact of different stages within ReEP?

4.1 Environment Setup and Dataset

The experiments were conducted on a machine running Ubuntu 18.04.1 LTS and equipped with 16 cores (Intel(R) Xeon(R) Gold 5217). The experimental environment was set up by either downloading Docker images or manually building the tool with the help of construction manuals (for Securify2 and Smartian). To ensure that the experiments were not overly time-consuming, we followed the time budget settings from [5], capping the maximum runtime to 120 seconds and keeping the parameters at their default settings.

TABLE 1: Statistics of detection results of the Origin Tools with ReEP (each cell: Origin / Origin+)
Tool       #TP       #FP          #Reported     Precision
Oyente     25 / 25   488 / 8      513 / 33      4.9% / 75.8%
Mythril    26 / 26   15474 / 8    15500 / 34    0.2% / 76.5%
Securify1  15 / 22   2372 / 8     2387 / 30     0.6% / 73.3%
Securify2  3 / 3     2489 / 5     2492 / 8      0.1% / 37.5%
Smartian   7 / 7     15 / 5       22 / 12       31.8% / 58.3%
Sailfish   19 / 21   2270 / 8     2289 / 29     0.8% / 72.4%
Slither    31 / 31   4587 / 9     4618 / 40     0.7% / 77.5%
EThor      26 / 28   3269 / 9     3295 / 37     0.8% / 75.7%

At the same time, ReEP can significantly reduce
falsepositivesforallofthem,especiallyforMythril,where We adopted two datasets in our study to comprehen- falsepositivesdroppedfrom15,474to8.Itisworthnoting sivelyevaluateReEP’sperformanceindetectingReentrancy that DB1(Zheng et al.’s Dataset) reported only 34 TP, while vulnerabilities.DatasetDB1wasemployedtovalidateRQ1, ReEPidentified7additionalcontracts(a20%increase)with RQ3,andRQ4,whiledatasetDB2wasusedtovalidateRQ2. Reentrancyvulnerabilitiesthatweremissedintheirmanual DB1. Zhengetal.’sDataset[5].Zhengetal.initiallycollected checks. We notified the authors of the dataset, and they 230,548verifiedcontractsfromEtherscan.Thesecon- confirmed that these 7 contracts do contain Reentrancy tractsweresubsequentlyanalyzedbyusingsixstate-of- vulnerabilitiesandsubsequentlyupdatedtheirdataset. the-artReentrancydetectiontools,and21,212contracts Inconclusion,theresultsshowthatReEPcansignificantly wereflaggedaspotentiallyvulnerabletoReentrancy improvetheprecisionoftheOriginTools.Mythrilexhibited vulnerabilities by at least one of the tools. After themaximumincreaseinprecisionfrom0.2%to76.5%,while that, two rounds of manual checks were conducted Smartian showed the minimum increase, also increasing involving 50 participants, resulting in 34 contracts from 31.8% to 58.3%. After using the ReEP, the average beingconfirmedastruepositives(TP). precisionofallOriginToolsroseby72.5%,from0.5%to73%, DB2. SmartBugs Dataset [17]. The widely used SmartBugs indicatingthatReEPsignificantlyimprovestheprecisionof |
datasetcontains143contracts.Amongthesecontracts, thedetectionresultsfortheOriginTools. 31wereidentifiedascontainingReentrancyvulnerabil- Answer to RQ1: The experimental results showed a ities.Theinclusionoftheselabeledcontractsallowed significantimprovementintheOriginTools’sprecisionwhen ustoevaluateReEP’srecallperformanceindetecting integratedwithReEP,escalatingfrom0.5%to73%onaverage. Reentrancyvulnerabilities. ThemostsignificantimprovementwasobservedinMythril, Fortoolselection,weadoptedthecriteriaproposedby witharemarkableincreaseof76.3%.Theseresultshighlight Zhengetal.[5],selectingthesamesetoftools:Oyente[9], ReEP’sexceptionalcapabilityinenhancingtheprecisionof Mythril [14], Securify [18] (both V1 and V2 versions), theOriginTools. Smartian [11], and Sailfish [19]. Additionally, we included Slither [15] and EThor [20] as part of the Origin Tools, 4.3 RQ2:ImpactonRecall making a total of eight tools used to generate Reentrancy DB1 comprises 21,212 suspicious contracts selected from vulnerabilitydetectionreports.Theseselectedtoolsareall 230,548 verified contracts. It may not provide a fair eval- presentedattopsoftwareengineeringorsecurityconferences uation of ReEP’s recall impact, as these 21,212 contracts and cover various techniques, such as symbolic execution, were marked as potentially susceptible to Reentrancy vul- formalverification,andfuzztesting.Itisworthnotingthat nerabilities by at least one of the Original tools. Therefore, some of these tools do not explicitly define Reentrancy we utilized DB2 (SmartBugs dataset), the most commonly vulnerabilitiesas“Reentrancy”.Forinstance,Mythrilreports useddataset,whichconsistsof31contractswithReentrancy two vulnerabilities related to Reentrancy, namely, external vulnerabilitiesand112contractswithoutsuchvulnerabilities, call to user-supplied address and state access after external call. toassesstherecallimpact.Therecallcalculationformulais: To ensure consistency, we adopted the same classification Recall=TP/(TP +FN). 
criteriaprovidedbyZhengetal.[5],whichoffersastandard- Table 2 presents the result statistics of ReEP on the izedclassificationschemefortheseReentrancyvulnerability SmartBugsdataset.Asshowninthetable,ReEPsignificantly detectiontools. enhancestheprecisionandF1scoresoftheOriginTools,while havingnoimpactonRecall.Amongtheeighttools,Slither 4.2 RQ1:ImprovementonPrecision demonstratedthehighestrecallperformanceonSmartBugs, We use DB1 to evaluate the impact of ReEP on improving remaining unchanged at 93.5%. This indicates that ReEP theReentrancydetectionprecision,wecomparedtheresults maintainstheRecallofOriginToolswhileimprovingpreci- ofeightOriginToolswithandwithoutReEP.Theresultsare sion. providedinTable1,whereTPandFPstandforTruePositive Warninginformationisutilizedforpathpruning,com- andFalsePositive,respectively.Origin+andOriginrepresent bined with symbolic execution to verify path reachability, theOriginToolwithorwithoutReEP,andPrecisionrefersto leading to fewer false positives and an enhanced F1 score, thedetectionprecision.Theprecisioncalculationformulais: withoutaffectingRecallofOriginTools. Precision(PRE)=TP/(TP +FP). 
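The metrics quoted above (Precision = TP/(TP+FP), Recall = TP/(TP+FN), and the F1 score as their harmonic mean) can be checked directly against a column of Table 2 — here the Oyente Origin column (TP=21, FP=43, FN=10):

```python
# Evaluation metrics used in RQ1/RQ2, verified against Table 2's
# Oyente column: precision 32.8%, recall 67.7%, F1 44.2%.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1(p, r):
    return 2 * p * r / (p + r)

p, r = precision(21, 43), recall(21, 10)
# p ≈ 0.328, r ≈ 0.677, f1(p, r) ≈ 0.442
```

Note that because ReEP only filters reported warnings, Origin+ rows change TP/FP (hence precision and F1) while FN, and therefore recall, stays fixed.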
AnswertoRQ2.ReEPutilizeswarninginformationfrom AsshowninTable1,atotalof21,212Reentrancycases Origin Tools to guide path pruning and targeted analysis, werereportedfrom230,548contractdetectionsbytheOrigin includingpathsymbolexecutionverification,resultingina9 TABLE2 StatisticsofDetectionResultsonSmartBugsDataset Tool Oyente Mythril Securify1 Securify2 Smartian Saifish Slither EThor Origin Origin+ Origin Origin+ Origin Origin+ Origin Origin+ Origin Origin+ Origin Origin+ Origin Origin+ Origin Origin+ #TP 21 21 10 10 17 17 6 6 15 15 19 19 29 29 25 25 #FP 43 8 48 7 31 8 47 9 19 7 24 8 38 9 55 9 #FN 10 10 21 21 14 14 25 25 16 16 12 12 2 2 6 6 #TN 69 104 64 105 81 104 65 103 93 105 88 104 74 103 57 103 Precision 32.8% 72.4% 17.2% 58.8% 35.4% 68.0% 11.3% 40.0% 44.1% 68.2% 44.2% 70.4% 43.3% 76.3% 31.3% 73.5% Recall 67.7% 67.7% 32.3% 32.3% 54.8% 54.8% 19.4% 19.4% 48.4% 48.4% 61.3% 61.3% 93.5% 93.5% 80.6% 80.6% F1 44.2% 70.0% 22.5% 41.7% 43.0% 60.7% 14.3% 26.1% 46.2% 56.6% 51.4% 65.5% 59.2% 84.1% 45.0% 76.9% significantreductionoffalsepositivesinthedetectionresults withoutcompromisingRecallrate. 4.4 RQ3:ExtensibilityofReEP TheprevioustwoRQsdemonstratedthatReEPcanimprove precision by decreasing false positives (FP), but it cannot increasetherecallrate.ThisisbecauseReEPreliesonOrigin Tools to provide vulnerability information, and thus the ability of a single tool to report Reentrancy vulnerability mightconstrainReEP’scapability.Fortunately,thislimitation canbecounteredbymergingmultipletools,asagreaternum- beroftoolscanprovidemorecomprehensivevulnerability information. InthisRQ,weexplorethedetectionefficiencyofReEP bycombiningmultipletools.TheextensibilityofReEPwas thoroughlyassessedbyfusingdifferentsetsofOriginTools, including combinations of two, four, six, and eight tools, |
resulting in a total of 127 unique combinations. All of the 127 combinations' experimental data are provided at our GitHub repository. When combining multiple tools, ReEP analyzes the detection outcomes of the different tools and determines the result by using a logical OR operation. To clearly show the results, we categorized them into three groups: Best_combo, Worst_combo, and Random_combo. Specifically, Best_combo represents the combination with the highest precision, Worst_combo corresponds to the combination with the lowest precision, and Random_combo includes randomly selected combinations.

Additionally, as the number of tools increases, the gap between the highest and lowest precision values decreases, resulting in more consistent results. Overall, ReEP effectively enhances detection precision by merging multiple tools, demonstrating its impressive extensibility.

Answer to RQ3. Our experiments demonstrated that as the number of merged tools increased, the improvement in precision became more stable. The Best_combo of Merge6 achieved a peak precision of 83.6%. Adding more tools did not significantly enhance the performance. These findings highlight the extensibility and effectiveness of ReEP in detecting Reentrancy vulnerabilities.

4.5 RQ4: Impact of Different Stages in ReEP

To evaluate the impact of each stage of ReEP on the overall effectiveness, we conducted comparative experiments on precision and time consumption for Merge6's Best_combo using three modes: Stage1 (Target State Search), Stage2 (Symbolic Execution Verification), and Stage1&2. In Stage1, ReEP integrated the FDG and SMC-CFG analysis to prune the path, while Stage2 involved the verification of path accessibility through symbolic execution.

Table 3 presents a comparison of the detection results among the three modes, where Avg.
Tool represents the average detection results from the Origin Tools. As evidenced by the table data, we can observe that Stage1 detected all 41 Reentrancy vulnerabilities (TP) but had 556 false positives (FP). Stage1 does not significantly improve the precision (from 0.5% to 6.9%) due to a high number of false positives. In contrast, Stage2's symbolic execution verification remarkably reduces the number of false positives to only 51, which is less than one-tenth of the FPs identified in Stage1. Nonetheless, the number of true positives detected by symbolic execution alone is 35, as symbolic execution may encounter path explosion, leading to the omission of certain TP contracts. The Stage1&2 mode, which combines symbolic execution verification with path pruning, achieves the highest precision of 83.6%.

Fig. 14. Comparison for merging multiple tools

Figure 14 compares the precision of Merge2, Merge4, Merge6, and Merge8 with ReEP. Among the 28 combinations of Merge2, ReEP achieved the Best_combo by merging Mythril and Slither, resulting in the highest precision of 67.1%. The Best_combo of Merge6 achieved the highest precision of 83.6% among all the combinations, indicating the optimal performance achievable with these eight tools. This suggests that ReEP has already reached the best possible performance, and further increasing the number of merged tools does not result in an improvement in overall precision.

TABLE 3: Comparison of Detection Results for Different Modes
            Avg. Tool   Stage1   Stage2   Stage1&2
#TP         19          41       35       41
#FP         3870        556      51       8
Precision   0.5%        6.9%     40.7%    83.6%
Time (s)    8.18        1.3      23.8     15.6

Table 3 also provides the average time consumption for the three modes. While Stage2 takes an average of 23.8 seconds,
the processing time is reduced to 15.6 seconds in Stage1&2. The pruning in Stage1 plays a crucial role in enhancing ReEP's overall performance. Experimental observations indicate that Stage1 combines FDG with SMC-CFG pruning, leading to reduced running time and mitigated symbolic execution path explosion. Furthermore, it facilitates targeted path analysis, enabling deeper path searches and improving the overall precision of vulnerability detection.

Answer to RQ4. The Stage1&2 mode, by combining path pruning with symbolic execution verification, achieves efficient path reachability validation, resulting in the highest precision of 83.6% when merging multiple tools.

5 THREATS TO VALIDITY

5.1 Internal Validity

The internal validity threats stem from the reliance on existing Reentrancy detection tools. While ReEP can improve precision by reducing false positives, it is not capable of detecting more Reentrancy cases than the Origin Tools (as observed in the experimental results of RQ2). Consequently, the capabilities of ReEP might be constrained by the limitations of the Origin Tools. Fortunately, a key strength of ReEP is its ability to merge multiple tools to enhance its detection capabilities. As demonstrated in RQ3, ReEP achieved a peak precision of 83.6% by merging six tools, underlining its impressive extensibility. Even as new Reentrancy attack patterns emerge, ReEP can integrate new detection tools, effectively extending its capabilities to adapt to evolving vulnerabilities.

5.2 External Validity

The external validity threats primarily stem from the inherent challenges associated with manual inspections of large datasets, which tend to be highly error-prone and time-consuming. The dataset DB1 comprises 230,548 verified contracts obtained from Etherscan, among which 21,212 contracts were flagged for Reentrancy vulnerabilities by state-of-the-art automated detection tools. Manually verifying this large-scale real-world dataset can be error-prone and time-consuming. To ensure the dataset's accuracy, the authors adopted a rigorous approach involving 50 participants, including graduate students and PhDs with extensive experience in smart contract research. These participants conducted two rounds of thorough checks on the detection results, minimizing bias and ensuring the reliability of the manual examination of the reentrant contracts detected by the tools. Tool evaluation was thus utilized to enhance the accuracy of the results. Additionally, by employing ReEP and integrating eight state-of-the-art tools, we successfully identified 7 additional real-world Reentrancy vulnerability contracts that were initially missed. Therefore, this dataset represents real-world smart contracts deployed on Ethereum with Reentrancy vulnerabilities.

6 RELATED WORK

6.1 Symbolic Execution

Recent research on symbolic execution in smart contract security can be divided into two categories according to the intended effect: accuracy and efficiency.

To enhance the accuracy of symbolic execution in detecting security vulnerabilities, various tools such as Oyente [9], Mythril [14], Manticore [10], and Maian [21] conduct thorough analyses of contracts and explore all possible paths to generate vulnerability reports. Sailfish [19] utilizes a storage dependency graph (SDG) to detect vulnerabilities in contracts. SmartDagger [7] constructs the cross-contract control flow graph from bytecodes, facilitating cross-contract vulnerability detection. However, blind path traversal can easily lead to imprecise results, especially in cross-contract analysis where different contracts sharing the same storage may cause data confusion. More importantly, many existing tools face a significant limitation in their capacity to adapt to and detect evolving vulnerability patterns. ReEP distinguishes itself by validating existing tool results to reduce false positives and integrating different tools to detect various vulnerability patterns.

To improve the efficiency of symbolic execution for faster detection of security vulnerabilities, tools like Oyente [9], MPro [22], and Mythril [14] analyze smart contract bytecode. They apply corresponding path pruning strategies to expedite path traversal and mitigate path explosion issues. Smartian [11], SmartDagger [7], and Smartest [12] have optimized speed by refining search strategies for symbolic execution paths or employing heuristic algorithms. Additionally, Park [23] proposes a parallel symbolic execution-based approach to accelerate vulnerability detection. ReEP stands out by guiding symbolic execution based on the detection results of existing tools, achieving efficient path searching and traversal.

6.2 Dynamic and Static Detection

Traditional static detection techniques for smart contracts [6]–[8], [24], [25] have several limitations. The public nature of the blockchain, along with the diverse permission controls in smart contracts, complicates static vulnerability detection, often resulting in false positives. Additionally, unclear call relationships between functions require further exploration for effective identification of the target states triggering vulnerabilities. Dynamic detection methods [9]–[13], [22], [26]–[28], relying on program testing and verification, enhance result reliability. However, the inability to access global state information and perform cross-contract analysis leads to path explosion and high resource consumption. Resource constraints and vulnerability identification limitations can further contribute to detection failures or errors.

FuzzSlice [29] performs fuzz testing within a given time budget to eliminate potential false positives in static analysis. The difference is that ReEP validates suspicious vulnerability information from existing tools through symbolic execution, showcasing strong extensibility and avoiding detection failures due to the inability to generate effective inputs. During the target state search phase, ReEP explores vulnerability target states to guide path pruning, eliminating irrelevant path branches to enhance path traversal efficiency. In the symbolic execution verification phase, program instrumentation is combined to achieve vulnerability state-guided symbolic execution verification. ReEP thus combines the strengths of both static and dynamic methods, which enables efficient identification and analysis of critical states to improve precision.
By analyzing the program dependency relation- [13] C.F.Torres,J.Schu¨tte,andR.State,“Osiris:Huntingforinteger |
bugsinethereumsmartcontracts,”inProceedingsofthe34thAnnual shipsofvulnerabilityfunctionsinthedetectionreport,ReEP ComputerSecurityApplicationsConference,2018,pp.664–676. guidespathpruningandsymbolicexecution.Cross-contract [14] ConsenSys,“Mythril,”2020.[Online].Available:https://github. symbolicexecutionisemployedtoverifythereachabilityof com/ConsenSys/mythril vulnerabilitypathsandconfirmtheexistenceofvulnerabili- [15] J. Feist, G. Grieco, and A. Groce, “Slither Analyzer,” Jun. 2023. [Online].Available:https://github.com/crytic/slither ties.Weimplementedandvalidatedourtoolwitheightstate- [16] Solidity, “Solidity compiler,” 2013. [Online]. Available: https: of-the-artdetectiontools.AfterapplyingReEP,theaverage //docs.soliditylang.org/en/latest/installing-solidity.html precision of these eight tools increased significantly from [17] T.Durieux,J.F.Ferreira,R.Abreu,andP.Cruz,“Empiricalreview ofautomatedanalysistoolson47,587ethereumsmartcontracts,”in 0.5%to73%.Furthermore,bymergingsixtools,theprecision ProceedingsoftheACM/IEEE42ndInternationalconferenceonsoftware further improved, reaching a maximum of 83.6%, while engineering,2020,pp.530–541. thebestperformanceofthecurrentstate-of-the-arttoolsis [18] P.Tsankov,A.Dan,D.Drachsler-Cohen,A.Gervais,F.Buenzli,and only31.8%.TheseresultsdemonstratethatReEPeffectively M.Vechev,“Securify:Practicalsecurityanalysisofsmartcontracts,” inProceedingsofthe2018ACMSIGSACConferenceonComputerand unitesthestrengthsofexistingworks,enhancestheprecision CommunicationsSecurity,2018,pp.67–82. of Reentrancy vulnerability detection tools, and efficiently [19] P.Bose,D.Das,Y.Chen,Y.Feng,C.Kruegel,andG.Vigna,“Sailfish: identifiesReentrancyvulnerabilitiesinreal-worldscenarios. Vettingsmartcontractstate-inconsistencybugsinseconds,”in2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022, pp. In future work, we aim to expand the scope and ca- 161–178. pabilitiesofvulnerabilitydetectionbycombiningmultiple [20] C. Schneidewind, I. Grishchenko, M. 
Scherer, and M. Maffei, technologies,coveringabroaderrangeofvulnerabilitytypes, “Ethor:Practicalandprovablysoundstaticanalysisofethereum and supporting the detection of bytecode, among other smartcontracts,”inProceedingsofthe2020ACMSIGSACConference onComputerandCommunicationsSecurity,ser.CCS’20. NewYork, enhancements. NY,USA:AssociationforComputingMachinery,2020,p.621–640. [Online].Available:https://doi.org/10.1145/3372297.3417250 [21] I.Nikolic´,A.Kolluri,I.Sergey,P.Saxena,andA.Hobor,“Finding REFERENCES thegreedy,prodigal,andsuicidalcontractsatscale,”inProceedings ofthe34thannualcomputersecurityapplicationsconference,2018,pp. [1] PeckShield, “Web3 industry security report,” 2022. [Online]. 653–663. Available:https://peckshield.com/static/pdf/2023.pdf [22] W.Zhang,S.Banescu,L.Pasos,S.Stewart,andV.Ganesh,“Mpro: [2] SLOWMIST,“2022-blockchain-security-and-aml-analysis-annual- Combiningstaticandsymbolicanalysisforscalabletestingofsmart report(en),”2023.[Online].Available:https://www.slowmist.com/ contract,”in2019IEEE30thInternationalSymposiumonSoftware report ReliabilityEngineering(ISSRE). IEEE,2019,pp.456–462. [3] C. Ferreira Torres, M. Baden, R. Norvill, B. B. Fiz Pontiveros, [23] P.Zheng,Z.Zheng,andX.Luo,“Park:acceleratingsmartcontract H. Jonker, and S. Mauw, “Ægis: Shielding vulnerable smart vulnerabilitydetectionviaparallel-forksymbolicexecution,”in contracts against attacks,” in Proceedings of the 15th ACM Asia Proceedingsofthe31stACMSIGSOFTInternationalSymposiumon ConferenceonComputerandCommunicationsSecurity,2020,pp.584– SoftwareTestingandAnalysis,2022,pp.740–751. 597. [24] C.Schneidewind,I.Grishchenko,M.Scherer,andM.Maffei,“ethor: [4] R. Ji, N. He, L. Wu, H. Wang, G. Bai, and Y. 
Guo, “Deposafe: Practicalandprovablysoundstaticanalysisofethereumsmart Demystifying the fake deposit vulnerability in ethereum smart contracts,”inProceedingsofthe2020ACMSIGSACConferenceon contracts,”in202025thInternationalConferenceonEngineeringof ComputerandCommunicationsSecurity,2020,pp.621–640. ComplexComputerSystems(ICECCS). IEEE,2020,pp.125–134. [25] S. Tikhomirov, E. Voskresenskaya, I. Ivanitskiy, R. Takhaviev, [5] Z.Zheng,N.Zhang,J.Su,Z.Zhong,M.Ye,andJ.Chen,“Turn E. Marchenko, and Y. Alexandrov, “Smartcheck: Static analysis therudder:Abeaconofreentrancydetectionforsmartcontracts ofethereumsmartcontracts,”inProceedingsofthe1stinternational onethereum,”in2023IEEE/ACM45thInternationalConferenceon workshop on emerging trends in software engineering for blockchain, SoftwareEngineering(ICSE),2023,pp.295–306. 2018,pp.9–16. [6] J.Feist,G.Grieco,andA.Groce,“Slither:astaticanalysisframe- |
[26] F. Ma, Z. Xu, M. Ren, Z. Yin, Y. Chen, L. Qiao, B. Gu, H. Li, work for smart contracts,” in 2019 IEEE/ACM 2nd International Y. Jiang, and J. Sun, “Pluto: Exposing vulnerabilities in inter- WorkshoponEmergingTrendsinSoftwareEngineeringforBlockchain contract scenarios,” IEEE Transactions on Software Engineering, (WETSEB). IEEE,2019,pp.8–15. vol.48,no.11,pp.4380–4396,2021. [7] Z.Liao,Z.Zheng,X.Chen,andY.Nan,“Smartdagger:abytecode- [27] J. Su, H.-N. Dai, L. Zhao, Z. Zheng, and X. Luo, “Effectively based static analysis approach for detecting cross-contract vul- generating vulnerable transaction sequences in smart contracts nerability,”inProceedingsofthe31stACMSIGSOFTInternational withreinforcementlearning-guidedfuzzing,”in37thIEEE/ACM SymposiumonSoftwareTestingandAnalysis,2022,pp.752–764. InternationalConferenceonAutomatedSoftwareEngineering,2022,pp. [8] J. Ye, M. Ma, Y. Lin, Y. Sui, and Y. Xue, “Clairvoyance: Cross- 1–12. contract static analysis for detecting practical reentrancy vul- [28] T. D. Nguyen, L. H. Pham, J. Sun, Y. Lin, and Q. T. Minh, nerabilities in smart contracts,” in Proceedings of the ACM/IEEE “sfuzz:Anefficientadaptivefuzzerforsoliditysmartcontracts,”in 42nd International Conference on Software Engineering: Companion ProceedingsoftheACM/IEEE42ndInternationalConferenceonSoftware Proceedings,2020,pp.274–275. Engineering,2020,pp.778–788. [9] L.Luu,D.-H.Chu,H.Olickel,P.Saxena,andA.Hobor,“Making [29] A.Murali,N.S.Mathews,M.Alfadel,M.Nagappan,andM.Xu, smartcontractssmarter,”inProceedingsofthe2016ACMSIGSAC “Fuzzslice: Pruning false positives in static analysis warnings conferenceoncomputerandcommunicationssecurity,2016,pp.254–269. through function-level fuzzing,” in 2024 IEEE/ACM 46th Inter- [10] M. Mossberg, F. Manzano, E. Hennenfent, A. Groce, G. Grieco, nationalConferenceonSoftwareEngineering(ICSE). IEEEComputer J.Feist,T.Brunson,andA.Dinaburg,“Manticore:Auser-friendly Society,2023,pp.767–779. 
symbolicexecutionframeworkforbinariesandsmartcontracts,”in [30] G. Wood et al., “Ethereum: A secure decentralised generalised 201934thIEEE/ACMInternationalConferenceonAutomatedSoftware transactionledger,”Ethereumprojectyellowpaper,vol.151,no.2014, Engineering(ASE). IEEE,2019,pp.1186–1189. pp.1–32,2014.12 [31] Z. Zheng, S. Xie, H.-N. Dai, W. Chen, X. Chen, J. Weng, and [42] P.Godefroid,N.Klarlund,andK.Sen,“Dart:Directedautomated M.Imran,“Anoverviewonsmartcontracts:Challenges,advances randomtesting,”inProceedingsofthe2005ACMSIGPLANconference andplatforms,”FutureGenerationComputerSystems,vol.105,pp. onProgramminglanguagedesignandimplementation,2005,pp.213– 475–491,2020. 223. [32] J.Chen,X.Xia,D.Lo,J.Grundy,X.Luo,andT.Chen,“Defining [43] F.VictorandA.M.Weintraud,“Detectingandquantifyingwash smartcontractdefectsonethereum,”IEEETransactionsonSoftware tradingondecentralizedcryptocurrencyexchanges,”inProceedings Engineering,vol.48,no.1,pp.327–345,2020. oftheWebConference2021,2021,pp.23–32. [33] S.So,M.Lee,J.Park,H.Lee,andH.Oh,“Verismart:Ahighly [44] T.Chen,Z.Li,Y.Zhang,X.Luo,T.Wang,T.Hu,X.Xiao,D.Wang, precisesafetyverifierforethereumsmartcontracts,”in2020IEEE J.Huang,andX.Zhang,“Alarge-scaleempiricalstudyoncontrol SymposiumonSecurityandPrivacy(SP). IEEE,2020,pp.1678–1694. flowidentificationofsmartcontracts,”in2019ACM/IEEEInterna- [34] smartbugs,“Smartbugswilddataset,”2020.[Online].Available: tionalSymposiumonEmpiricalSoftwareEngineeringandMeasurement https://github.com/smartbugs/smartbugs-wild (ESEM). IEEE,2019,pp.1–11. [35] B. Zhao, Z. Li, S. Qin, Z. Ma, M. Yuan, W. Zhu, Z. Tian, and [45] T. Chen, Y. Zhang, Z. Li, X. Luo, T. Wang, R. Cao, X. Xiao, C.Zhang,“{StateFuzz}:System{Call-Based}{State-Aware}linux andX.Zhang,“Tokenscope:Automaticallydetectinginconsistent driver fuzzing,” in 31st USENIX Security Symposium (USENIX behaviorsofcryptocurrencytokensinethereum,”inProceedingsof Security22),2022,pp.3273–3289. 
the2019ACMSIGSACconferenceoncomputerandcommunications [36] M.R.Parvez,“Combiningstaticanalysisandtargetedsymbolic security,2019,pp.1503–1520. executionforscalablebug-findinginapplicationbinaries,”Master’s [46] E.Coppa,H.Yin,andC.Demetrescu,“Symfusion:Hybridinstru- thesis,UniversityofWaterloo,2016. mentationforconcolicexecution,”in37thIEEE/ACMInternational [37] S.Arzt,S.Rasthofer,R.Hahn,andE.Bodden,“Usingtargetedsym- ConferenceonAutomatedSoftwareEngineering,2022,pp.1–12. bolicexecutionforreducingfalse-positivesindataflowanalysis,” [47] Inpluslab, “Smart contract dataset,” 2021. [Online]. Available: inProceedingsofthe4thACMSIGPLANInternationalWorkshopon http://xblock.pro/#/ |
StateoftheArtinProgramAnalysis,2015,pp.1–6. [48] etherscan, “A block explorer for ethereum,” 2020. [Online]. [38] Y.Xue,M.Ma,Y.Lin,Y.Sui,J.Ye,andT.Peng,“Cross-contract Available:https://etherscan.io/ staticanalysisfordetectingpracticalreentrancyvulnerabilitiesin [49] Remix, “Ethereum ide and tools for the web,” 2023. [Online]. smartcontracts,”inProceedingsofthe35thIEEE/ACMInternational Available:https://github.com/ethereum/remix ConferenceonAutomatedSoftwareEngineering,2020,pp.1029–1040. [50] E.Org,“Ethereum,”2023.[Online].Available:https://ethereum. [39] F.Contro,M.Crosara,M.Ceccato,andM.DallaPreda,“Ethersolve: org/en/ Computinganaccuratecontrol-flowgraphfromethereumbyte- [51] P.Tsankov,A.Dan,D.Drachsler-Cohen,A.Gervais,F.Buenzli,and code,”in2021IEEE/ACM29thInternationalConferenceonProgram M.Vechev,“Securify:Practicalsecurityanalysisofsmartcontracts,” Comprehension(ICPC). IEEE,2021,pp.127–137. inProceedingsofthe2018ACMSIGSACConferenceonComputerand [40] X. Rival and K. Yi, Introduction to static analysis: an abstract CommunicationsSecurity,2018,pp.67–82. interpretationperspective. MitPress,2020. [52] C.F.Torres,A.K.Iannillo,A.Gervais,andR.State,“Confuzzius: [41] J. Krupp and C. Rossow, “teether: Gnawing at ethereum to Adatadependency-awarehybridfuzzerforsmartcontracts,”in automaticallyexploitsmartcontracts,”in27th{USENIX}Security 2021IEEEEuropeanSymposiumonSecurityandPrivacy(EuroS&P), Symposium({USENIX}Security18),2018,pp.1317–1333. 2021,pp.103–119. |
2402.09299

Trained Without My Consent: Detecting Code Inclusion In Language Models Trained on Code

VAHID MAJDINASAB, Polytechnique Montreal, Canada
AMIN NIKANJAM, Polytechnique Montreal, Canada
FOUTSE KHOMH, Polytechnique Montreal, Canada

Code auditing ensures that the developed code adheres to standards, regulations, and copyright protection by verifying that it does not contain code from protected sources. The recent advent of Large Language Models (LLMs) as coding assistants in the software development process poses new challenges for code auditing. The dataset for training these models is mainly collected from publicly available sources. This raises the issue of intellectual property infringement as developers' codes are already included in the dataset. Therefore, auditing code developed using LLMs is challenging, as it is difficult to reliably assert if an LLM used during development has been trained on specific copyrighted codes, given that we do not have access to the training datasets of these models. Given the non-disclosure of the training datasets, traditional approaches such as code clone detection are insufficient for asserting copyright infringement. To address this challenge, we propose a new approach, TraWiC; a model-agnostic and interpretable method based on membership inference for detecting code inclusion in an LLM's training dataset. We extract syntactic and semantic identifiers unique to each program to train a classifier for detecting code inclusion. In our experiments, we observe that TraWiC is capable of detecting 83.87% of codes that were used to train an LLM. In comparison, the prevalent clone detection tool NiCad is only capable of detecting 47.64%. In addition to its remarkable performance, TraWiC has low resource overhead in contrast to pair-wise clone detection that is conducted during the auditing process of tools like CodeWhisperer reference tracker, across thousands of code snippets.

CCS Concepts: • Software and its engineering → Traceability; Software libraries and repositories; Software prototyping.

Additional Key Words and Phrases: Large Language Models, Intellectual Property Infringement, Code Licensing, Dataset Inclusion Detection, Membership Inference Attack

ACM Reference Format:
Vahid Majdinasab, Amin Nikanjam, and Foutse Khomh. 2024. Trained Without My Consent: Detecting Code Inclusion In Language Models Trained on Code. 1, 1 (October 2024), 46 pages. https://doi.org/XXXXXXX.XXXXXXX

Authors' addresses: Vahid Majdinasab, vahid.majdinasab@polymtl.ca; Amin Nikanjam, amin.nikanjam@polymtl.ca; Foutse Khomh, foutse.khomh@polymtl.ca; Polytechnique Montreal, Montreal, Quebec, Canada.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2024 Association for Computing Machinery.
Manuscript submitted to ACM

1 INTRODUCTION

Machine Learning (ML) models have been used in various industrial sectors ranging from healthcare to finance [4]. These models allow for easier and faster data analysis and decision-making. A significant development in ML is the recent rise of Large Language Models (LLM) which are now being used for various Software Engineering (SE) tasks such as software development, maintenance, and deployment. LLMs are reported to increase developers' productivity and reduce software development time [10, 20]. These models are trained on code collected from publicly available sources
such as GitHub, which contains licensed code. However, the inaccessibility of these models' training datasets raises the important challenge of auditing the generated code. Code clone detection techniques are one of the more prominent approaches used to safeguard against using copyrighted code during software development [56, 62]. However, to do so, having access to the codes used for training the models (i.e., original codes) is required, which is not possible when analyzing codes generated by an LLM as the training datasets of these models are not made publicly available.

Previous works have shown that LLMs can re-create instances from their training data [17–19]. This is known as the memorization issue, in which ML models re-create their training dataset instead of generalizing [81]. By exploiting memorization, Membership Inference Attacks (MIA) [68] have shown to be effective at both extracting information from ML models and inferring the presence of specific instances in a model's training dataset [32, 42, 44, 49, 64]. Inspired by these approaches, we present TraWiC: a model-agnostic, interpretable approach that exploits the memorization ability of LLMs trained on code to detect whether codes from a project (collection of codes) were included in a model's training dataset. To assess whether a given project was included in the training dataset of a model 𝑀, TraWiC parses the codes in the project to extract unique textual elements (variable names, comments, etc.) from each of them. Afterwards, these elements are masked and the model is queried to predict what the masked element is. Finally, the generated outputs of the model 𝑀 are compared with the original masked elements to determine if the code under analysis was included in the model's training dataset.

To evaluate our approach, we constructed two datasets of projects which were used to train three distinct LLMs. These datasets act as the ground truth for inclusion detection and are comprised of over 10,700 code files from 314 projects. Our results indicate that TraWiC achieves an accuracy of 83.87% for dataset inclusion detection. As code clone detectors are traditionally used for code auditing [56, 66], we compare TraWiC against two of the widely used open-source code clone detectors, NiCad [59] and JPlag [75], to have an appropriate baseline for our evaluations. Our analysis shows that NiCad is only capable of achieving an accuracy of 47.64% for detecting dataset inclusion in an LLM's training dataset while JPlag achieves an accuracy of 55%. Furthermore, unlike code clone detection approaches, TraWiC does not require pair-wise comparison between the codes to detect inclusion, which makes it more computationally efficient compared to clone detection.

Briefly, this paper makes the following contributions:

• We propose a model-agnostic, interpretable, and efficient approach for dataset inclusion detection for LLMs.
• Our approach outperforms the widely used code clone detection tools NiCad and JPlag.
• We conduct thorough error and sensitivity analyses on TraWiC's performance. We demonstrate how targeting MIAs at non-syntactical parts of code (strings, documentation) can help detect dataset inclusion even in the presence of data obfuscation. Furthermore, we show how conducting MIAs on less prevalent textual elements within the code, such as infrequent variable names, can be useful for dataset inclusion detection.
• We demonstrate TraWiC's effectiveness on multiple LLMs, namely SantaCoder, Llama-2, and Mistral.
• We release TraWiC's code alongside our data to be used by other researchers and studies [72].

The rest of the paper is organized as follows: In Section 2, we present background concepts and the related literature necessary for understanding the work presented in this paper. Section 3 introduces our methodology and details of TraWiC's implementation. We present our experiment design in Section 4 and our results in Section 5. We review the related works in Section 6, discuss the limitations of our approach in Section 7, and discuss the threats to our work's validity in Section 8. Finally, we conclude the paper in Section 9.
2 BACKGROUND

In this section, we will review the necessary background and concepts related to our approach. TraWiC is designed to work with LLMs trained on code, therefore, we will first review the literature on large language models. Afterward, we will go over code clone detection as such approaches can be used for dataset inclusion detection in training datasets of LLMs (albeit with high compute costs). Finally, we will review the current works on MIAs.

2.1 Large Language Models

One of the highly active fields of research in ML is Natural Language Processing (NLP). Here, research is focused on designing approaches that are capable of understanding language and generating responses that are contextually relevant, accurate, and coherent. Neural Language Models (NLMs) were trained in order to predict the probability of the next word (token) in a sequence by considering the previous tokens [54]. Models such as ELMo [51] and BERT [25] were introduced by training a model on a large dataset of unlabeled corpora to learn word representations based on the context (other tokens in the sequence). This process is known as pre-training; a model is trained on an unlabeled dataset to learn word representations, language structure, syntax, and semantics [28, 52]. Pre-trained models can be further fine-tuned for specific tasks by continuing their training on a labeled dataset designed for the task at hand. By incorporating the Transformer architecture and using the self-attention mechanism [73], models such as GPT-2 [55] were trained using the pre-training/fine-tuning paradigm alongside introducing new architectures. Models such as GPT-3 [15] were trained by mainly using similar architectures but scaling the size of the model alongside its training data. As the number of these models' parameters is extremely large, they are called Large Language Models. Code is also used in the training dataset of LLMs that are not fine-tuned for program synthesis as it has been shown to increase a model's capability in reasoning [11, 20]. LLMs fine-tuned on large datasets of code have displayed high performance in coding tasks. Currently, enterprises are offering LLMs fine-tuned for coding tasks such as GitHub's Copilot (based on Codex [20]), Amazon's CodeWhisperer, and Google's Codey. Alongside these enterprise services, many open-source models such as SantaCoder [9], WizardCoder [45], CodeLlama [61], etc., are available. These LLMs were trained using different approaches, are based on various architectures, and are designed to achieve different objectives (program synthesis, code summarization, comment generation, etc.). However, regardless of their intended use, all these models require a large corpus of high-quality code (code that contains documentation explaining the functionality of its different parts) during their pre-training and fine-tuning phases. The primary resources for collecting such data are GitHub and StackOverflow, which contain vast amounts of open-source codes, and most of the open-source datasets online (e.g., The Stack [40]) are just an accumulation of data collected from these resources.

2.2 Code Clone Detection

Code Clone Detection (CCD) is a widely researched area and is an important aspect of software quality assurance. A fault/bug/vulnerability that is present in one project can exist in other projects that contain similar codes and correction
is required for all clones. Other software engineering activities such as plagiarism detection, program understanding, code compaction, etc. require identifying codes that are similar to each other either semantically or syntactically as well [60]. CCD tools such as NiCad [59], SorcererCC [63], Simian [3], etc., are available as either open-source or enterprise tools to facilitate CCD activities. In CCD literature, code clones are categorized into four categories [58]:

• Type-1: An exact copy without modifications (except for whitespace and comments).
• Type-2: A syntactically identical copy with differences in variable/type/function identifiers.
• Type-3: Similar code fragments with extra addition or deletion of statements.
• Type-4: Code fragments with different syntax, but similar semantics.

By exploiting the high learning capacity of Deep Learning (DL), a large amount of research has been conducted on using DL-based approaches for CCD. To detect clones, these models use various code representations such as Abstract Syntax Trees (AST), Control Flow Graphs (CFG), code metrics (number of functions, number of variable calls, etc.), and raw code itself. DL-based approaches such as CoCoNut [46], AST-NN [83], FCCA [35], etc. have been able to achieve high performance on clone detection benchmarks. However, there exist some problems with using CCD models for dataset inclusion detection. Namely, the high cost of training/re-training/inference of large DL models, and the computational infeasibility of attempting clone detection in a pair-wise manner for all of the codes that exist in a dataset.

2.3 Membership Inference Attacks

Memorization is a long-standing problem in ML wherein an ML model reproduces instances from its training data instead of generalizing [81]. By exploiting the memorization problem, MIAs [68] have shown to be able to extract information from ML models regarding their training data. Formally, given access to query on a model 𝑀, and a data record 𝐷, an MIA is considered successful if an attacker can successfully identify 𝐷 being in 𝑀's training data or not. Shokri et al. [68] studied how an attacker can identify the presence of a record in a model's training dataset by analyzing the model's output probabilities. Based on Shokri et al.'s work, many approaches have been proposed for both attacks on a model for extracting information about its training dataset and defenses in order to prevent the model from recreating sensitive records [33, 36, 37, 68]. Carlini et al. [18] attempted to extract the training data of GPT-2 by initializing the model with a sequence start token and having the model generate tokens until it generates an end-of-sequence token. They generate a large number of sequences in this fashion and remove samples that have low likelihood. Their approach is based on the idea that samples with high likelihood are generated either because of trivial memorization or repeated sub-strings in their dataset. By doing so, they were able to extract information about individuals' contact information, news pieces, tweets, etc. In another work, Carlini et al. [17] study the quantity of memorization in LLMs. They show that by increasing the number of parameters of a model, the amount of memorized data increases as well. Chang et al. [19] attempt to probe ChatGPT and GPT-4 by using "name cloze" membership inference queries. In their approach, they query ChatGPT/GPT-4 by giving it a context (which in their work are texts extracted from novels), mask the token which contains a name, and inspect the output's similarity to the original token. The underlying idea is that without the model seeing the input during its training, the name should be impossible to predict. "Counterfactual Memorization" [82] is another approach proposed for studying the memorization issue in LLMs. Here, the changes in a model's predictions are characterized by omitting a particular document during its training. Even though this approach shows promise in studying a model's degree of memorization, it is very computationally expensive.

3 TRAWIC

In this section, we first define the concepts and terminology used throughout the rest of this paper to describe TraWiC, discuss the motivating example behind TraWiC's design, and then show an example of how code is processed for dataset inclusion detection in our approach. We will then explain TraWiC's dataset inclusion detection pipeline in detail.
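As background intuition for the likelihood-based membership inference attacks surveyed in Section 2.3, the sketch below caricatures the core idea: records the model assigns unusually high likelihood are flagged as probable training-set members. The scoring function here is a hard-coded toy stand-in for a real model's log-likelihood API, and the threshold value is an arbitrary assumption chosen for the toy; neither comes from the papers cited above.

```python
# Toy sketch of likelihood-based membership inference.
# `toy_model_logprob` is a stand-in for a real LLM's scoring API: it
# pretends the model assigns a higher log-likelihood to "memorized"
# sequences (a hard-coded set, purely for illustration).
def toy_model_logprob(sequence):
    memorized = {"user_id = 42", "secret_token = 'abc'"}
    base = -2.0 * len(sequence.split())      # generic per-token cost
    return base + (5.0 if sequence in memorized else 0.0)

def is_member(sequence, threshold=-1.0):
    """Flag `sequence` as a likely training-set member if the model's
    average per-token log-likelihood exceeds an (assumed) threshold."""
    n_tokens = max(len(sequence.split()), 1)
    return toy_model_logprob(sequence) / n_tokens > threshold

print(is_member("user_id = 42"))
print(is_member("completely new code"))
```

A real attack would replace `toy_model_logprob` with per-token probabilities queried from the target model, and would calibrate the threshold on reference data rather than fixing it by hand.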
3.1 Definitions

As explained in Section 2.3, memorization can be leveraged in MIAs to investigate the presence of data records in a model's training dataset. Previous works have used exact memorization [70] and name cloze [19] approaches to analyze the outputs of a model for detecting dataset inclusion. In both works, parts of the input (i.e., token(s)) to the model are masked, and the model is tasked with predicting the masked tokens given the rest of the input as context. Afterwards, the model's outputs are compared to the original masked token. In these approaches, the model should not be able to predict the exact token (given that it does not exist anywhere else in the input and is unique enough) unless it has seen the input during its training. In our work, we follow a similar rationale with modifications for code. We consider code as consisting of two distinct parts: syntax and documentation; with syntax being the code itself (which follows the programming language's structural rules) and documentation being an explanation of the syntax for future reference. From here on, we use the word "script" to denote a single file in a "project" (which can consist of multiple scripts) that contains the code alongside its documentation. We formally define masking as follows:

Definition 1 (masking): Given input 𝐼 which consists of tokens [𝑡1, 𝑡2, ..., 𝑡𝑛], and a chosen token 𝑇, masking 𝑇 entails replacing all the tokens in 𝐼 which match 𝑇 with another token named 𝑀𝐴𝑆𝐾. The model 𝑀 is queried to predict what 𝑀𝐴𝑆𝐾 is, given 𝐼 as input.

We also define exact and partial matching of masked tokens as follows:

Definition 2 (exact match): Let 𝐼 be an input composed of a sequence of tokens [𝑡1, 𝑡2, ..., 𝑡𝑛] with a specific token 𝑇 masked. An exact match is detected if the output of model 𝑀 for the masked token 𝑇 is identical to the actual token 𝑇: 𝑀(𝐼) = 𝑇.

Definition 3 (partial match): Let 𝐼 be an input composed of a sequence of tokens [𝑡1, 𝑡2, ..., 𝑡𝑛] with a specific token 𝑇 masked. Given a similarity function 𝑆, a partial match is detected if the similarity score 𝑆(𝑀(𝐼), 𝑇) between the output of the model 𝑀 and the masked token 𝑇 surpasses a specified threshold, 𝐿.

In alignment with established coding standards, such as those described in [48], variable/function/class names must be selected in a manner that reflects their purpose and context within the codebase. These identifiers, beyond the common names that are universally used (e.g., using "get" for accessors or "set" for mutators), are unique to the script and its corresponding project, and reflect the developers' individual coding style. Additionally, documentation, which is an important part of the program, is closer to natural language as it is not constrained to the programming language's syntax and as such, allows for a greater expression of the developers' individual styles [29, 79]. Therefore, in order to detect whether a project was used in a model's training dataset, we break each script in the project into 2 different identifier groups and compare the model's generations with the inputs as follows:

• Syntactic identifiers are names or symbols used to represent various elements in programming languages. These identifiers are used to name variables, functions, classes, and other entities within the code. Excluding code reuse (i.e., using code written for one script in another) within similar projects, these identifiers are generally
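Definitions 1–3 can be sketched in a few lines of Python. The choice of `difflib.SequenceMatcher` as the similarity function 𝑆 and the 0.8 threshold 𝐿 are illustrative assumptions; the definitions above deliberately leave both open.

```python
import difflib

MASK = "<MASK>"

def mask_token(tokens, target):
    """Definition 1: replace every token matching `target` with MASK."""
    return [MASK if t == target else t for t in tokens]

def exact_match(prediction, target):
    """Definition 2: the model's output is identical to the masked token."""
    return prediction == target

def partial_match(prediction, target, threshold=0.8):
    """Definition 3: similarity S(M(I), T) surpasses threshold L.
    SequenceMatcher's ratio is one possible choice of S."""
    sim = difflib.SequenceMatcher(None, prediction, target).ratio()
    return sim >= threshold

tokens = ["def", "add_variables", "(", "x", ",", "y", ")", ":"]
print(mask_token(tokens, "add_variables"))
print(exact_match("add_variables", "add_variables"))
print(partial_match("adds two variables", "add two variables"))
```

Exact matching would be applied to the syntactic identifiers discussed next, partial matching to semantic ones such as docstrings, where slightly different wordings should still count as evidence of memorization.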
unique to their codebases. In fact, Feitelson et al. [27] have shown that there is only a 6.9% probability that two developers choose the same name for the same variable. Therefore, given the uniqueness of such identifiers, we look for exact matches between the masked tokens in the original script and the model's outputs. Specifically, we extract the following identifiers from each script:

(1) Variable names
(2) Function names
(3) Class names

• Semantic identifiers are expressions that are either used to explain the logic of code in human-readable, natural language terms (i.e., documentation) or contain natural language elements (e.g., strings). Documentation serves to clarify code for future reference or other developers, while strings are a specific data type used within the code. As Aghajani et al. [6] show, developers have different standards for "what" to document and "how" to convey the underlying information. They report that different developers apply their own terminology and style based on experience, individual style, and the codebase's context. Therefore, these identifiers are not strictly uniform between different projects developed by different developers. As such, we look for partial matches between the original script's identifiers and the model's outputs, given the individual and contextual nature of how developers use both documentation and strings.

(4) Strings: data structures that are used to handle textual data. For example, in Listing 1 the phrase "Hello World!" is a string.
(5) Statement-level documentation (e.g., comments): explanations of single or multiple statements' functionalities.
(6) Method/Class-level documentation (e.g., Docstrings, Javadocs): explanations of a method's or class's functionality. Some programming languages like Python or Lisp follow specific conventions and syntax for separating method/class-level documentation from statement-level documentation [1, 2], while other languages such as C or Java only establish a style guide for how they should be written and do not provide a separate syntax for them. Regardless of the programming language, established best practices require developers to thoroughly document the inputs, operations, and outputs of a method/class [5], and such documentation is longer and more detailed than statement-level documentation [31]. Therefore, we categorize method/class-level documentation in a different group than statement-level documentation.

From here on, we use the word "element" to denote a single extracted syntactic/semantic identifier. Outside of these two identifier groups and their constituent elements, what remains in the code is either related to the programming language's syntax or operations that execute the logic of the code. Finally, we define the prefix and suffix of an element as follows:

Definition 4 (prefix and suffix): Let 𝑆 be a script composed of a sequence of tokens [𝑡1, 𝑡2, ..., 𝑡𝑛] with a specific token 𝑇 masked. We consider all the tokens in 𝑆 that come before 𝑇 from 𝑡1 as prefix and all the tokens that immediately come after 𝑇 until 𝑡𝑛 as suffix.

To determine whether a script was included in the training data of a given model, we apply the Fill-In-the-Middle (FIM) technique which is commonly used in both pre-training and fine-tuning of LLMs [12]. This technique consists of breaking a script into three parts: prefix, masked element, and suffix. The model is then tasked to predict the masked element given the prefix and suffix as input. In the next section, we present an example of how an incoming script is processed for detecting dataset inclusion.

3.2 Motivating Example

In this section, we present how TraWiC differs from and improves the previous MIA approaches for code. Consider model 𝑀 to be a model trained on code with project 𝑃 that contains the script presented in Listing 1 (𝐶) being included
in 𝑀's training dataset. Detecting both 𝑃's and 𝐶's inclusion in 𝑀's training dataset using the previous approaches (exact memorization [70], name cloze [19]) would require the following steps:
• If the task is next token prediction using exact memorization or other similar approaches, then all scripts in 𝑃 would be broken down into multiple, separate parts. For each generated part, the model would be tasked with predicting only the next token.
• If the task is name cloze detection or other similar approaches, each script in 𝑃 would be broken down into multiple prompts, and for each prompt, the model would be tasked with predicting the masked token.

Both approaches introduce multiple points of failure for inclusion detection and require a large number of costly inference calls on the LLM. We further describe the details in the rest of this section.

3.2.1 Next token prediction. As an example, let us consider the exact memorization approach. Here, we would be required to break down each script in 𝑃 into 𝑛 different parts, with 𝑛 being the number of tokens in the script, and call the model 𝑛 times in order to get the model's predictions for each token. After collecting the outputs of 𝑀 for each input, one needs to check for exact matches for each of the predicted tokens. While this approach might be useful for models that are trained only on non-code textual inputs (e.g., blogs, books, forums, etc.), it is not well suited to code, especially for semantic elements, given how developers follow different styles for writing them [6]. Furthermore, given the number of parameters of the LLM under study, inference on 𝑀 can become extremely costly and time-consuming.
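The per-token cost of both baseline approaches can be made concrete with a small sketch. This is an illustration under simplifying assumptions (the function names are ours, and tokenization is left abstract; real approaches use the model's own tokenizer):

```python
def next_token_tasks(tokens):
    # Exact-memorization style: one inference call per token, each
    # predicting token i from the preceding context only. Context that
    # comes after the target token is discarded.
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

def name_cloze_prompts(tokens, mask="<MASK>"):
    # Name-cloze style: one prompt per token, with the target replaced
    # by a mask; context after the target is preserved in the prompt.
    return [(tokens[:i] + [mask] + tokens[i + 1:], tokens[i])
            for i in range(len(tokens))]

script = ["dummy_variable", "=", "input_string"]
print(len(next_token_tasks(script)))    # one call per token after the first
print(len(name_cloze_prompts(script)))  # one call per token
```

Either way, a script of 𝑛 tokens costs on the order of 𝑛 inference calls, which is what TraWiC's element-level masking avoids.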
Following the running example in Listing 1, consider the docstring for the function add_variables: if 𝑀 generates outputs that differ from the target tokens, the exact memorization approach would return a negative response. However, 𝑀 has predicted a token for a semantic identifier, which is closer to natural language, can be worded in similar ways without breaking the code, and is close to what was included in the original script. Furthermore, employing next token prediction results in losing all the context that comes after the token that is to be predicted by 𝑀. Therefore, for tokens that are at the beginning or near the middle of the code, the input to the model will not contain the entirety of the code; 𝑀 will thus be prone to generating tokens that are not similar to what is present in the training data, resulting in false negatives.

3.2.2 Name cloze prediction. Using name cloze prediction, similar to exact memorization, would entail breaking down the code into 𝑛 parts, constructing 𝑛 prompts, and calling the model on each prompt. While this approach can be useful for detecting rare names in the model and provides context about what comes after the target token prediction, it is based on an instruct prompt and is therefore sensitive to how the prompt is designed. As shown by Srivastava et al. [69], LLMs are sensitive to changes in prompts. Such sensitivity may cause the model to output false negatives or positives. Furthermore, similar to the exact memorization approach, name cloze approaches check for exact matches between what the model predicts and the target token, and therefore face the same challenge of producing false negatives when 𝑀's outputs differ slightly in wording compared to what was originally observed during training.

Majdinasab et al.
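The slight-wording-difference problem can be illustrated with an edit-distance check of the kind used in TraWiC's comparison step (Section 3.4.3). This is a minimal sketch: the helper names are ours, and the threshold of 60 is illustrative rather than the empirically tuned value used in the paper:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def similarity(a, b):
    """Normalized similarity score in [0, 100]."""
    if not a and not b:
        return 100.0
    return 100.0 * (1 - levenshtein(a, b) / max(len(a), len(b)))

original  = "This function prints the input string."
generated = "This function prints the given input string."
# An exact-match check fails on this pair, but a thresholded
# similarity comparison still counts it as a hit.
print(similarity(original, generated) >= 60)
```

A fuzzy comparison of this kind tolerates exactly the small rewordings of semantic identifiers that push exact-match approaches toward false negatives.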
In comparison, TraWiC benefits from the best aspects of both approaches. Specifically, TraWiC is not dependent on the prompt, similar to exact matching approaches, while providing the context of the entire code to the model, similar to name cloze approaches. Furthermore, TraWiC introduces a new method for minimizing false negatives as much as possible during the checking process, and it performs fewer calls on the LLM, making it more efficient. Following the running example presented in Listing 1, TraWiC requires only 13 calls to the LLM, in contrast to the 82 calls of the previous approaches (one call for each token). Specifically, by looking for exact matches for parts of the input that are required to be exactly the same (syntactic elements) and fuzzy matching for parts of the input that can differ (semantic elements), we check the model's outputs for code inclusion on 6 different levels. Furthermore, previous approaches rely solely on the number of matches, without comparing the obtained results with previous observations. Therefore, by using a separate classifier trained on previous observations, we add another level of accuracy compared to exact matching of models' outputs, by identifying patterns and variations that may not be captured through exact matching alone. As a result, TraWiC is much more efficient, accurate, and suited for detecting code inclusion in a model's training dataset compared to the previously proposed approaches.

3.3 End-to-End Data Processing Example

In this section, we present an example of how TraWiC processes the scripts in a project for dataset inclusion detection. Listing 1 shows an example of a Python script. This script consists of 2 functions, 3 variable declarations, and corresponding docstrings and comments inside each function. Listing 2 displays all the syntactic and semantic identifiers extracted from Listing 1. Following the example in Listing 1, Listings 3 and 4 show the constructed prefix and suffix for the variable dummy_variable, respectively. Note that we keep all the information from the original script and only mask the element itself. This process is repeated for every semantic and syntactic element extracted from the input script.
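The masking procedure just described can be sketched as follows. This is a simplified illustration under our own assumptions: the function name is hypothetical, and it locates each element by plain string search over its first occurrence, whereas TraWiC extracts elements with proper parsing:

```python
def make_fim_pairs(script, elements):
    """For each extracted element, mask its first occurrence and return
    (prefix, suffix, masked_element) tuples. Everything else in the
    script is kept intact, as in Listings 3 and 4."""
    pairs = []
    for element in elements:
        idx = script.find(element)
        if idx == -1:
            continue  # element not found; skip it
        prefix = script[:idx]
        suffix = script[idx + len(element):]
        pairs.append((prefix, suffix, element))
    return pairs

script = (
    "def print_input_string(input_string):\n"
    "    # store the input string in a variable\n"
    "    dummy_variable = input_string\n"
    "    print(input_string)\n"
)
pairs = make_fim_pairs(script, ["dummy_variable"])
prefix, suffix, masked = pairs[0]
# The prefix ends right before the masked variable and the suffix
# starts right after it.
assert masked == "dummy_variable"
assert suffix.startswith(" = input_string")
```

One (prefix, suffix) pair per extracted element is what keeps the number of LLM calls proportional to the number of identifiers rather than the number of tokens.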
By breaking each script into multiple (prefix, suffix) pairs, we aim to leverage the memorization capability of LLMs. More specifically, as reported by Carlini et al. [17], as the capacity (i.e., number of parameters) of a language model increases, so does its ability to memorize its training dataset instead of generalizing. Therefore, by using a large part of a script that was present in the model's training dataset as input, and only masking a small piece (i.e., a single element) to be used with the FIM technique as defined in Section 3.1, the model would likely be pushed in the direction
of generating an output that is the same as, or highly similar to, the original masked token if the script was included in its training dataset [70].

Once we have collected all the model's outputs for the extracted elements of a script, we look for the degree of similarity between the generated outputs and the original masked elements to determine dataset inclusion, which we explain in detail in Section 3.4.3. After obtaining the results of the similarity comparison, a classification model predicts whether a project was included in a model's training dataset, given the results of the similarity comparison as input.

3.4 Methodology

In this section, we explain in detail how TraWiC can be used for the task of dataset inclusion detection. TraWiC's pipeline consists of four stages: pre-processing, inference, comparison, and classification. Figure 1 provides an overview of our approach.

Fig. 1. TraWiC's dataset inclusion detection pipeline

3.4.1 Pre-processing. As stated in Section 3.1, we break each script into its six constituent elements (i.e., syntactic/semantic identifiers). From each script, we extract the elements, and for each extracted element, we break down the script into two parts: prefix and suffix. By creating a prefix and suffix pair for each extracted element, we will be able to query the model to predict what comes in between the prefix and suffix. Algorithm 1 describes this pre-processing step in detail.

Listing 1. An example of a Python script and its extracted identifiers.

    def print_input_string(input_string):
        """
        This function prints the input string.
        Args:
            input_string (str): the string to be printed
        """
        # store the input string in a variable
        dummy_variable = input_string
        # print the input string
        print(input_string)

    def add_variable(a, b):
        """
        This function adds two variables.
        Args:
            a (float): first variable
            b (float): second variable
        Returns:
            float: result of the addition
        """
        # add the two variables and store the result in a new variable
        c = a + b
        # return the result
        return c

    print_input_string("hello World!")

    result = add_variables(a=1, b=2)

Listing 2. JSON representation of extracted identifiers and documentation.

    {
        "Variable names": ["dummy_variable", "a", "b", "c", "result"],
        "Function names": ["print_input_string", "add_variables"],
        "Class names": [],
        "Strings": ["'Hello World!'"],
        "Statement-level documentation": [
            "store the input string in a variable",
            "print the variable",
            "add the two variables and store the result in a new variable",
            "return the result"
        ],
        "Method/Class-level documentation": [
            "This function prints the input string.
             Args:
                input_string (str): the string to be printed",
            "This function adds two variables.
             Args:
                a (float): first variable
                b (float): second variable
             Returns:
                float: result of the addition"
        ]
    }

Algorithm 1: Pre-processing

    Input: SCRIPT;           /* SCRIPT is an entire script from a project. A project can contain many scripts. */
    Output: ProcessedScript; /* A list containing all the (prefix, suffix) tuples generated for each element from the input script. */

    // Extract all syntactic and semantic identifiers from SCRIPT
    identifiers ← ExtractIdentifiers(SCRIPT);
    // Generate a prefix and suffix tuple for each identifier
    foreach i ∈ identifiers do
        (p_i, s_i) ← GeneratePrefixAndSuffix(i);
        store (p_i, s_i, i) in ProcessedScript;
    end
    return ProcessedScript;

3.4.2 Inference. After breaking the script into (prefix, suffix, masked element) tuples as shown in Algorithm 1, the given model is queried to predict the masked element, given the prefix and suffix as input. From this tuple, the prefix and suffix are used for inference on the model, while the masked element is used in the next step for comparing the model's outputs to the original masked element. This process is repeated for all of the tuples extracted from a script. Algorithm 2 describes this process. Once we have collected all the model's predictions for each masked element, we move to the next step, in which we compare the model's predictions with the original elements.

Algorithm 2: Inference

    Input: ProcessedScript;   /* Output of Algorithm 1. */
    Output: ModelPredictions; /* A list containing all of the given model's predictions for the masked elements. */

    // Predict the masked element given the prefix and suffix of the corresponding element as input
    foreach (p_i, s_i, i) ∈ ProcessedScript do
        o_i ← Predict(p_i, s_i);
        store (o_i, i, type_i) in ModelPredictions;
    end
    return ModelPredictions;

Listing 3. Prefix generated for variable dummy_variable

    def print_input_string(input_string):
        """
        This function prints the input string.
        Args:
            input_string (str): the string to be printed
        """
        # store the input string in a variable

Listing 4. Suffix generated for variable dummy_variable

    = input_string
        # print the input string
        print(input_string)

    def add_variable(a, b):
        """
        This function adds two variables.
        Args:
            a (float): first variable
            b (float): second variable
        Returns:
            float: result of the addition
        """
        # add the two variables and store the result in a new variable
        c = a + b
        # return the result
        return c

    print_input_string("hello World!")

    result = add_variables(a=1, b=2)

3.4.3 Comparison. Given the outputs of the model for each element, we define two different comparison criteria depending on the element's type:
• For syntactic identifiers (variable/function/class names), we look for an exact match. If the model generates the exact same token(s), we count it as a "hit", a binary flag that indicates whether the generated output matches the desired output.
• For semantic identifiers (strings/documentation), we assess the similarity between two sequences (series of tokens) using the Levenshtein distance (edit distance) score [43]. This metric measures the number of single-character edits required to change one sequence into the other. These elements are closer to natural language and are not bound by the syntactic restrictions of code (incorrect/inaccurate generations will not result in syntax errors). Therefore, we compare the generated output with the original element and calculate the edit distance between the two. Here, we define a threshold, and edit distance scores higher than this threshold are considered hits. The final threshold was chosen empirically to provide a balance between semantic similarity and performance gains. We present the results with different thresholds in Section 5.
The other metrics that can be used for this objective are based on semantic similarity, where a semantic score is calculated between the vector representations of two strings (e.g., cosine similarity, TF-IDF distance, etc.) [74]. We choose the edit distance metric because we do not aim to inspect whether the model's output is similar to the original masked element in meaning, but rather in syntactic structure. Therefore, analyzing the semantic distance would not be helpful for this task.

After processing the model's outputs as described above, we normalize the hit counts by dividing the number of hits by the total number of checks for each type of identifier. Algorithm 3 shows the comparison process in detail. Table 1 shows a sample of the dataset that is constructed for dataset inclusion detection as an output of the comparison process for the example presented in Section 3.3.

Table 1. Data representation sample for the example presented in Section 3.3

    Script Name              | Class Hits | Function Hits | Variable Hits | String Hits | Comment Hits | Docstring Hits
    sample_project/sample.py | 0          | 1.0           | 0.08          | 0.4         | 0.66         | 0.0

3.4.4 Classification.
With our extracted features, the classifier's task is to detect whether a script was included in the LLM's training dataset, which makes this a binary classification problem. Any classification method, such as decision trees or Deep Neural Networks (DNNs), can be used based on the desired performance. We experiment with different classification methods as explained in Section 4.4.

Algorithm 3: Comparison

    Input: ModelPredictions; /* Output of Algorithm 2. */
    Output: Hits

    // Function to update hits based on the result
    Function UpdateHits(Hits, key, result)
        Hits[key] ← Hits[key] + result;

    // Initialize Hits as a dictionary for each identifier type
    Hits ← {VariableHits: 0, FunctionHits: 0, ClassHits: 0, StringHits: 0, CommentHits: 0, DocStringHits: 0};

    // Compare the given model's outputs (o_i) to the masked element (i) depending on its type
    // (i.e., variable/function/class name or string/comment/docstring)
    foreach (o_i, i, type_i) ∈ ModelPredictions do
        if type_i is a variable/function/class name then
            if o_i = i then
                result ← 1;
            else
                result ← 0;
            end
        end
        if type_i is a string/comment/docstring then
            if SemanticComparison(o_i, i) ≥ SemanticThreshold then
                result ← 1;
            else
                result ← 0;
            end
        end
        UpdateHits(Hits, type_i, result);
    end
    return Hits;

Algorithm 4: Classification

    Input: Hits; /* Output of Algorithm 3. */
    Output: ClassificationResults; /* A binary variable, with 1 indicating inclusion in the dataset and 0 otherwise. */

    // Predict dataset inclusion
    ClassificationResults ← ClassificationModel(hitType, hitCount);
    return ClassificationResults;

4 EXPERIMENTAL DESIGN

In this section, we explain our experimental design, including the construction of the dataset used as the ground truth for validating our approach, the LLMs under study, the classifier used for detecting dataset inclusion, and the details of TraWiC's performance comparison with a state-of-the-art CCD approach. Specifically, we aim to answer the following Research Questions (RQs):

• RQ1 [Effectiveness]
  – RQ1a: What is TraWiC's performance on the dataset inclusion detection task?
  – RQ1b: How does TraWiC compare against traditional CCD approaches for dataset inclusion detection?
  – RQ1c: What is the effect of using different classification methods on TraWiC's performance?
• RQ2 [Sensitivity Analysis]
  – RQ2a: How robust is TraWiC against data obfuscation techniques?
  – RQ2b: What is the importance of each feature in detecting dataset inclusion?
• RQ3 [Error Analysis] How does the model under study make mistakes in predicting the masked element?

4.1 Models Under Study

For evaluating our approach, without loss of generality, we have selected three distinct LLMs: SantaCoder [9], Mistral 7B [38], and Llama-2 [71]. We expand on each model in the following.

4.1.1 SantaCoder. SantaCoder is an LLM trained by HuggingFace on The Stack for program synthesis, and it supports the Python, Java, and JavaScript programming languages. This model was chosen as one of the LLMs under study for the following reasons:
• The Stack is publicly available. Therefore, we can inspect the data and confirm the presence of a script in SantaCoder's training dataset.
• SantaCoder's data cleaning procedure is clearly explained and replicable, with different cleaning criteria being applied [9]. To clean the data for SantaCoder's training dataset, its developers have used: 1) "GitHub stars", which filters repositories based on the number of their stars; 2) "comment-to-code ratio", which considers a script's inclusion based on the ratio of comment characters to code characters; and 3) "more near-deduplication", which