ducted a comprehensive analysis of these skills using SkillScanner. SkillScanner effectively identified 1,328 violations among 786 skills. 694 of them are about privacy non-compliance and 298 of them violate content guidelines imposed by the Amazon Alexa platform. We found that 32% of policy violations are introduced through code copy and paste. The policy violations in Alexa's official accounts led to 81 policy violations in other skills. As our future work, we plan to conduct user studies to evaluate the usability (e.g., acceptance and user-friendliness) of SkillScanner by skill developers.

ACKNOWLEDGMENT

The work of L. Cheng is supported by National Science Foundation (NSF) under the Grant No. 2239605, 2228616 and 2114920. The work of H. Hu is supported by NSF under the Grant No. 2228617, 2120369, 2129164, and 2114982. The work of L. Guo is supported by NSF under grant IIS-1949640, CNS-2008049, and CCF-2312616.

SkillScanner: Detecting Policy-Violating Voice Applications Through Static Analysis at the Development Phase. Song Liao, Long Cheng, Haipeng Cai, Linke Guo, and Hongxin Hu. CCS '23, November 26-30, 2023, Copenhagen, Denmark.

Privacy Violations
- Data collection/storage but missing a privacy policy. Policy: "Provide a legally adequate privacy notice that will be displayed to end users on your skill's detail page." Data sources: Output (240), Permission (94), Database (126), Disclosure to Alexa (4).
- Data collection/storage but having an incomplete privacy policy. Policy: "Ensure that your collection and use of that information complies with your privacy notice and all applicable laws." Data sources: Output (19), Permission (23), Database (10), Disclosure to Alexa (19).
- Over-privileged data requests. Policy: "Collect and use the data only if it is required to support and improve the features and services your skill provides." Data source: Permission (42).
- Not-asked permission usage (Bug). Data source: Permission (18).
- Incorrect data collection disclosure to Alexa. Policy: wrong answer to "Does this Alexa skill collect users' personal information?" Data source: Disclosure to Alexa (99).

Violations of Content Guidelines
- Content safety: contains excessive profanity. Data source: Output (1).
- Asking for positive rating: cannot explicitly request that users leave a positive rating of the skill. Data source: Description (1).
- Invocation name requirements: does not adhere to Amazon Invocation Name Requirements; please consult these requirements to ensure your skill is compliant with our invocation name policies. Data source: Invocation name (281).
- Kid category policy: it doesn't collect any personal information from end users. Data source: Output (1).
- Health category policy: a skill that provides health-related information, news, facts or tips must include a disclaimer in the skill description stating that the skill is not a substitute for professional medical advice. Data source: Description (13).

Code Inconsistency
- Data collection request but missing a slot (Bug). Data source: Output (124).
- Data collection slot but missing a request (Vulnerability). Data source: Intent (147).
- Data collection slot but missing an utterance (Bug). Data source: Slot (24).
- Intent but missing an utterance (Bug). Data source: Intent (40).

Table 9: Policy violations and code inconsistency in skill code and the detailed policies they violate. Counts are the numbers of skills with each violation, broken down by data source. We have removed the false positives through our manual verification.
2309.08474 ffi VulnSense: E cient Vulnerability Detection in Ethereum Smart Contracts by Multimodal Learning with Graph Neural Network and Language Model PhanTheDuya,b,NghiHoangKhoaa,b,NguyenHuuQuyena,b,LeCongTrinha,b,VuTrungKiena,b,TrinhMinhHoanga,b, Van-HauPhama,b aInformationSecurityLaboratory,UniversityofInformationTechnology,HoChiMinhcity,Vietnam bVietnamNationalUniversityHoChiMinhCity,HochiminhCity,Vietnam Abstract ThispaperpresentsVulnSenseframework, acomprehensiveapproachtoefficientlydetectvulnerabilitiesinEthereumsmartcon- tractsusingamultimodallearningapproachongraph-basedandnaturallanguageprocessing(NLP)models. Ourproposedframe- work combines three types of features from smart contracts comprising source code, opcode sequences, and control flow graph (CFG)extractedfrombytecode.WeemployBidirectionalEncoderRepresentationsfromTransformers(BERT),BidirectionalLong Short-TermMemory(BiLSTM)andGraphNeuralNetwork(GNN)modelstoextractandanalyzethesefeatures. Thefinallayerof ourmultimodalapproachconsistsofafullyconnectedlayerusedtopredictvulnerabilitiesinEthereumsmartcontracts. Address- inglimitationsofexistingvulnerabilitydetectionmethodsrelyingonsingle-featureorsingle-modeldeeplearningtechniques,our methodsurpassesaccuracyandeffectivenessconstraints. WeassessVulnSenseusingacollectionof1.769smartcontractsderived from the combination of three datasets: Curated, SolidiFI-Benchmark, and Smartbugs Wild. We then make a comparison with variousunimodalandmultimodallearningtechniquescontributedbyGNN,BiLSTMandBERTarchitectures. Theexperimental outcomes demonstrate the superior performance of our proposed approach, achieving an average accuracy of 77.96% across all threecategoriesofvulnerablesmartcontracts. Keywords: VulnerabilityDetection,SmartContract,DeepLearning,GraphNeuralNetworks,Multimodal 1. 
Introduction Blockchainecosystemenablesattackerstoexploitflawedsmart contracts,therebyaffectingtheassetsofindividuals,organiza- TheBlockchainkeywordhasbecomeincreasinglymorepop- tions,aswellasthestabilitywithintheBlockchainecosystem. ular in the era of Industry 4.0 with many applications for a In more detail, the DAO attack [3] presented by Mehar et variety of purposes, both good and bad. For instance, in the al. is a clear example of the severity of the vulnerabilities, as fieldoffinance,Blockchainisutilizedtocreatenew,faster,and itresultedinsignificantlossesupto$50million. Tosolvethe moresecurepaymentsystems,examplesofwhichincludeBit- issues,Kushwahaetal. [4]conductedaresearchsurveyonthe coinandEthereum.However,Blockchaincanalsobeexploited differenttypesofvulnerabilitiesinsmartcontractsandprovided for money laundering, as it enables anonymous money trans- an overview of the existing tools for detecting and analyzing fers, as exemplified by cases like Silk Road [1]. The number thesevulnerabilities.Developershavecreatedanumberoftools ofkeywordsassociatedwithblockchainisgrowingrapidly,re- todetectvulnerabilitiesinsmartcontractssourcecode,suchas flecting the increasing interest in this technology. A typical Oyente [5], Slither [4], Conkas [6], Mythril [7], Securify [8], example is the smart contract deployed on Ethereum. Smart etc.Thesetoolsusestaticanddynamicanalysisforseekingvul- contractsareprogrammedinSolidity,whichisanewlanguage nerabilities,buttheymaynotcoverallexecutionpaths,leading that has been developed in recent years. When deployed on a to false negatives. Additionally, exploring all execution paths blockchain system, smart contracts often execute transactions in complex smart contracts can be time-consuming. Current related to cryptocurrency, specifically the ether (ETH) token. 
endeavorsincontractsecurityanalysisheavilydependonpre- However, the smart contracts still have many vulnerabilities, definedrulesestablishedbyspecialists,aprocessthatdemands which have been pointed out by Zou et al. [2]. Alongside the significantlaborandlacksscalability. immutable and transparent properties of Blockchain, the pres- Meanwhile,theemergenceofMachineLearning(ML)meth- ence of vulnerabilities in smart contracts deployed within the odsinthedetectionofvulnerabilitiesinsoftwarehasalsobeen explored. Thisisalsoapplicabletosmartcontracts,wherenu- merous tools and research are developed to identify security Emailaddresses:duypt@uit.edu.vn(PhanTheDuy), bugs, such as ESCORT by Lutz [9], ContractWard by Wang khoanh@uit.edu.vn(NghiHoangKhoa),quyennh@uit.edu.vn(Nguyen HuuQuyen),19522404@gm.uit.edu.vn(LeCongTrinh), [10]andQian[11]. TheML-basedmethodshavesignificantly 19521722@gm.uit.edu.vn(VuTrungKien),19521548@gm.uit.edu.vn improvedperformanceoverstaticanddynamicanalysismeth- (TrinhMinhHoang),haupv@uit.edu.vn(Van-HauPham) ods,asindicatedinthestudybyJiang[12]. PreprintsubmittedtoKnowledge-BasedSystems September18,2023 3202 peS 51 ]RC.sc[ 1v47480.9032:viXraHowever,thecurrentstudiesdoexhibitcertainlimitations, coderevealslow-levelexecutiondetails,andopcodesequences primarilycenteredaroundtheutilizationofonlyasingulartype capturetheexecutionflow. Byfusingthesefeatures,themodel of feature from the smart contract as the input for ML mod- canextractarichersetoffeatures,potentiallyleadingtomore
els. To elaborate, it is noteworthy that a smart contract’s rep- accuratedetectionofvulnerabilities. Themaincontributionsof resentationandsubsequentanalysiscanbeapproachedthrough thispaperaresummarizedasfollows: itssourcecode,employingtechniquessuchasNLP,asdemon- • First, we propose a multimodal learning approach con- stratedinthestudyconductedbyKhodadadietal. [13]. Con- sistingofBERT,BiLSTMandGNNtoanalyzethesmart versely, an alternative approach, as showcased by Chen et al. contractundermulti-viewstrategybyleveragingtheca- [14], involves the usage of the runtime bytecode of a smart pability of NLP algorithms, corresponding to three type contract published on the Ethereum blockchain. Additionally, of features, including source code, opcodes, and CFG Wangandcolleagues[10]addressedvulnerabilitydetectionus- generatedfrombytecode. ingopcodesextractedthroughtheemploymentoftheSolctool [15] (Solidity compiler), based on either the contract’s source • Then,weextractandleveragethreetypesoffeaturesfrom codeorbytecode. smartcontractstomakeacomprehensivefeaturefusion. Inpracticalterms,thesemethodologiesfallunderthecate- Morespecifics,oursmartcontractrepresentationswhich gorizationofunimodalormonomodalmodels,designedtoex- are created from real-world smart contract datasets, in- clusively handle one distinct type of data feature. Extensively cludingSmartbugsCurated,SolidiFI-BenchmarkandSmart- investigatedandprovenbeneficialindomainssuchascomputer bugsWild, canhelpmodeltocapturesemanticrelation- vision,naturallanguageprocessing,andnetworksecurity,these shipsofcharacteristicsinthephaseofanalysis. unimodalmodelsdoexhibitimpressiveperformancecharacter- istics. 
However, their inherent drawback lies in their limited • Finally,weevaluatetheperformanceofVulnSenseframe- perspective, resulting from their exclusive focus on singular workonthereal-worldvulnerablesmartcontractstoin- dataattributes,whichlackthepotentialcharacteristicsformore dicatethecapabilityofdetectingsecuritydefectssuchas in-depthanalysis. Reentrancy,Arithmeticonsmartcontracts. Additionally, Thislimitationhaspromptedtheemergenceofmultimodal wealsocompareourframeworkwithaunimodalmodels models,whichofferamorecomprehensiveoutlookondataob- and other multimodal ones to prove the superior effec- jects. The works of Jabeen and colleagues [16], Tadas et al. tivenessofVulnSense. [17],Nametal.[18],andXu[19]underscorethistrend.Specif- The remaining sections of this article are constructed as fol- ically,multimodallearningharnessesdistinctMLmodels,each lows. Section 3 introduces some related works in adversar- accommodating diverse input types extracted from an object. ial attacks and countermeasures. The section 2 gives a brief Thisapproachfacilitatestheacquisitionofholisticandintricate backgroundofappliedcomponents. Next,thethreatmodeland representationsoftheobject,aconcertedefforttosurmountthe methodologyarediscussedinsection4.Section5describesthe limitations posed by unimodal models. By leveraging multi- experimental settings and scenarios with the result analysis of ple input sources, multimodal models endeavor to enrich the ourwork. Finally,weconcludethepaperinsection6. understanding of the analyzed data objects, resulting in more comprehensiveandaccurateoutcomes. Recently,multimodalvulnerabilitydetectionmodelsforsmart 2. Background contractshaveemergedasanewresearcharea,combiningdif- 2.1. BytecodeofSmartContracts ferenttechniquestoprocessdiversedata,includingsourcecode, Bytecode is a sequence of hexadecimal machine instruc- bytecode and opcodes, to enhance the accuracy and reliability tions generated from high-level programming languages such of AI systems. 
Numerous studies have demonstrated the ef- as C/C++, Python, and similarly, Solidity. In the context of fectiveness of using multimodal deep learning models to de- deploying smart contracts using Solidity, bytecode serves as tect vulnerabilities in smart contracts. For instance, Yang et the compiled version of the smart contract’s source code and al. [20]proposedamultimodalAImodelthatcombinessource isexecutedontheblockchainenvironment. Bytecodeencapsu- code, bytecode, and execution traces to detect vulnerabilities latestheactionsthatasmartcontractcanperform. Itcontains in smart contracts with high accuracy. Chen et al. [13] pro- statementsandnecessaryinformationtoexecutethecontract’s posed a new hybrid multimodal model called HyMo Frame- functionalities. Bytecode is commonly derived from Solidity work, which combines static and dynamic analysis techniques orotherlanguagesusedinsmartcontractdevelopment. When to detect vulnerabilities in smart contracts. Their framework deployedonEthereum,bytecodeiscategorizedintotwotypes: usesmultiplemethodsandoutperformsothermethodsonsev- creationbytecodeandruntimebytecode. eraltestdatasets. Recognizingthatthesefeaturesaccuratelyreflectsmartcon- 1. Creation Bytecode: The creation bytecode runs only tracts and the potential of multimodal learning, we employ a once during the deployment of the smart contract onto multimodalapproachtobuildavulnerabilitydetectiontoolfor thesystem. Itisresponsibleforinitializingthecontract’s smart contracts called VulnSense. Different features can pro- initialstate,includinginitializingvariablesandconstruc- videuniqueinsightsintovulnerabilitiesonsmartcontracts.Source tor functions. Creation bytecode does not reside within
code offers a high-level understanding of contract logic, byte- thedeployedsmartcontractontheblockchainnetwork. 22. RuntimeBytecode:Runtimebytecodecontainsexecutable Furthermore,CFGsupportstheoptimizationofSoliditysource informationaboutthesmartcontractandisdeployedonto code. By analyzing and understanding the control flow struc- theblockchainnetwork. ture, we can propose performance and correctness improve- ments for the Solidity program. This is particularly crucial in Onceasmartcontracthasbeencompiledintobytecode,itcan thedevelopmentofsmartcontractsontheEthereumplatform, bedeployedontotheblockchainandexecutedbynodeswithin whereperformanceandsecurityplayessentialroles. thenetwork. Nodesexecutethebytecode’sstatementstodeter- Inconclusion,CFGisapowerfulrepresentationthatallows minethebehaviorandinteractionsofthesmartcontract. ustoanalyze,understand,andoptimizethecontrolflowinSo- Bytecodeishighlydeterministicandremainsimmutableaf- lidityprograms. Byconstructingcontrolflowgraphsandana- tercompilation. Itprovidesparticipantsintheblockchainnet- lyzingthecontrolflowstructure,wecanidentifyerrors,verify work the ability to inspect and verify smart contracts before correctness,andoptimizeSoliditysourcecodetoensureperfor- deployment. manceandsecurity. Insummary,bytecodeservesasabridgebetweenhigh-level programming languages and the blockchain environment, en- abling smart contracts to be deployed and executed. Its deter- 3. Relatedwork ministic nature and pre-deployment verifiability contribute to This section will review existing works on smart contract thesecurityandreliabilityofsmartcontractimplementations. vulnerabilitydetection,includingconventionalmethods,single learningmodelandmultimodallearningapproaches. 2.2. OpcodeofSmartContracts Opcodeinsmartcontractsreferstotheexecutablemachine 3.1. Staticanddynamicmethod instructions used in a blockchain environment to perform the There are many efforts in vulnerability detection in smart functionsofthesmartcontract. 
Opcodesarelow-levelmachine contractsthroughbothstaticanddynamicanalysis.Thesetech- commandsusedtocontroltheexecutionprocessofthecontract niques are essential for scrutinizing both the source code and onablockchainvirtualmachine,suchastheEthereumVirtual theexecutionprocessofsmartcontractstouncoversyntaxand Machine(EVM). logic errors, including assessments of input variable validity Eachopcoderepresentsaspecifictaskwithinthesmartcon- and string length constraints. Dynamic analysis evaluates the tract,includinglogicaloperations,arithmeticcalculations,mem- controlflowduringsmartcontractexecution,aimingtounearth orymanagement,dataaccess,callingandinteractingwithother potential security flaws. In contrast, static analysis employs contracts in the Blockchain network, and various other tasks. approaches such as symbolic execution and tainting analysis. Opcodes define the actions that a smart contract can perform Taintanalysis,specifically,identifiesinstancesofinjectionvul- and specify how the contract’s data and state are processed. nerabilitieswithinthesourcecode. Theseopcodesarelistedanddefinedinthebytecoderepresen- Recentresearchstudieshaveprioritizedcontrolflowanaly- tationofthesmartcontract. sisastheprimaryapproachforsmartcontractvulnerabilityde- Theuseofopcodesprovidesflexibilityandstandardization tection. Notably,Kushwahaetal. [21]havecompiledanarray inimplementingthefunctionalitiesofsmartcontracts.Opcodes of tools that harness both static analysis techniques—such as ensureconsistencyandsecurityduringtheexecutionofthecon- thoseinvolvingsourcecodeandbytecode—anddynamicanal- tractontheblockchain,andplayasignificantroleindetermin- ysis techniques via control flow scrutiny during contract exe- ingthebehaviorandlogicofthesmartcontract. cution. AprominentexampleofstaticanalysisisOyente[22], a tool dedicated to smart contract examination. Oyente em- 2.3. 
ControlFlowGraph ploys control flow analysis and static checks to detect vulner- CFG is a powerful data structure in the analysis of Solid- abilitieslikeReentrancyattacks,faultytokenissuance,integer ity source code, used to understand and optimize the control overflows, and authentication errors. Similarly, Slither [23], a flowofaprogramextractedfromthebytecodeofasmartcon- dynamicanalysistool,utilizescontrolflowanalysisduringexe- tract. The CFG helps determine the structure and interactions cutiontopinpointsecurityvulnerabilities,encompassingReen- between code blocks in the program, providing crucial infor- trancy attacks, Token Issuance Bugs, Integer Overflows, and mation about how the program executes and links elements in Authentication Errors. It also adeptly identifies concerns like thecontrolflow. TransactionOrderDependence(TOD)andTimeDependence. Specifically, CFG identifies jump points and conditions in Beyond static and dynamic analysis, another approach in- Soliditybytecodetoconstructacontrolflowgraph. Thisgraph volvesfuzzytesting. Inthistechnique,inputstringsaregener- describesthebasicblocksandcontrolbranchesintheprogram, ated randomly or algorithmically to feed into smart contracts, therebycreatingaclearunderstandingofthestructureoftheSo- and their outcomes are verified for anomalies. Both Contract lidity program. With CFG, we can identify potential issues in Fuzzer[24]andxFuzz[25]pioneertheuseoffuzzingforsmart
theprogramsuchasinfiniteloops, incorrectconditions, orse- contractvulnerabilitydetection. ContractFuzzeremployscon- curityvulnerabilities.ByexaminingcontrolflowpathsinCFG, colic testing, a hybrid of dynamic and static analysis, to gen- we can detect logic errors or potential unwanted situations in erate test cases. Meanwhile, xFuzz leverages a genetic algo- theSolidityprogram. rithmtodeviserandomtestcases,subsequentlyapplyingthem tosmartcontractsforvulnerabilityassessment. 3Moreover,symbolicexecutionstandsasanadditionalmethod And most recently, Jie Wanqing and colleagues have pub- for in-depth analysis. By executing control flow paths, sym- lished a study [32] utilizing four attributes of smart contracts bolicexecutionallowsthegenerationofgeneralizedinputval- (SC), including source code, Static Single Assigment (SSA), ues,addressingchallengesassociatedwithrandomnessinfuzzing CFG, and bytecode. With these four attributes, they construct approaches. Thisapproachholdspotentialforovercominglim- three layers: SC, BB, and EVMB. Among these, the SC layer itationsandintricaciestiedtothecreationofarbitraryinputval- employs source code for attribute extraction using Word2Vec ues. and BERT, the BB layer uses SSA and CFG generated from However, the aforementioned methods often have low ac- the source code, and finally, the EVMB layer employs assem- curacyandarenotflexiblebetweenvulnerabilitiesastheyrely bly code and CFG derived from bytecode. Additionally, the on expert knowledge, fixed patterns, and are time-consuming authorscombinetheseclassesthroughvariousmethodsandun- and costly to implement. They also have limitations such as dergoseveraldistinctsteps. onlydetectingpre-definedfixedvulnerabilitiesandlackingthe These models yield promising results in terms of Accu- abilitytodetectnewvulnerabilities. racy, with HyMo [30] achieving approximately 0.79%, HY- DRA[31]surpassingitwitharound0.98%andthemultimodal 3.2. MachineLearningmethod AI of Jie et al. [32]. 
achieved high-performance results rang- ML methods often use features extracted from smart con- ingfrom0.94%to0.99%acrossvarioustestcases. Withthese tractsandemploysupervisedlearningmodelstodetectvulnera- achievedresults,thesestudieshavedemonstratedthepowerof bilities. Recentresearchhasindicatedthatresearchgroupspri- multimodal models compared to unimodal models in classify- marily rely on supervised learning. The common approaches ing objects with multiple attributes. However, the limitations usually utilize feature extraction methods to obtain CFG and within the scope of the paper are more constrained by imple- Abstract Syntax Tree (AST) through dynamic and static anal- mentation than design choices. They utilized word2vec that ysis tools on source code or bytecode. Th ese studies [26, lackssupportforout-of-vocabularywords.Toaddressthiscon- 27] used a sequential model of Graph Neural Network to pro- straint, they proposed substituting word2vec with the fastText cess opcodes and employed LSTM to handle the source code. NLPmodel. Subsequently,theirvulnerabilitydetectionframe- Besides, a team led by Nguyen Hoang has developed Mando work was modeled as a binary classification problem within a Guru[28],aGNNmodeltodetectvulnerabilitiesinsmartcon- supervisedlearningparadigm.Inthiswork,theirprimaryfocus tracts.TheirteamappliedadditionalmethodssuchasHeteroge- wasondeterminingwhetheracontractcontainsavulnerability neous Graph Neural Network, Coarse-Grained Detection, and ornot.Asubsequenttaskcoulddelveintoinvestigatingspecific Fine-GrainedDetection. Theyleveragedthecontrolflowgraph vulnerabilitytypesthroughdesiredmulti-classclassification. (CFG) and call graph (CG) of the smart contract to detect 7 Fromtheevaluationspresentedinthissection,wehaveiden- vulnerabilities. Theirapproachiscapableofdetectingmultiple tified the strengths and limitations of existing literature. It is vulnerabilities in a single smart contract. 
The results are represented as nodes and paths in the graph. Additionally, Zhang et al. [29] utilized ensemble learning to develop a 7-layer convolutional model that combines various neural network models, such as CNN, RNN, RCN, DNN, GRU, Bi-GRU, and Transformer, with each model assigned a different role in each layer of the model.

3.3. Multimodal Learning

The HyMo framework [30], introduced by Khodadadi and Tahmoresnezhad, is a multimodal deep learning model for smart contract vulnerability detection. It utilizes two attributes of smart contracts, namely source code and opcodes. After preprocessing these attributes, HyMo employs FastText for word embedding and two Bi-GRU models to extract features from the two attributes.

Another framework, HYDRA, proposed by Gibert et al. [31], utilizes three attributes, namely API calls, bytecode, and opcodes, as inputs to the three branches of a multimodal model for classifying malicious software. Each branch processes its attribute with basic neural networks; the outputs of the branches are then connected through fully connected layers and finally passed through a Softmax function to obtain the final result.

From these studies, it is evident that previous works have not fully optimized the utilization of smart contract data and lack the incorporation of a diverse range of deep learning models. While unimodal approaches have not adequately explored data diversity, multimodal ones have traded off construction time for classification focus, solely determining whether a smart contract is vulnerable or not.

In light of these insights, we propose a novel framework that leverages the advantages of three distinct deep learning models, namely BERT, GNN, and BiLSTM. Each model forms a separate branch, contributing to the creation of a unified architecture. Our approach adopts a multi-class classification task, aiming to collectively improve the effectiveness and diversity of vulnerability detection. By synergistically integrating these models, we strive to overcome the limitations of the existing literature and provide a more comprehensive solution.

4. Methodology

This section outlines our proposed approach for vulnerability detection in smart contracts. By employing multimodal learning, we generate a comprehensive view of a smart contract, which allows us to represent the contract with more relevant features and boost the effectiveness of the vulnerability detection model.

Figure 1: The overview of the VulnSense framework.

4.1. An overview of the architecture

Our proposed approach, VulnSense, is constructed upon a multimodal deep learning framework consisting of three branches, namely BERT, BiLSTM, and GNN, as illustrated in Figure 1. More specifically, the first branch is a BERT model, which is built upon the Transformer architecture and is employed to process the source code of the smart contract. Secondly, to handle and analyze the opcode context, a BiLSTM model is applied in the second branch. Lastly, a GNN model is utilized to represent the CFG of the bytecode of the smart contract.

This integrative methodology leverages the strengths of each component to comprehensively assess potential vulnerabilities within smart contracts. The fusion of linguistic, sequential, and structural information allows for a more thorough and insightful evaluation, thereby fortifying the security assessment process. This approach presents a robust foundation for identifying vulnerabilities in smart contracts and holds promise for significantly reducing risks in blockchain ecosystems.

4.2. Bidirectional Encoder Representations from Transformers (BERT)

To capture high-level semantic features from the source code and enable a more in-depth understanding of its functionality, we designed a BERT network as one branch of our multimodal model.

As shown in Figure 2, the BERT component consists of 3 blocks: a Preprocessor, an Encoder, and a neural network. More specifically, the Preprocessor processes the input, which is the source code of a smart contract. The input is transformed into vectors through an input embedding layer and then passes through a positional encoding layer that adds positional information to the words. Then, the preprocessed values are fed into the encoding block to compute the relationships between words. The entire encoding block consists of 12 identical encoding layers stacked on top of each other, and each encoding layer comprises two main parts: a self-attention layer and a feed-forward neural network. The encoder output forms a vector space of length 768. Subsequently, the encoded values are passed through a simple neural network; the resulting bert_output values constitute the output of this branch of the multimodal model. Thus, the whole BERT component can be written as:

preprocessed = positional_encoding(e(input))   (1)

encoded = Encoder(preprocessed)   (2)

bert_output = NN(encoded)   (3)

where (1), (2), and (3) represent the Preprocessor block, the Encoder block, and the neural network block, respectively.

Figure 2: The architecture of the BERT component in VulnSense.

4.3. Bidirectional long-short term memory (BiLSTM)

For the opcode, we applied a BiLSTM, which forms another branch of our multimodal approach, to analyze the contextual relations among opcodes and contribute crucial insights into the code's execution flow. By processing opcodes sequentially, we aim to capture potential vulnerabilities that might be overlooked when considering structural information alone.
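To make the Preprocessor step of Eq. (1) concrete, the following is a shape-level sketch. It uses the classic sinusoidal positional encoding from the original Transformer and a random embedding table; BERT proper learns its position embeddings and a pretrained vocabulary, so both choices here are illustrative assumptions rather than the trained VulnSense implementation.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Sinusoidal positional encoding (Vaswani et al.); an illustrative
    # stand-in for BERT's learned position embeddings.
    pos = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    i = np.arange(d_model)[None, :]                    # (1, d_model)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

rng = np.random.default_rng(0)
embedding = rng.normal(size=(100, 768))   # toy vocabulary of 100 tokens, width 768

tokens = np.array([5, 17, 42, 8])         # a toy tokenized contract snippet
e_input = embedding[tokens]               # e(input) in Eq. (1)
preprocessed = e_input + positional_encoding(len(tokens), 768)   # Eq. (1)

print(preprocessed.shape)  # (4, 768): one 768-dimensional vector per token
```

The hidden width of 768 matches the encoder output length stated in Section 4.2.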
In detail, as shown in Figure 3, we first tokenize the opcodes and convert them into integer values. The tokenized opcode features are then embedded into a dense vector space using an embedding layer with 200 dimensions:

token = Tokenize(opcode)
vector_space = Embedding(token)   (4)

Then, the opcode vectors are fed into two BiLSTM layers with 128 and 64 units, respectively. Moreover, to reduce overfitting, a Dropout layer is applied after the first BiLSTM layer, as in (5):

bi_lstm1 = Bi_LSTM(128)(vector_space)
r = Dropout(dense(bi_lstm1))   (5)
bi_lstm2 = Bi_LSTM(128)(r)

Finally, the output of the last BiLSTM layer is fed into a dense layer with 64 units and a ReLU activation function, as in (6):

lstm_output = Dense(64, relu)(bi_lstm2)   (6)

Figure 3: The architecture of the BiLSTM component in VulnSense.

4.4. Graph Neural Network (GNN)

To offer insights into the structural characteristics of smart contracts based on bytecode, we present a CFG-based GNN model, which forms the third branch of our multimodal model, as shown in Figure 4.

In this branch, we first extract the CFG from the bytecode and then use OpenAI's embedding API to encode the nodes and edges of the CFG into vectors, as in (7):

encode = Encoder(edges, nodes)   (7)

The encoded vectors have a length of 1536. These vectors are then passed through 3 GCN layers with ReLU activation functions (8), with the first layer having an input length of 1536 and an output length given by a custom hidden_channels (hc) variable:

GCN1 = GCNConv(1536, relu)(encode)
GCN2 = GCNConv(hc, relu)(GCN1)   (8)
GCN3 = GCNConv(hc)(GCN2)

Finally, to feed into the multimodal deep learning model, the output of the GCN layers is fed into 2 dense layers with 3 and 64 units, respectively, as described in (9):

d1_gnn = Dense(3, relu)(GCN3)
gnn_output = Dense(64, relu)(d1_gnn)   (9)

Figure 4: The architecture of the GNN component in VulnSense.

4.5. Multimodal

Each of these branches contributes a unique dimension of analysis, allowing us to capture intricate patterns and nuances present in the smart contract data. We therefore adopt an innovative approach that synergistically concatenates the outputs of the three distinctive models, namely bert_output from BERT (3), lstm_output from BiLSTM (6), and gnn_output from GNN (9), to enhance the accuracy and depth of our predictive model, as shown in (10):

c = Concatenate([bert_output, lstm_output, gnn_output])   (10)

The output c is then transformed into a 3D tensor with dimensions (batch_size, 194, 1) using a Reshape layer (11):

c_reshaped = Reshape((194, 1))(c)   (11)

Next, the transformed tensor c_reshaped is passed through a 1D convolutional layer (12) with 64 filters and a kernel size of 3, utilizing the rectified linear activation function:

conv_out = Conv1D(64, 3, relu)(c_reshaped)   (12)

The output from the convolutional layer is then flattened (13) to generate a 1D vector:

f_out = Flatten()(conv_out)   (13)

The flattened tensor f_out is subsequently passed through a fully connected layer with 32 units and a rectified linear activation function, as in (14):

d_out = Dense(32, relu)(f_out)   (14)

Finally, the output is passed through the softmax activation function (15) to generate a probability distribution across the three output classes:

ỹ = Dense(3, softmax)(d_out)   (15)

This architecture forms the final stage of our model, culminating in the generation of predicted probabilities for the three output classes.
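The shapes flowing through the fusion head of Eqs. (10)-(15) can be checked with a small numpy sketch. The weights are random, and the 66-dimensional BERT output is an assumption inferred from the stated concatenation length (66 + 64 + 64 = 194); this is a sketch, not the trained VulnSense implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

bert_out = rng.normal(size=66)    # assumed width so that 66 + 64 + 64 = 194
lstm_out = rng.normal(size=64)    # Eq. (6)
gnn_out = rng.normal(size=64)     # Eq. (9)
c = np.concatenate([bert_out, lstm_out, gnn_out])        # Eq. (10): length 194

c_reshaped = c.reshape(194, 1)                            # Eq. (11), batch dim omitted

def conv1d_relu(x, kernels):
    # Eq. (12): 'valid' 1D convolution followed by ReLU.
    k = kernels.shape[0]
    out = np.stack([np.tensordot(x[i:i + k], kernels, axes=([0, 1], [0, 1]))
                    for i in range(x.shape[0] - k + 1)])
    return np.maximum(out, 0.0)

conv_out = conv1d_relu(c_reshaped, rng.normal(size=(3, 1, 64)))  # 64 filters, kernel 3
f_out = conv_out.reshape(-1)                              # Eq. (13): Flatten
d_out = np.maximum(rng.normal(size=(32, f_out.size)) @ f_out, 0.0)  # Eq. (14)
logits = rng.normal(size=(3, 32)) @ d_out
y = np.exp(logits - logits.max())
y /= y.sum()                                              # Eq. (15): softmax

print(conv_out.shape, f_out.shape, y.shape)  # (192, 64) (12288,) (3,)
```

With a kernel size of 3 and no padding, the 194-step input yields 192 convolution positions, so flattening gives a 12288-dimensional vector before the 32-unit dense layer.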
5. Experiments and Analysis

5.1. Experimental Settings and Implementation

In this work, we use a virtual machine (VM) with an Intel Xeon(R) CPU E5-2660 v4 @ 2.00GHz x 24 and 128GB of RAM, running Ubuntu 20.04, for our implementation. Furthermore, all experiments are evaluated under the same experimental conditions. The proposed model is implemented in the Python programming language, using well-established libraries such as TensorFlow and Keras.

For all experiments, we utilized a fine-tuning strategy to improve the performance of the models during the training stage. We set the batch size to 32 and the learning rate of the Adam optimizer to 0.001. Additionally, to avoid overfitting, the dropout ratio was set to 0.03.

5.2. Performance Metrics

We evaluate our proposed method via the following 4 metrics: Accuracy, Precision, Recall, and F1-Score. Since our work conducts experiments on multi-class classification tasks, the value of each metric is computed based on a 2D confusion matrix, which includes True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN). Accuracy is the ratio of correct predictions (TP and TN) over all predictions. Precision measures the proportion of TP over all samples classified as positive. Recall is defined as the proportion of TP over all positive instances in the testing dataset. F1-Score is the harmonic mean of Precision and Recall.

5.3. Dataset and Preprocessing

Table 1: Distribution of labels in the dataset

Vulnerability Type    Contracts
Arithmetic            631
Re-entrancy           591
Non-Vulnerability     547

For this dataset, we combine three datasets: Smartbugs Curated [33, 34], SolidiFI-Benchmark [35], and Smartbugs Wild [33, 34]. From the Smartbugs Wild dataset, we collect smart contracts containing a single vulnerability (either an Arithmetic vulnerability or a Reentrancy vulnerability); a contract is confirmed as vulnerable when at least two currently available vulnerability detection tools identify it as such. In total, our dataset includes 547 non-vulnerable contracts, 631 contracts with Arithmetic vulnerabilities, and 591 contracts with Reentrancy vulnerabilities, as shown in Table 1.

5.3.1. Source Code Smart Contract

When programming, developers often have the habit of writing comments to explain their source code, aiding both themselves and other programmers in understanding the code snippets. BERT, a natural language processing model, takes the source code of smart contracts as its input and calculates the relevance of words within the code. Comments present in the code can introduce noise into the BERT model, causing it to compute unnecessary information about the smart contract's source code. Hence, preprocessing the source code before feeding it into the BERT model is necessary.

Figure 5: An example of a smart contract prior to processing.

Moreover, removing comments from the source code also helps reduce the length of the input fed into the model. To further reduce the source code length, we also eliminate extra blank lines and unnecessary whitespace. Figure 5 provides an example of an unprocessed smart contract from our dataset: this contract contains comments following the '//' syntax, blank lines, and excessive whitespace that does not adhere to programming standards. Figure 6 shows the same smart contract after processing.

Figure 6: An example of a processed smart contract.
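The source code cleaning of Section 5.3.1 (stripping comments, blank lines, and trailing whitespace) can be sketched as follows. This is a simplified illustration: it does not guard against '//' appearing inside string literals, which a production preprocessor would need to handle.

```python
import re

def preprocess_source(sol_src):
    # Remove '/* ... */' block comments, '//' line comments, blank lines,
    # and trailing whitespace from Solidity source (Section 5.3.1 sketch).
    src = re.sub(r"/\*.*?\*/", "", sol_src, flags=re.S)   # block comments
    src = re.sub(r"//[^\n]*", "", src)                    # line comments
    lines = [ln.rstrip() for ln in src.splitlines()]
    return "\n".join(ln for ln in lines if ln.strip())

raw = """
pragma solidity ^0.4.24;   // compiler version

/* a sample contract */
contract Sample {
    uint count;            // state variable

    function inc() public { count += 1; }
}
"""
print(preprocess_source(raw))
```

As in the paper's Figure 5 to Figure 6 transformation, the output retains only the code itself, shortening the input sequence that BERT must encode.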
5.3.2. Opcode Smart Contracts

We proceed with bytecode extraction from the source code of the smart contract, followed by opcode extraction from the bytecode. The opcodes within a contract fall into 10 functional groups totaling 135 opcodes, according to the Ethereum Yellow Paper [36]. However, we have condensed them based on Table 2.

Table 2: The simplified opcode methods

Substituted Opcode    Original Opcodes
DUP                   DUP1-DUP16
SWAP                  SWAP1-SWAP16
PUSH                  PUSH5-PUSH32
LOG                   LOG1-LOG4

During the preprocessing phase, unnecessary hexadecimal characters are removed from the opcodes. The purpose of this preprocessing is to prepare the opcodes for vulnerability detection in smart contracts using the BiLSTM model.

In addition to this opcode simplification, we performed further preprocessing steps to prepare the data for the BiLSTM model. First, we tokenized the opcodes into sequences of integers. Subsequently, we applied padding to create opcode sequences of the same length; the maximum sequence length was set to 200, which is the maximum length the BiLSTM model can handle. After the padding step, we employ a word embedding layer to transform the encoded opcode sequences into fixed-size vectors, which serve as inputs for the BiLSTM model and enable it to better learn the representations of opcode sequences. In general, these preprocessing steps are crucial for preparing the data for the BiLSTM model and for enhancing its performance in detecting vulnerabilities in smart contracts.

5.3.3. Control Flow Graph

To train the GNN model, we encode the nodes and edges of the CFG into numerical vectors. One approach is to use embedding techniques to represent these entities as vectors. In this case, we utilize the OpenAI embedding API to encode nodes and edges into vectors of length 1536; this can be viewed as a customized approach built on OpenAI's pre-trained deep learning models. Once the nodes and edges of the CFG are encoded into vectors, we employ them as inputs for the GNN model.

Figure 8: Visualization of a graph from a .cfg.gv file.
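The opcode preprocessing of Section 5.3.2 (Table 2 simplification, operand removal, tokenization, and padding to length 200) can be sketched as follows. The on-the-fly vocabulary and the sample trace are illustrative assumptions; the paper applies a learned 200-dimensional embedding on top of these integer ids.

```python
import re

def simplify(op):
    # Collapse numbered variants per Table 2: DUP1-16, SWAP1-16, LOG1-4,
    # and PUSH5-PUSH32 (PUSH1-PUSH4 are kept as-is, per Table 2).
    if re.fullmatch(r"(DUP|SWAP)(1[0-6]|[1-9])|LOG[1-4]", op):
        return re.sub(r"\d+$", "", op)
    m = re.fullmatch(r"PUSH(\d+)", op)
    if m and 5 <= int(m.group(1)) <= 32:
        return "PUSH"
    return op

def clean(opcodes):
    # Drop hexadecimal operand tokens, then simplify the remaining opcodes.
    return [simplify(op) for op in opcodes
            if not re.fullmatch(r"0x[0-9a-fA-F]+", op)]

def tokenize_and_pad(opcodes, vocab, max_len=200):
    # Map opcodes to integer ids (0 is reserved for padding); pad/truncate.
    ids = [vocab.setdefault(op, len(vocab) + 1) for op in opcodes]
    return (ids + [0] * max_len)[:max_len]

trace = ["PUSH6", "0x11223344aabb", "PUSH1", "0x60", "MSTORE", "DUP2", "SWAP1", "LOG2"]
vocab = {}
seq = tokenize_and_pad(clean(trace), vocab)

print(clean(trace))       # ['PUSH', 'PUSH1', 'MSTORE', 'DUP', 'SWAP', 'LOG']
print(seq[:8], len(seq))  # [1, 2, 3, 4, 5, 6, 0, 0] 200
```

Note that PUSH6 collapses to PUSH while PUSH1 survives, matching the PUSH5-PUSH32 range given in Table 2.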
To obtain these graphs, we first extract the bytecode from the smart contract and then derive the CFG from the bytecode into .cfg.gv files, as shown in Figure 7. From such a .cfg.gv file, the CFG of a smart contract can be represented as shown in Figure 8. The nodes in the CFG typically represent code blocks or states of the contract, while the edges represent control-flow connections between nodes.

Figure 7: Graph extracted from bytecode.

5.4. Experimental Scenarios

To demonstrate the efficiency of our proposed model and of the compared models, we trained a total of 7 models, categorized into two types: unimodal deep learning models and multimodal deep learning models. On the one hand, the unimodal models are the models within each branch of VulnSense. On the other hand, the multimodal models are the pairwise (2-way) combinations of the three unimodal models, plus VulnSense itself. Specifically:

Unimodal:
- BiLSTM
- BERT
- GNN

Multimodal:
- Multimodal BERT-BiLSTM (M1)
- Multimodal BERT-GNN (M2)
- Multimodal BiLSTM-GNN (M3)
- VulnSense (as described in Section 4)

Furthermore, to illustrate the fast convergence and stability of the multimodal method, we train and validate the 7 models with 3 different numbers of training epochs: 10, 20, and 30.

Table 3: The performance of the 7 models

Score      Epoch  BERT    BiLSTM  GNN     M1      M2      M3      VulnSense
Accuracy   E10    0.5875  0.7316  0.5960  0.7429  0.6468  0.7542  0.7796
           E20    0.5903  0.6949  0.5988  0.7796  0.6553  0.7768  0.7796
           E30    0.6073  0.7146  0.6016  0.7796  0.6525  0.7683  0.7796
Precision  E10    0.5818  0.7540  0.4290  0.7749  0.6616  0.7790  0.7940
           E20    0.6000  0.7164  0.7209  0.7834  0.6800  0.7800  0.7922
           E30    0.6000  0.7329  0.5784  0.7800  0.7000  0.7700  0.7800
Recall     E10    0.5876  0.7316  0.5960  0.7429  0.6469  0.7542  0.7797
           E20    0.5900  0.6949  0.5989  0.7797  0.6600  0.7800  0.7797
           E30    0.6100  0.7147  0.6017  0.7700  0.6500  0.7700  0.7700
F1         E10    0.5785  0.7360  0.4969  0.7509  0.6520  0.7602  0.7830
           E20    0.5700  0.6988  0.5032  0.7809  0.6600  0.7792  0.7800
           E30    0.6000  0.7185  0.5107  0.7700  0.6500  0.7700  0.7750

5.5. Experimental Results

The experimentation process for the models was carried out on the dataset detailed in Section 5.3.

5.5.1. Model performance evaluation

The results in Table 3 show that, in these experiments, multimodal deep learning models detect vulnerabilities in smart contracts more effectively than unimodal models. Specifically, when testing the 3 multimodal models M1, M3, and VulnSense over the 3 epoch settings, performance is always higher than 75.09% on the 4 metrics mentioned above, whereas the testing performance of M2 and of the 3 unimodal models (BERT, BiLSTM, and GNN) is below 75% on all 4 metrics. Moreover, across all 3 epoch settings, the VulnSense model achieved the highest F1-Score (more than 77%) and the highest Accuracy (more than 77.96%).

In addition, Figure 9 provides a more detailed view of the performance of all 7 models at the last training epoch. Among the multimodal models, VulnSense performs best, with per-class accuracies of 84.44%, 64.08%, and 84.48% on the Arithmetic, Reentrancy, and Clean labels, respectively, followed by the M3, M1, and M2 models. Even though the GNN unimodal model attained an accuracy of 85.19% on the Arithmetic label and 82.76% on the Reentrancy label, its accuracy on the Clean label was merely 1.94%. Similarly, among the unimodal models, BiLSTM and BERT both yielded relatively low accuracies, below 80%, on all 3 labels.

Figure 9: Confusion matrices at the 30th epoch: (a) BERT, (b) M1, (c) BiLSTM, (d) M2, (e) GNN, (f) M3, (g) VulnSense; panels (a), (c), (e) show the unimodal models and (b), (d), (f), (g) the multimodal models.

Furthermore, the results in Figure 10 demonstrate the superior convergence speed and stability of the VulnSense model compared to the other 6 models. In detail, after only 10 training epochs, VulnSense attained the highest performance, above 77.96% on all 4 metrics. Although VulnSense, M1, and M3 all perform well after 30 training epochs, VulnSense needs only 10 epochs to converge, whereas M1 and M3 require 30. Besides, throughout the 30 training epochs, the M3, M1, BiLSTM, and M2 models exhibited performance similar to VulnSense but with some instability: while VulnSense maintains a consistent performance level within the range of 75-79%, the M3 model experienced a severe decline in both Accuracy and F1-Score, dropping by over 20% at the 15th epoch.

Figure 10: The performance of the 7 models over the 3 different numbers of training epochs: (a) Accuracy, (b) Precision, (c) Recall, (d) F1-Score.

These findings indicate that our proposed model, VulnSense, is more efficient at identifying vulnerabilities in smart contracts than the other models. Furthermore, by harnessing the advantages of multimodal over unimodal learning, VulnSense also exhibits consistent performance and rapid convergence.

5.5.2. Comparisons of Time

Figure 11 illustrates the training time for 30 epochs of each model. Concerning the unimodal models, the training time of the GNN model is very short, at only 7.114 seconds, whereas the BERT model requires a significantly longer training time of 252.814 seconds; the training time of the BiLSTM model is roughly 10 times longer than that of the GNN model. Among the multimodal models, the shortest training time belongs to the M3 model (the combination of BiLSTM and GNN) at 81.567 seconds, while M1, M2, and VulnSense all incorporate the BERT model and therefore take over 270 seconds for 30 epochs. It is evident that the constituent unimodal models significantly impact the training time of the multimodal models they contribute to. Although VulnSense takes more time per epoch than the 6 other models, it requires only 10 epochs to converge, which reduces its overall training time by about 66% compared to the other 6 models.

Figure 11: Comparison of training time for 30 epochs among the models.

In addition, Figure 12 illustrates the prediction time on the same testing set for each model. The multimodal models M1, M2, and VulnSense, which incorporate BERT, as well as the unimodal BERT model itself, exhibit extended testing durations, surpassing 5.7 seconds for a set of 354 samples. Meanwhile, the testing durations of the GNN, BiLSTM, and M3 models are remarkably brief: approximately 0.2104, 1.4702, and 2.0056 seconds, respectively. It is noticeable that the constituent unimodal models directly influence the prediction time of the multimodal models in which they are involved. Between the 2 most effective multimodal models, M3 and VulnSense, M3 gives the shortest testing time, about 2.0056 seconds, whereas VulnSense exhibits the longest prediction time, about 7.4964 seconds, roughly four times that of M3. While the M3 model outperforms VulnSense in training and testing duration, VulnSense surpasses M3 in accuracy. Nevertheless, in the context of detecting vulnerabilities in smart contracts, increasing accuracy is more important than reducing execution time. Consequently, VulnSense decidedly outperforms the M3 model.

Figure 12: Comparison of prediction time on the test set for each model.

6. Conclusion

In conclusion, our study introduces a pioneering approach, VulnSense, which harnesses the potency of multimodal deep learning, incorporating graph neural networks and natural language processing, to effectively detect vulnerabilities within Ethereum smart contracts. By synergistically leveraging the strengths of diverse features and cutting-edge techniques, our framework surpasses the limitations of traditional single-modal methods. The results of comprehensive experiments underscore the superiority of our approach in terms of accuracy and efficiency, outperforming conventional deep learning techniques. This affirms the potential and applicability of our approach in bolstering Ethereum smart contract security. The significance of this research extends beyond its immediate applications: it contributes to the broader discourse on enhancing the integrity of blockchain-based systems. As the adoption of smart contracts continues to grow, the vulnerabilities associated with them pose considerable risks. Our proposed methodology not only addresses these vulnerabilities but also paves the way for future research in the realm of multimodal deep learning and its diversified applications.

In closing, VulnSense not only marks a significant step towards securing Ethereum smart contracts but also serves as a stepping stone for the development of advanced techniques in blockchain security. As the landscape of cryptocurrencies and blockchain evolves, our research remains poised to contribute to the ongoing quest for enhanced security and reliability in decentralized systems.

Acknowledgment

This research was supported by The VNUHCM-University of Information Technology's Scientific Research Support Fund.

References

[1] D. Ghimiray (Feb 2023). URL https://www.avast.com/c-silk-road-dark-web-market
[2] W. Zou, D. Lo, P. S. Kochhar, X.-B. D. Le, X. Xia, Y. Feng, Z. Chen, B. Xu, Smart contract development: Challenges and opportunities, IEEE Transactions on Software Engineering 47 (10) (2021) 2084-2106.
[3] I. Mehar, C. Shier, A. Giambattista, E. Gong, G. Fletcher, R. Sanayhie, H. Kim, M. Laskowski, Understanding a revolutionary and flawed grand experiment in blockchain: The DAO attack, Journal of Cases on Information Technology 21 (2019) 19-32.
[4] S. S. Kushwaha, S. Joshi, D. Singh, M. Kaur, H.-N. Lee, Ethereum smart contract analysis tools: A systematic review, IEEE Access 10 (2022) 57037-57062.
[5] S. Badruddoja, R. Dantu, Y. He, K. Upadhyay, M. Thompson, Making smart contracts smarter, 2021, pp. 1-3.
[6] Nveloso, nveloso/conkas: Ethereum Virtual Machine (EVM) bytecode or Solidity smart contract static analysis tool based on symbolic execution. URL https://github.com/nveloso/conkas
[7] ConsenSys, consensys/mythril: Security analysis tool for EVM bytecode; supports smart contracts built for Ethereum, Hedera, Quorum, Vechain, Rootstock, Tron and other EVM-compatible blockchains. URL https://github.com/ConsenSys/mythril
[8] P. Tsankov, A. Dan, D. Drachsler-Cohen, A. Gervais, F. Buenzli, M. Vechev, Securify: Practical security analysis of smart contracts, in: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS '18), ACM, New York, NY, USA, 2018, pp. 67-82.
[9] O. Lutz, H. Chen, H. Fereidooni, C. Sendner, A. Dmitrienko, A. R. Sadeghi, F. Koushanfar, ESCORT: Ethereum smart contracts vulnerability detection using deep neural network and transfer learning (2021). arXiv:2103.12607.
[10] W. Wang, J. Song, G. Xu, Y. Li, H. Wang, C. Su, ContractWard: Automated vulnerability detection models for Ethereum smart contracts, IEEE Transactions on Network Science and Engineering 8 (2) (2021) 1133-1144.
[11] P. Qian, Z. Liu, Q. He, R. Zimmermann, X. Wang, Towards automated reentrancy detection for smart contracts based on sequential models, IEEE Access 8 (2020) 19685-19695.
[12] F. Jiang, K. Chao, J. Xiao, Q. Liu, K. Gu, J. Wu, Y. Cao, Enhancing smart-contract security through machine learning: A survey of approaches and techniques, Electronics 12 (9) (2023).
[13] M. Khodadadi, J. Tahmoresnezhad, HyMo: Vulnerability detection in smart contracts using a novel multi-modal hybrid model (2023). arXiv:2304.13103.
[14] J. Chen, X. Xia, D. Lo, J. Grundy, X. Luo, T. Chen, DefectChecker: Automated smart contract defect detection by analyzing EVM bytecode, IEEE Transactions on Software Engineering 48 (7) (2022) 2189-2207.
[15] Solidity documentation. URL https://docs.soliditylang.org/en/develop/index.html
[16] J. Summaira, X. Li, A. M. Shoib, J. Abdul, A review on methods and applications in multimodal deep learning (2022). arXiv:2202.09195.
[17] T. Baltrusaitis, C. Ahuja, L.-P. Morency, Multimodal machine learning: A survey and taxonomy, IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (2) (2019) 423-443.
[18] W. Nam, B. Jang, A survey on multimodal bidirectional machine learning translation of image and natural language processing, Expert Systems with Applications (2023) 121168.
[19] P. Xu, X. Zhu, D. A. Clifton, Multimodal learning with transformers: A survey, IEEE Transactions on Pattern Analysis and Machine Intelligence (2023) 1-20.
[20] W. Jie, Q. Chen, J. Wang, A. S. Voundi Koe, J. Li, P. Huang, Y. Wu, Y. Wang, A novel extended multimodal AI framework towards vulnerability detection in smart contracts, Information Sciences 636 (2023) 118907.
[21] S. Kushwaha, S. Joshi, D. Singh, M. Kaur, H.-N. Lee, Ethereum smart contract analysis tools: A systematic review, IEEE Access (2022) 1-1.
[22] L. Luu, D.-H. Chu, H. Olickel, P. Saxena, A. Hobor, Making smart contracts smarter (2016). URL https://eprint.iacr.org/2016/633
[23] J. Feist, G. Grieco, A. Groce, Slither: A static analysis framework for smart contracts (08 2019).
[24] B. Jiang, Y. Liu, W. K. Chan, ContractFuzzer: Fuzzing smart contracts for vulnerability detection, in: Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering (ASE '18), ACM, New York, NY, USA, 2018, pp. 259-269.
[25] Y. Xue, J. Ye, W. Zhang, J. Sun, L. Ma, H. Wang, J. Zhao, xFuzz: Machine learning guided cross-contract fuzzing, IEEE Transactions on Dependable and Secure Computing (2022) 1-14.
[26] Z. Liu, P. Qian, X. Wang, Y. Zhuang, L. Qiu, X. Wang, Combining graph neural networks with expert knowledge for smart contract vulnerability detection, CoRR abs/2107.11598 (2021).
[27] Y. Zhuang, Z. Liu, P. Qian, Q. Liu, X. Wang, Q. He, Smart contract vulnerability detection using graph neural network, in: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20), 2020, pp. 3283-3290.
[28] H. H. Nguyen, N.-M. Nguyen, H.-P. Doan, Z. Ahmadi, T.-N. Doan, L. Jiang, MANDO-GURU: Vulnerability detection for smart contract source code by heterogeneous graph embeddings, in: Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2022), ACM, New York, NY, USA, 2022, pp. 1736-1740.
[29] L. Zhang, J. Wang, W. Wang, Z. Jin, C. Zhao, Z. Cai, H. Chen, A novel smart contract vulnerability detection method based on information graph and ensemble learning, Sensors 22 (9) (2022).
[30] M. Khodadadi, J. Tahmoresnezhad, HyMo: Vulnerability detection in smart contracts using a novel multi-modal hybrid model (2023). arXiv:2304.13103.
[31] D. Gibert, C. Mateu, J. Planes, HYDRA: A multimodal deep learning framework for malware classification, Computers & Security 95 (2020) 101873.
[32] W. Jie, Q. Chen, J. Wang, A. S. V. Koe, J. Li, P. Huang, Y. Wu, Y. Wang, A novel extended multimodal AI framework towards vulnerability detection in smart contracts, Information Sciences 636 (2023) 118907.
[33] T. Durieux, J. F. Ferreira, R. Abreu, P. Cruz, Empirical review of automated analysis tools on 47,587 Ethereum smart contracts, in: Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering, 2020, pp. 530-541.
[34] J. F. Ferreira, P. Cruz, T. Durieux, R. Abreu, SmartBugs: A framework to analyze Solidity smart contracts, in: Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering, 2020, pp. 1349-1352.
[35] A. Ghaleb, K. Pattabiraman, How effective are smart contract analysis tools? Evaluating smart contract static analysis tools using bug injection, in: Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis, 2020.
[36] D. D. Wood, Ethereum: A secure decentralised generalised transaction ledger, 2014.
arXiv:2309.10644v1 [cs.SE] 19 Sep 2023

Robin: A Novel Method to Produce Robust Interpreters for Deep Learning-Based Code Classifiers

Zhen Li1†, Ruqian Zhang1†, Deqing Zou1†∗, Ning Wang1†, Yating Li1†, Shouhuai Xu2, Chen Chen3, and Hai Jin4†
1 School of Cyber Science and Engineering, Huazhong University of Science and Technology, Wuhan, 430074, China
2 Department of Computer Science, University of Colorado Colorado Springs, USA
3 Center for Research in Computer Vision, University of Central Florida, USA
4 School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China
{zh li, ruqianzhang, deqingzou, wangn, leeyating}@hust.edu.cn, sxu@uccs.edu, chen.chen@crcv.ucf.edu, hjin@hust.edu.cn

† National Engineering Research Center for Big Data Technology and System, Services Computing Technology and System Lab, Hubei Key Laboratory of Distributed System Security, Hubei Engineering Research Center on Big Data Security, Cluster and Grid Computing Lab
∗ Corresponding author

Abstract—Deep learning has been widely used in source code classification tasks, such as classifying code according to its functionality, code authorship attribution, and vulnerability detection. Unfortunately, the black-box nature of deep learning makes it hard to interpret and understand why a classifier (i.e., classification model) makes a particular prediction on a given example. This lack of interpretability (or explainability) might have hindered the adoption of such classifiers by practitioners because it is not clear when they should or should not trust a classifier's prediction. The lack of interpretability has motivated a number of studies in recent years. However, existing methods are neither robust nor able to cope with out-of-distribution examples. In this paper, we propose a novel method to produce Robust interpreters for a given deep learning-based code classifier; the method is dubbed Robin. The key idea behind Robin is a novel hybrid structure combining an interpreter and two approximators, while leveraging the ideas of adversarial training and data augmentation. Experimental results show that on average the interpreter produced by Robin achieves a 6.11% higher fidelity (evaluated on the classifier), a 67.22% higher fidelity (evaluated on the approximator), and 15.87x higher robustness than the three existing interpreters we evaluated. Moreover, the interpreter is 47.31% less affected by out-of-distribution examples than that of LEMNA.

Index Terms—Explainable AI, deep learning, code classification, robustness

I. INTRODUCTION

In the past few years, an emerging field has focused on leveraging deep learning or neural networks to study various kinds of source code classification problems, such as classifying code based on its functionality [1], [2], code authorship attribution [3]-[7], and vulnerability detection [8]-[10]. While the accuracy of deep neural networks in this field may be satisfactory, the lack of interpretability, or explainability, remains a significant challenge. Deep neural networks are often considered black-boxes, which means they cannot provide explanations for why a particular prediction is made. This lack of interpretability poses a big hurdle to the adoption of these models in the real world (particularly in high-security scenarios), because practitioners do not know when they should trust the predictions made by these models and when they should not.

The importance of addressing the aforementioned lack of interpretability is well recognized by the research community [11]-[13], as evidenced by very recent studies. Existing studies on addressing the interpretability of source code classifiers (i.e., classification models) can be divided into two approaches: ante-hoc vs. post-hoc. The ante-hoc approach aims to provide built-in interpretability by leveraging the attention weight matrix associated with the neural network in question [14], [15], which in principle can be applied to explain the prediction on any example. The post-hoc approach aims to interpret the decision-making basis of a trained model. In the context of source code classification, this approach mainly focuses on local interpretation, which aims to explain predictions for individual examples by leveraging: (i) perturbation-based feature saliency [16], [17], which computes the importance scores of features by perturbing features in code examples and then observing the changes in prediction scores; or (ii) program reduction [18], [19], which uses the delta debugging technique [20] to reduce a program to a minimal set of statements while preserving the classifier's prediction.

The ante-hoc approach must be incorporated into the classifier training phase, meaning that it cannot help existing or given classifiers, for which we can only design interpreters that provide interpretability in a retrospective manner. In this paper we focus on how to retrospectively equip given code classifiers with interpretability, which is the focus of the post-hoc approach. However, existing post-hoc methods suffer from the following two problems. (i) The first problem is incurred by the possible out-of-distribution nature of a perturbed example in the perturbation-based feature saliency method. This is inevitable because the method uses perturbations to assess feature importance, by identifying the feature(s) whose absence causes a significant decrease in prediction accuracy. When a legitimate example is perturbed into an out-of-distribution input, it is unknown whether the drop in accuracy is caused by the absence of certain feature(s) or by the out-of-distribution nature of the perturbed example [21], [22]. (ii) The second problem is the lack of robustness, which is inherent to the local interpretation approach and thus common to both the perturbation-based feature saliency method and the program reduction method. This is because the local interpretation approach optimizes the interpretation of each example independently of the others, meaning that overfitting the noise associated with individual examples is very likely [23]. As a consequence, an interpretation can change significantly under even a slight modification to an example, and this kind of sensitivity could be exploited by attackers to ruin the interpretability [24]. The weaknesses of the existing approaches motivate us to investigate better methods for interpreting the predictions of deep learning-based code classifiers.

Our Contributions. This paper introduces Robin, a novel method for producing high-fidelity and Robust interpreters in the post-hoc approach with local interpretation. Specifically, this paper makes three contributions.

First, we address the aforementioned out-of-distribution problem by introducing a hybrid interpreter-approximator structure. More specifically, we design (i) an interpreter to identify the features that are important for making accurate predictions, and (ii) two approximators, such that one is used to make predictions based on these important features and the other is used to make predictions based on the remaining features. These approximators are reminiscent of fine-tuning a classifier with perturbed training examples in which some features have been removed. As a result, a perturbed test example is no longer an out-of-distribution example to the approximators, meaning that the reduced accuracy of the classifier can be attributed to the removal of features (rather than to out-of-distribution examples).

Second, we mitigate the robustness problem by leveraging adversarial training and data augmentation. (i) Corresponding to adversarial training, we generate perturbed examples with the same functionality as the original example; this similarity allows us to compute a loss in interpretability and leverage this loss to train the interpreter. (ii) Corresponding to mixup, we generate a set of virtual examples by linearly interpolating the original examples and their perturbed versions in the feature space; these examples are virtual because they are obtained in the feature space (rather than the example space) and thus may not correspond to any legitimate code example (e.g., a virtual example may not correspond to a legitimate program). Different from traditional data augmentation, we train the interpreter and the two approximators jointly, rather than solely training the interpreter on virtual examples, due to the lack of ground truth for the virtual examples (i.e., which are the k important features).

Third, we empirically evaluate Robin's effectiveness and compare it with the known post-hoc methods in terms of fidelity, robustness, and effectiveness. Experimental results show that on average the interpreter produced by Robin achieves a 6.11% higher fidelity (evaluated on the classifier), a 67.22% higher fidelity (evaluated on the approximator), and 15.87x higher robustness than the three existing interpreters we evaluated. Moreover, the interpreter is 47.31% less affected by out-of-distribution examples than that of LEMNA [25]. We have made the source code of Robin publicly available at https://github.com/CGCL-codes/Robin.

Paper Organization. Section II presents a motivating instance. Section III describes the design of Robin. Section IV presents our experiments and results. Section V discusses the limitations of the present study. Section VI reviews related prior studies. Section VII concludes the paper.

II. A MOTIVATING INSTANCE

To illustrate the aforementioned problem inherent to the local interpretation approach, we consider a specific instance of code classification in the context of code functionality classification via TBCNN [2]. Although TBCNN offers code functionality classification capabilities, it does not offer any interpretability for its predictions. To make its predictions interpretable, an interpreter is required.
We adapt the method tance of the features extracted by the interpreter, we use the proposed in [16], which was originally used to interpret approximators (rather than the given classifier) to mitigate the software vulnerability detectors, to code functionality classifi- side-effectthatmaybecausedbyout-of-distributionexamples. cation because there are currently no existing interpreters for Second, we address the lack of interpretation robustness thispurpose(tothebestofourknowledge).Thisadaptationis by leveraging the ideas of adversarial training and mixup feasible because the method involves deleting features in the to augment the training set. More specifically, we generate a feature space and observing the impact on predictions, which set of perturbed examples for a training example (dubbed the is equally applicable to code functionality classification. original example) as follows. (i) Corresponding to adversarial TheoriginalcodeexampleinFigure1(a)isusedtocompare training but different from traditional adversarial training in two strings for equality. We create a perturbed example by other contexts, the ground-truth labels (i.e., what the k impor- changing the variable names, as illustrated in Figure 1 (b). tantfeaturesare)cannotbeobtained,makingitdifficulttoadd Despite the change in variable names, the perturbed example perturbed examples to the training set for adversarial training. maintainsthesamefunctionalityandsemanticsastheoriginal We overcome this by measuring the similarity between the example. Additionally, both the original and perturbed exam- interpretation of the prediction on the original example and ples are classified into the same category by the classifier. the interpretation of the prediction on the perturbed example, Upon applying an interpreter, adapted from [16], to TBCNN, which is obtained in the example space rather than feature the five most important features of the original and perturbed
space (i.e., the perturbed example is still a legitimate program examples are identified, and highlighted in Fig. 1(a) and 1(b)int main() { char a[100],b[100]i;nt main() { int main(){ fidelity,thetwoapproximatorsaretrainedsimultaneouslysuch int m,n,t=1,r[100]={0c}h,ia,jr; a[100],b[100]; char S[100],shuchuin[1t0 m0]a;in(){ thattheimportantfeaturescontainthemostusefulinformation scanf("%s%s",a,b); int m,n,t=1,r[100]={0},i,j; int Q,cc3,cc2=1,L[100c]h=a{0r }S,[d1,s0o0r]t,;shuchu[100]; m=strlen(a); scanf("%s%s",a,b); scanf("%s%s",S,shuchinut) ;Q,cc3,cc2=1,L[100]={0}f,od,rsormt;aking predictions while the other features contain the n=strlen(b); m=strlen(a); Q=strlen(S); scanf("%s%s",S,shuchu); least useful information for making predictions. if(m==n){ n=strlen(b); cc3=strlen(shuchu); Q=strlen(S); for(i=0;i<=m-1;i++i)f(m==n){ if(Q==cc3){ cc3=strlen(shuchu); To make the interpreter robust, we leverage two ideas. The for(j=0;j<=n-1;j++)for(i=0;i<=m-1;i++) for(d=0;d<=Q-1;d+i+f()Q==cc3){ first idea is to use adversarial training [27], [28] where an if(b[j]==a[i]&&r[j]=fo=0r()j ={0;j<=n-1;j++) for(sort=0;sort<=ccfo3-r1(d;s=o0r;td+<+=)Q-1;d++) r[j]=1; if(b[j]==a[i]&&r[j]==0) { if(shuchu[sort]==S[fdo]r&(s&or Lt=[s0o;rsto]r=t=<0=)c {c3-o1;rsiogrti+n+)alexampleanditsperturbedexamplewillhavethesame break; r[j]=1; L[sort]=1; if(shuchu[sort]==S[pd]r&e&d Li[csotirot]=n=.0)I {n sharp contrast to traditional adversarial training } break; break; L[sort]=1; for(i=0;i<=n-1;i++) } } break; in other contexts where ground-truth can be obtained, it is if(r[i]==0) {t=0; brefaokr(;i}=0;i<=n-1;i++) for(d=0;d<=cc3-1;d++}) difficulttoobtaintheground-truthlabelsinthissettingbecause } if(r[i]==0) {t=0; break;} if(L[d]==0) {cc2=0; fborre(da=k;0};d<=cc3-1;d++) else t=0; } } if(L[d]==0) {cc2=0; brewake;}donotknowwhichfeaturesareindeedthemostimportant if(t==1) printf("YES\ne"l)s;e t=0; else cc2=0; } onesevenforthetrainingexamples.Thatis,wecannotsimply else printf("NO\n");if(t==1) printf("YES\n"); 
if(cc2==1) printf("YESe\lnse") c;c2=0; return 0; else printf("NO\n"); else printf("NO\n");if(cc2==1) printf("YES\n");use traditional adversarial training method to add perturbed } return 0; return 0; else printf("NO\n"); examples to training set because the “labels” (i.e., the k im- } } return 0; (a) Originalexample (b) Perturbe}dexample portant features) of original examples and perturbed examples Fig. 1. An original example and its perturbed example (modified code is cannot be obtained. We overcome this by (i) generating a set highlighted in blue color and italics), where red boxes highlight the 5 most of perturbed examples via code transformation such that the importantfeatures. prediction on the perturbed example remains the same, and respectively. Notably, only one important feature is common (ii) adding a constraint term to the loss function to make between the two examples, revealing that the interpreter lacks the interpretations of the original example and the perturbed robustness.Thislackofrobustnessoftheinterpretermaycause example as similar to each other as possible. users to question the reliability of the classifier’s predictions The second idea is to leverage mixup [29] to augment the due to the erroneous interpretation. trainingset.Insharpcontrasttotraditionaldataaugmentation, we cannot train the interpreter from the augmented dataset III. DESIGNOFROBIN for the lack of ground-truth (i.e., the important features of Notations. A program code example, denoted by x i, can an original example and its perturbed examples can not be represented as a n-dimensional feature vector x i = be obtained). 
We overcome this issue by (i) using code (x i,1,x i,2,...,x i,n),wherex i,j (1≤j ≤n)isthejthfeature transformation to generate a perturbed example such that its ofx i.Acodeclassifier(i.e.,classificationmodel)M islearned predictionremainsthesameasthatoftheoriginalexample,(ii) fromatrainingset,denotedbyX,whereeachexamplex i ∈X mixing the original examples with the perturbed examples to is associated with a label y i. Denote by M(x i) the prediction generatevirtualexamples,and(iii)optimizingthepreliminary of classifier M on a example x i. interpreter by training the interpreter and two approximators
Our goal is to propose a novel method to produce an jointly on virtual examples. Note that the difference between interpreter, denoted by E, for any given code classifier M theaforementionedadversarialexamplesandvirtualexamples and test set U such that for test example u i ∈U, E identifies is that the former are obtained by perturbation in the example k important features to explain why M makes a particular space but the latter is obtained in the feature space. prediction on u , where k ≪ n. It is intuitive that the k im- i Design Overview. Fig. 2 highlights the training process of portantfeaturesofexampleu shouldbelargely,ifnotexactly, i the same as the k important features of u′ which is perturbed Robin,whichproducesanoptimizedinterpreterinthreesteps. i fromu i.DenotebyE(u i)=(u i,α1,...,u i,αk)thek important • Step I: Generating perturbed examples. This step gen- features identified by E, where {α ,...,α }⊂{1,...,n}. erates perturbed examples from a training example by 1 k conducting semantics-preserving code transformations such A. Basic Idea and Design Overview that the perturbed example has the same prediction as that Basic Idea. In terms of the out-of-distribution problem asso- of the original example. ciated with existing interpretation methods, we observe that • Step II: Generating a preliminary interpreter. Given a the absence of perturbed examples in the training set makes classifier for which we want to equip with interpretability, a classifier’s prediction accuracy with respect to the perturbed thisstepleveragestheperturbedexamplesgeneratedinStep examples affected by the out-of-distribution examples. 
Our I to train two approximators and an interpreter Siamese idea to mitigate this problem is to fine-tune a classifier for networkinaniterativefashion.TheinterpreterSiamesenet- perturbedexamplesbyusingahybridinterpreter-approximator work identifies the important features of original examples structure [26] such that (i) one interpreter is for identifying andthatoftheirperturbedexamples,andthencomputesthe the important features for making accurate prediction, (ii) one difference between these two sets. approximator is for using the important features (identified • Step III: Optimizing the preliminary interpreter. This by the interpreter) to making predictions, and (iii) another step optimizes the preliminary interpreter generated in Step approximatorisforusingtheotherfeatures(thantheimportant II by using mixup [29] to augment the training set and up- features) for making predictions. To improve the interpreter’s datethepreliminaryinterpreter’sparameters.TheoptimizedInput Step I: Generating perturbed examples Output Filtering out Identifying Generating Code examples coding style candidate perturbed perturbed examples Perturbed whose prediction in training set X attributes examples labels change examples Step II: Generating a preliminary interpreter Code classification Training the interpreter A preliminary model to be Training two approximators Siamese network interpreter explained Step III: Optimizing the preliminary interpreter Generating virtual Updating the interpreter’s An optimized examples parameters interpreter Fig. 2. Overview of the training process of Robin which produces an optimized interpreter in three steps: generating perturbed examples, generating a preliminaryinterpreter,andoptimizingthepreliminaryinterpreter. 
Code classification Prediction label: model to be explained 68 … Code … Candidate 1 typedef long longint lli; 2int main() { example 1 int main() { perturbed 3 cin>> tc; 2 cin>> tc; example 3 for (int t = 1; t <= tc; t++) { 4 for (int t = 1; t <= tc; t++) { 4 char c[30]; 5 char c[30]; 5 long longint num; 6 llinum; 6 string s; 7 string s; 7 scanf("%s", c); 8 scanf("%s", c); 8 sscanf(c, "%lld", &num); 9 sscanf(c, "%lld", &num); 9 s = c; 10 s = c; 10 long longint goal = ctdy(s); 11 lligoal = ctdy(s); 12 lliub= num, lb= 0, m; 11 long longint ub= num, lb= 0, m; Prediction label: 12 for (;ub-lb> 1;) { 13 while (ub-lb> 1) { 13 m = (ub+ lb) / 2; 68 14 m = (ub+ lb) / 2; 14 num = m; 15 num = m; 15 sprintf(c, "%lld", num); 16 sprintf(c, "%lld", num); 16 s = c; 17 s = c; 18 if (ctdy(s) != goal) { Line 1: Use global declaration 17 if (ctdy(s) != goal) { 19 lb= m; Line 4: Increment operation is used after the 18 lb= m; 20 } else { variable 19 } else { Perturbed 2 21 2 }ub= m; L Li in ne e 4 1: 3 L : o Loo op p s t sr tu ruct cu ture r e( (f fo or r , ,w wh hil ie le) ) 2 20 1 }ub= m; examples 23 } Line 24: Library function calls (printf, cout) 22 } 24 printf("Case #%d: %lld\n", t, ub); Line 26: Usage of return statement 23 cout<< "Case #" << t << ":“ << ub<< endl; 25 } … 24 } 25 return 0;} 26 } (a) Input (b) Step I.1: Identifying all (c) Step I.2: Generating (d) Step I.3: Filtering out (e) Output coding style attributes candidate perturbed examples perturbed examples whose prediction labels change Fig.3. Acodeexampleshowinggenerationofperturbedexamples(selectedcodingstyleattributesandmodifiedcodearehighlightedinred).
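As a concrete illustration of Step I, the following is a minimal sketch in Python with toy stand-ins: the classifier, the catalog of transformations, and all names here are hypothetical illustrations, not code from Robin. It applies randomly chosen semantics-preserving transformations to produce candidates and keeps only those whose predicted label is unchanged, mirroring the generate-then-filter pipeline described above.

```python
import random

random.seed(0)  # only for reproducibility of this sketch

def generate_perturbed_examples(x, transforms, classifier, m=3, theta=2):
    """Step I sketch: build m candidates by applying theta randomly chosen
    semantics-preserving transforms, then keep only the candidates whose
    predicted label matches the prediction on the original example."""
    original_label = classifier(x)
    kept = []
    for _ in range(m):
        candidate = x
        for t in random.sample(transforms, theta):
            candidate = t(candidate)  # each t is assumed semantics-preserving
        if classifier(candidate) == original_label:  # filtering substep
            kept.append(candidate)
    return kept

# Toy stand-ins: "programs" are strings; transforms rename identifiers.
transforms = [
    lambda s: s.replace("a", "var1"),
    lambda s: s.replace("b", "var2"),
    lambda s: s.replace("tmp", "t0"),
]
toy_classifier = lambda s: len(set(s)) % 2  # placeholder, not a real model

perturbed = generate_perturbed_examples("a b tmp", transforms, toy_classifier)
```

In a real instantiation the transforms would be the coding-style rewrites of Section III-B and the classifier would be the model to be explained; the filtering condition is the part that carries over unchanged.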
B. Generating Perturbed Examples

This step has three substeps. First, for each training example x_i ∈ X, we identify its coding style attributes related to the example's layout, lexis, syntax, and semantics (e.g., as defined in [30]). Let t_i denote the number of coding style attributes for x_i ∈ X. Fig. 3(a) shows a training example where the first line uses a global declaration but can be transformed such that no global declaration is used; Fig. 3(b) describes its coding style attributes. Second, we randomly select θ_i (θ_i < t_i) coding style attributes, repeat this process m times, and transform the value of each of these coding style attributes to any one of the other semantically-equivalent coding style attributes. Consequently, we obtain m candidate perturbed examples x_{i,+1}, x_{i,+2}, ..., x_{i,+m}, where x_{i,+j} (1 ≤ j ≤ m) denotes the j-th perturbed example generated by the semantics-equivalent code transformation of code example x_i. The labels of the m perturbed examples preserve the original example x_i's label y_i owing to the semantics-equivalent code transformations. As an instance, Fig. 3(c) shows a candidate perturbed example generated by transforming randomly selected coding style attributes, which are highlighted in red in Fig. 3(b). Third, we filter out perturbed examples whose prediction labels are different from the prediction labels of the corresponding original examples in X. The reason is that if the prediction labels of the perturbed examples change, the robustness of the interpreter cannot be judged by the difference of the interpretation between the perturbed examples and the original examples. As an instance in Fig. 3(d), the prediction label of the code example in Fig. 3(a) and the prediction label of the candidate perturbed example are the same, so the candidate perturbed example is a perturbed example that can be used for robustness enhancement. Finally, we obtain the set of perturbed examples for robustness enhancement of the interpreter.

Fig. 4. Overview of Step II (generating a preliminary interpreter), involving training two approximators and training the interpreter Siamese network iteratively.

C. Generating a Preliminary Interpreter

An ideal interpreter simultaneously achieves high fidelity and high robustness. (i) High fidelity indicates that the important features identified by interpreter E contain as much information as possible that is most useful for code classification, and the remaining non-important features contain as little information as possible that is useful for code classification. (ii) High robustness indicates that the important features identified by interpreter E to explain why M predicts x_i as the label y_i should not change dramatically for small perturbed examples which are predicted as label y_i. Robin achieves this by first generating a preliminary interpreter in Step II and then optimizing the preliminary interpreter further in Step III.

The purpose of Step II is to generate a preliminary interpreter by training two approximators and the interpreter Siamese network iteratively. The basic idea is as follows. (i) To achieve high fidelity, we introduce two approximators that have the same neural network structure for code classification, using the identified important features and non-important features as input, respectively. Since important features contain the most useful information for code classification, the accuracy of the approximator using only important features as input should be as high as possible. On the other hand, non-important features contain less information important for code classification, so the accuracy of the approximator using only non-important features as input should be as low as possible. (ii) To achieve high robustness, we introduce the interpreter Siamese network with two interpreters that have the same neural network structure and share weights, using the original code examples and the perturbed examples as input, respectively. For each original example and its corresponding perturbed examples, the Siamese network calculates the similarity distance between the important features of the original example and the important features of the perturbed examples identified by the two interpreters, and adds the similarity distance to the loss value to improve the interpreters' robustness during training.

Fig. 4 shows the structure of the neural network, involving an interpreter Siamese network and two approximators. The interpreter Siamese network involves two interpreters which have the same neural network structure and share weights. Their neural network structure depends on the structure of the code classifier to be explained. We divide the code classifier to be explained into two parts. One part extracts the features from the input code examples through a neural network to obtain the vector representation of the code examples, which is equivalent to an encoder; this part usually uses Batch Normalization, Embedding, LSTM, Convolutional layers, etc. The other part maps the vector representation to the output vector. When generating the structure of the interpreter, the first part of the code classifier is kept and the latter part is modified to a fully connected layer and a softmax layer, which maps the learned representation of code examples to the output space; the output is of the same length as the number of features, indicating whether each feature is labeled as important or not. These two interpreters are used to identify the important features of the code examples in training set X and of the perturbed examples generated in Step I, respectively.

The two approximators have the same neural network structure and are used to predict labels using important features and non-important features, respectively. They have the identical neural network architecture as the code classifier to be interpreted. However, instead of the code example as input, the interpreter provides the approximator with the important or non-important features identified. As a result, the approximators can be seen as fine-tuned versions of the code classifier, trained on the datasets of important and non-important features.

Fig. 4 also shows the training process to generate a preliminary interpreter, involving the following two substeps.

Step II.1: Training the two approximators while attempting to minimize L_s and L_u. When training the approximators, only the model parameters of the approximators are updated. The training goal is to minimize the loss of both approximators A_s and A_u, which is the sum of the cross-entropy losses of A_s and A_u:

  min_{A_s, A_u} (L_s + L_u),   (1)

where L_s is the cross-entropy loss of approximator A_s and L_u is the cross-entropy loss of approximator A_u. The loss of an approximator indicates the consistency between the predicted labels and the true labels.

Step II.2: Training the interpreter Siamese network while attempting to minimize L_s and L_diff, and maximize L_u. When training the interpreter Siamese network, only the model parameters of the interpreter are updated. The training goal is to minimize the loss of A_s and the discrepancy of the outputs between the two interpreters E and E′, and to maximize the loss of A_u:

  min_E (L_s − L_u + L_diff),   (2)

where L_diff is the discrepancy of the outputs between the two interpreters E and E′. The interpreter is trained so that (i) the loss of prediction using important features is minimized, (ii) the loss of prediction using non-important features is maximized, and (iii) the discrepancy of the outputs between the two interpreters is minimized to improve the robustness of the interpreter. The difference value L_diff in the interpreter Siamese network represents the distance between the important features identified by the interpreter for the original examples and those for the perturbed examples. We use the Jaccard distance [31] to measure the distance as follows:

  L_diff = 1 − Σ_{i,j} |E(x_i) ∩ E(x_{i,+j})| / (N · m · |E(x_i) ∪ E(x_{i,+j})|),   (3)

where N is the number of original code examples in the training set X, and m is the number of perturbed examples corresponding to each original example. The more robust the interpreter is, the higher the similarity between the important features for the original examples and for the perturbed examples, the smaller the Jaccard distance, and the smaller the corresponding difference value L_diff.

During the training process, Step II.1 and Step II.2 are iterated until both the interpreters and the approximators converge.

D. Optimizing the Preliminary Interpreter

The purpose of this step is to optimize the preliminary interpreter generated in Step II in both fidelity and robustness. The basic idea is to use mixup [29] for data augmentation to optimize the interpreter. There are two substeps. First, we generate virtual examples. For each code example x_i in training set X, x_{i′,+j} is a randomly selected perturbed example of x_{i′}, where x_{i′} is randomly selected from X and may or may not be identical to x_i. A virtual example is generated by mixing code examples and their corresponding labels. Specifically, the virtual example x_{i,mix} is generated by linear interpolation between the original example x_i and the perturbed example x_{i′,+j}, and the label y_{i,mix} of x_{i,mix} is likewise generated by linear interpolation between the label y_i of the original example x_i and the label y_{i′,+j} of the perturbed example x_{i′,+j}, as follows:

  x_{i,mix} = λ_i · x_i + (1 − λ_i) · x_{i′,+j}
  y_{i,mix} = λ_i · y_i + (1 − λ_i) · y_{i′,+j}   (4)

where the interpolation coefficient λ_i is sampled from the Beta distribution. Second, we update the interpreter's parameters based on the generated virtual examples. Since the output of the interpreter is the important features of code examples rather than the classification labels, it is impossible to train the interpreter individually for enhancement. Therefore, we use the approximators for joint optimization with the interpreter E. In this case, the input of the overall model is code examples and the output is the labels of code examples, which can be directly trained and optimized using the generated virtual examples. In the optimization process, the interpreter's parameters are updated while the approximators' parameters are kept unchanged.
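The interpolation in Eq. (4) is ordinary mixup applied in the feature space. A minimal NumPy sketch, with toy vectors standing in for real feature representations (not the paper's implementation), is:

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded only for reproducibility

def mixup_virtual_example(x_i, y_i, x_pert, y_pert, alpha=0.2):
    """Eq. (4) sketch: linearly interpolate a feature vector and its
    (one-hot) label with a perturbed example, with lambda ~ Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x_i + (1.0 - lam) * x_pert
    y_mix = lam * y_i + (1.0 - lam) * y_pert
    return x_mix, y_mix

# Feature-space vectors; the mix is "virtual" and need not correspond
# to any legitimate program.
x_i = np.array([1.0, 0.0, 3.0])
x_pert = np.array([1.0, 2.0, 3.0])
y_i = np.array([1.0, 0.0])      # one-hot label of the original example
y_pert = np.array([1.0, 0.0])   # label preserved by the code transformation

x_mix, y_mix = mixup_virtual_example(x_i, y_i, x_pert, y_pert)
```

Because the transformation preserves the label, y_mix here equals y_i for any λ; the mixed features fall on the segment between the two vectors.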
diff N ·m·|E(x i)∪E(x i,+j)| example u i ∈ U. We train an approximator A s in the same i,j fashion as how M is trained except that we only consider where N is the number of original code examples in the the important features, namely ∪ E(u ). Let M(u ) and ui∈U i i training set X, and m is the number of perturbed examples A (u ) respectively denote the prediction of classifier M s i corresponding to each original example. The more robust the and approximator A on example u . Then, interpreter E’s s i interpreter is, the higher the similarity between the important fidelity is defined as a pair (FS-M∈ [0,1], FS-A∈ [0,1]),where FS-M= |{ui∈U:M(ui)=M(E(ui))}| is the fraction of test attribution. In our experiment, we use a dataset from |U| examples that have the same predictions by M using all the Google Code Jam (GCJ) [34], [35], involving 1,632 features and by M only using the important features, and FS- C++ program files from 204 authors for 8 programming A= |{ui∈U:M(ui)=As(E(ui))}| is the fraction of test examples challenges and has been widely used in code authorship |U| that have the same predictions by M using all features and attribution task [30], [35]. This dataset is different from by A only using the important features [32]. Note that a the one used in [7], which is not available to us. s larger (FS-M, FS-A) indicates a higher fidelity, meaning that • TBCNN [2]. The method represents source code as an the important features are indeed important in terms of their AbstractSyntaxTree(AST),encodestheresultingASTas contribution to prediction. avector,usesatree-basedconvolutionallayertolearnthe For quantifying robustness against perturbations, we adopt featuresintheAST,andusesafully-connectedlayerand themetricproposedin[33],whichisbasedontheaverageJac- softmax layer for making predictions. 
In our experiment, card similarity between (i) the important features of an origi- we use the dataset of pedagogical programming Open nal example and (ii) the important features of the perturbed Judge system, involving 52,000 C programs for 104 example[31].Thesimilarityisdefinedoverinterval[0,1]such programming problems. This dataset is the same as the that a higher similarity indicates a more robust interpreter. one used in [2] because it is publicly available. For quantifying effectiveness in coping with out-of- We implement Robin in Python using Tensorflow [36] to distribution examples, we adopt the metric defined in [21]. retrofit the interpretability of DL-CAIS and TBCNN. We run Specifically, we take the number of features n over 8, and experiments on a computer with a RTX A6000 GPU and an incrementally and equally sample q features among all the Intel Xeon Gold 6226R CPU operating at 2.90 GHz. features, starting at q = n 8, i.e. q ∈ Q = {n 8,2 8n··· ,7 8n}. Interpreters for Comparison. We compare Robin with three For a given q, we use the same training set to learn the existing interpreters: LIME [13], LEMNA [25], and the one same kind of classifier M q by removing the q least im- proposed in [16], which would represent the state-of-the-art portant features (with respect to the interpreter), namely in interpretability of code classifier in feature-based post-hoc ∪ ui∈UE(cid:101)(u i), where E(cid:101)(u i) is the code example u i with localinterpretation.Morespecifically,LIME[13]makessmall q least important features (with respect to the interpreter) local perturbations to an example and obtains an interpretable removed, and the difference of accuracy between classi- linear regression model based on (i) the distance between the fier M and retrained classifier M q is defined as AD q = perturbedexampleandtheoriginalexampleand(ii)thechange |{ui∈U:M(ui)=M(E(cid:101)(ui))}|−|{ui∈U:M(ui)=Mq(E(cid:101)(ui))}|. The de- to the prediction. 
As such, LIME can be applied to explain |U| greetowhichtheinterpreterisimpactedbyout-of-distribution anyclassifier.LEMNA[25]approximateslocalnonlineardeci- inputs is the average AD for each q ∈Q. A smaller average sionboundariesforcomplexclassifiers,especiallyRNN-based q difference of accuracy indicates a reduced impact of out-of- ones with sequential properties, to provide interpretations in distribution inputs on the interpreter. securityapplications.Meanwhile,themethodin[16]interprets Correspondingtotheprecedingmetrics,ourexperimentsare vulnerabilitydetectorpredictionsbyperturbingfeaturevalues, driven by three Research Questions (RQs): identifyingimportantfeaturesbasedontheirimpactonpredic- • RQ1: What is Robin’s fidelity? (Section IV-C) tions, training a decision-tree with the important features, and • RQ2: What is Robin’s robustness against code perturba- extracting rules for interpretation. Additionally, we establish a tions? (Section IV-D) random feature selection method as a baseline. • RQ3: What is Robin’s effectiveness in coping with out- C. What Is Robin’s Fidelity? (RQ1) of-distribution examples? (Section IV-E) To determine the effectiveness of Robin on fidelity, we first B. Experimental Setup train two code classifiers DL-CAIS [7] and TBCNN [2] to be explained according to the settings of the literature, acheiving Implementation. We choose two deep learning-based code 88.24% accuracy for code authorship attribution and 96.72%
classifiers: DL-CAIS [7] for code authorship attribution and accuracy for code functionality classification. Then we apply TBCNN [2] for code functionality classification. We choose Robin and the interpreters for comparison to DL-CAIS and these two classifiers because they offer different code clas- TBCNN models. For Robin, we set the candidate number of sification tasks, use different kinds of code representations selected coding style attributes θ to 4 and the number of i and different neural networks, are representative of the state- important features selected by the interpreter k to 10. We of-the-art in code classification, and are open-sourced; these split the dataset randomly by 3:1:1 for training, validation, characteristicsarenecessarytotestRobin’swideapplicability. and testing for TBCNN and use 8-fold cross-validation for • DL-CAIS [7]. This classifier leverages a Term DL-CAIS when training the interpreter. Frequency-Inverse Document Frequency based approach Table I shows the fidelity evaluation results on DL-CAIS to extract lexical features from source code and a and TBCNN for different interpreters. 
TABLE I
FIDELITY EVALUATION RESULTS FOR DIFFERENT INTERPRETERS

Method           DL-CAIS FS-M(%)  DL-CAIS FS-A(%)  TBCNN FS-M(%)  TBCNN FS-A(%)
Baseline         1.96             2.45             10.29          9.23
LIME [13]        0.49             3.43             7.98           10.67
LEMNA [25]       0.49             1.96             5.48           8.27
Zou et al. [16]  33.33            69.60            18.75          31.63
Robin            13.73            92.65            20.67          83.65

We observe that LIME and LEMNA achieve an average FS-M of 0.49% and an average FS-A of 2.70% for DL-CAIS, and an average FS-M of 6.73% and an average FS-A of 9.47% for TBCNN, performing even worse than the baseline. This can be explained by the fact that LIME and LEMNA do not perform well in multi-class code classification tasks due to the more complex decision boundaries of the classifiers. We also observe that Robin significantly outperforms the other interpreters in terms of the FS-M and FS-A metrics, except Zou et al.'s method [16] in terms of FS-M on DL-CAIS. Robin achieves 23.05% higher FS-A at the cost of 19.60% lower FS-M. However, Zou et al.'s method [16] is much less robust to perturbed examples than Robin, which we will discuss in Section IV-D. Compared with the other interpreters, Robin achieves 6.11% higher FS-M and 67.22% higher FS-A on average, which indicates the high fidelity of Robin.

For the time cost of interpreters, Table II shows the average interpretation time (in milliseconds) for each code example.

TABLE II
AVERAGE INTERPRETATION TIME OF EACH CODE EXAMPLE FOR DIFFERENT INTERPRETERS

Method           DL-CAIS (ms)  TBCNN (ms)
Baseline         1.00          1.43
LIME [13]        61957.71      111484.20
LEMNA [25]       17448.35      43722.97
Zou et al. [16]  166298.43     243142.95
Robin            1.71          408.04

We observe that Robin significantly outperforms the other three interpreters in terms of time cost. Note that while the baseline is less time-consuming, it has much lower fidelity and robustness than Robin (see Section IV-D). The other interpreters are significantly more time-costly than Robin because they are optimized independently on a single code example and require a new perturbation and analysis each time a code example is interpreted, while Robin directly constructs an interpreter that applies to all code examples and automatically identifies the important features by simply feeding code examples into the interpreter model. Robin achieves a 99.75% reduction in time cost compared to the other three interpreters on average.

Ablation Analysis. Robin has two modules to improve the interpreter, i.e., adding L_diff to the loss of the interpreter (denoted as "Factor1") and data augmentation using mixup (denoted as "Factor2"). To show the contribution of each module in Robin to the effectiveness of fidelity, we conduct an ablation study. We exclude Factor1, Factor2, and both Factor1 and Factor2 to generate three variants of Robin, respectively, and compare Robin with the three variants in terms of fidelity. Table III summarizes the fidelity evaluation results of Robin and its variants on DL-CAIS and TBCNN.

TABLE III
ABLATION ANALYSIS RESULTS OF FIDELITY EVALUATION (UNIT: %)

Method               DL-CAIS FS-M  DL-CAIS FS-A  TBCNN FS-M  TBCNN FS-A
Robin                13.73         92.65         20.67       83.65
Robin w/o Factor1    12.25         90.69         20.19       81.44
Robin w/o Factor2    11.76         90.20         19.90       82.12
Robin w/o Factor1&2  12.25         87.75         18.94       80.77

We observe that removing Factor1, Factor2, or both Factor1 and Factor2 reduces FS-M by 1.48-1.97% and FS-A by 1.96-4.90% for DL-CAIS, and reduces FS-M by 0.48-1.73% and FS-A by 1.53-2.88% for TBCNN. Robin without both Factor1 and Factor2 achieves the worst results. This indicates the significance of Factor1 and Factor2 for the fidelity of Robin.

Effectiveness of Fidelity When Applied to Different Neural Network Structures. To demonstrate the applicability of Robin to various neural network structures, we take DL-CAIS for instance: we replace the Recurrent Neural Network (RNN) layers of DL-CAIS with Convolutional Neural Network (CNN) layers (denoted as "DL-CAIS-CNN") and with Multi-Layer Perceptron (MLP) layers (denoted as "DL-CAIS-MLP"), respectively. We first train two code authorship attribution models DL-CAIS-CNN and DL-CAIS-MLP to be explained according to the settings of DL-CAIS [7]. We obtain a DL-CAIS-CNN with an accuracy of 91.18% and a DL-CAIS-MLP with an accuracy of 90.69% for code authorship attribution. Then we apply Robin and the other interpreters for comparison to DL-CAIS-CNN and DL-CAIS-MLP, respectively. Table IV shows the fidelity evaluation results for different interpreters on DL-CAIS with different neural networks.

TABLE IV
FIDELITY EVALUATION RESULTS FOR DIFFERENT INTERPRETERS ON DL-CAIS WITH DIFFERENT NEURAL NETWORK STRUCTURES (UNIT: %)

                 DL-CAIS        DL-CAIS-CNN    DL-CAIS-MLP
Method           FS-M   FS-A    FS-M   FS-A    FS-M   FS-A
Baseline         1.96   2.45    2.94   3.92    4.41   3.43
LIME [13]        0.49   3.43    4.90   6.37    3.43   6.86
LEMNA [25]       0.49   1.96    1.47   1.47    0.98   1.47
Zou et al. [16]  33.33  69.60   40.50  54.90   20.09  17.65
Robin            13.73  92.65   40.69  99.51   63.24  97.06

For DL-CAIS-CNN and DL-CAIS-MLP, Robin achieves a 40.07% higher FS-M and an 83.50% higher FS-A on average than the other three interpreters, which shows the effectiveness of Robin when applied to different neural network structures.

Usefulness of Robin in Understanding Reasons for Classification. To illustrate the usefulness of Robin in this perspective, we consider a scenario of code functionality classification via TBCNN [2]. The code example in Fig. 5 is predicted by the classifier as the functionality class "finding the number of factors". The interpreter generated by Robin extracts five features of the code example, which are deemed most relevant with respect to the prediction result and are highlighted via red boxes in Fig. 5. These five features are related to the remainder, division, and counting operators. By analyzing these five features, it becomes clear that the code example is predicted as "finding the number of factors" because the example looks for, and counts, the number of integers that can divide the input integer.

    int change (int a, int p) {
        int i, count = 0;
        for (i = p; i < a; i++) {
            if (a % i == 0 && a / i >= i) {
                count++;
                int k, t;
                k = (int) sqrt(a / i);
                for (t = 2; t <= k; t++) {
                    if ((a / i) % t == 0) {
                        count += change(a / i, i);
                        break;
                    }
                }
            }
        }
        return count;
    }

    int main() {
        int n, i, a;
        cin >> n;
        for (i = 1; i <= n; i++) {
            int total = 0;
            cin >> a;
            total += change(a, 2);
            cout << total + 1 << endl;
        }
        return 0;
    }

Prediction class: finding the number of factors. Interpretation of the classification: the classifier predicts the function of the example as "finding the number of factors" because the example looks for and counts the number of integers that divide the input integer exactly.

Fig. 5. The interpretation of a specific instance of code classification in the context of code functionality classification, where red boxes highlight the 5 most important features.

Insight 1: Robin achieves a 6.11% higher FS-M and a 67.22% higher FS-A on average than the three interpreters we considered.

TABLE V
ROBUSTNESS EVALUATION RESULTS FOR DIFFERENT INTERPRETERS

Method           DL-CAIS  TBCNN
Baseline         0.0121   0.0348
LIME [13]        0.0592   0.0962
LEMNA [25]       0.0157   0.0475
Zou et al. [16]  0.3681   0.3852
Robin            0.9275   0.5269

TABLE VI
ABLATION ANALYSIS RESULTS OF ROBUSTNESS EVALUATION

Method               DL-CAIS  TBCNN
Robin                0.9275   0.5269
Robin w/o Factor1    0.9181   0.5194
Robin w/o Factor2    0.9174   0.5025
Robin w/o Factor1&2  0.9073   0.4931
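The robustness comparisons that follow rest on the Jaccard similarity [31] between the set of important features identified on an original code example and the set identified on its semantics-preserving perturbation. A minimal sketch (illustrative only, not the authors' implementation; the example feature sets are hypothetical):

```python
def jaccard(features_a, features_b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two feature sets."""
    a, b = set(features_a), set(features_b)
    if not a and not b:
        return 1.0  # two empty sets are conventionally identical
    return len(a & b) / len(a | b)

# Hypothetical important features extracted from an original example
# and from its semantics-preserving transformation.
original = {"a % i", "a / i", "count++", "sqrt", "break"}
perturbed = {"a % i", "a / i", "count++", "sqrt", "return"}

similarity = jaccard(original, perturbed)  # 4 shared out of 6 distinct features
```

A robust interpreter should keep this similarity high: the transformation preserves the program's semantics, so the features driving the prediction should barely change.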
D. What Is Robin's Robustness? (RQ2)

To evaluate the robustness of Robin against perturbations, we generate perturbed examples by applying semantics-preserving code transformations to the code examples in the test set and filter out the perturbed examples that change the predicted labels of the classifier. We use these perturbed examples to test the robustness of the interpreters.

Table V summarizes the robustness evaluation results for different interpreters. We observe that the robustness of LIME and LEMNA on the code classifiers is very poor and only slightly higher than the baseline. This is caused by the following: LIME and LEMNA suffer from uncertainty, thus there may be differences between the important features obtained when the same code example is interpreted multiple times. We also observe that the robustness of Zou et al.'s method [16] is higher than that of LIME and LEMNA, but still much lower than that of Robin. The average Jaccard similarity between the important features of the original examples identified by Robin and the important features of the adversarial examples is 1.94x higher than the state-of-the-art method [16] and 15.87x higher on average than the three interpreters we evaluated for DL-CAIS and TBCNN. This indicates that Robin is insensitive to semantics-preserving code transformations and has higher robustness against perturbations.

Ablation Analysis. To show the contribution of Factor1 and Factor2 in Robin to the robustness, we conduct an ablation study. We exclude Factor1, Factor2, and both Factor1 and Factor2 to generate three variants of Robin, respectively, and compare Robin with the three variants in terms of robustness for the number of important features k = 10. Table VI summarizes the robustness evaluation results of Robin and its three variants on DL-CAIS and TBCNN. We observe that Robin achieves the highest robustness, and removing Factor1 and/or Factor2 can decrease its robustness, which indicates the significance of Factor1 and Factor2 to Robin's robustness.

To show the impact of the number of important features k on the robustness, we take DL-CAIS for example to compare Robin and its three variants in terms of the robustness of the interpreters based on k (e.g., 10, 20, 30, 40, and 50) important features, respectively. As shown in Table VII, the robustness decreases as k increases. This can be explained by the following: as k increases, less important features are added to the selected important features; these less important features are difficult for the interpreter to recognize due to their less prominent contribution to the prediction, and thus exhibit worse robustness against perturbations. We also observe that (i) Robin achieves the best robustness on DL-CAIS for all k values, and (ii) removing Factor1 or Factor2 or both of them from Robin can decrease the robustness of Robin, which indicates the significance of Factor1 and Factor2 for the robustness of Robin.

TABLE VII
ABLATION ANALYSIS RESULTS OF ROBUSTNESS EVALUATION ON DL-CAIS WITH DIFFERENT k VALUES

Method               k=10    k=20    k=30    k=40    k=50
Robin                0.9275  0.9129  0.8962  0.8939  0.8835
Robin w/o Factor1    0.9181  0.8981  0.8932  0.8819  0.8808
Robin w/o Factor2    0.9174  0.8969  0.8793  0.8799  0.8784
Robin w/o Factor1&2  0.9073  0.8943  0.8744  0.8731  0.8640

Robustness Evaluation When Applied to Different Neural Network Structures. To show the robustness of Robin when applied to different neural network structures, we adopt the DL-CAIS, DL-CAIS-CNN, and DL-CAIS-MLP models we have trained in Section IV-C for interpretation. For DL-CAIS-CNN and DL-CAIS-MLP, we generate perturbed examples by applying the semantics-preserving code transformations to the code examples in the test set and filter out the perturbed examples that change the prediction labels of the classifier. Table VIII shows the robustness evaluation results for different interpreters on DL-CAIS with different neural networks. For DL-CAIS-CNN and DL-CAIS-MLP, Robin achieves a 10.05x higher robustness on average, compared with the other three interpreters. Though Robin achieves different robustness for different neural network structures, Robin achieves the highest robustness among all interpreters we evaluated.

TABLE VIII
ROBUSTNESS EVALUATION RESULTS OF DIFFERENT INTERPRETERS ON DL-CAIS WITH DIFFERENT NEURAL NETWORK STRUCTURES

Method           DL-CAIS  DL-CAIS-CNN  DL-CAIS-MLP
Baseline         0.0121   0.0121       0.0121
LIME [13]        0.0592   0.2651       0.0592
LEMNA [25]       0.0157   0.0167       0.0157
Zou et al. [16]  0.3681   0.3812       0.3059
Robin            0.9275   0.4922       0.3298

Insight 2: Robin achieves a 1.94x higher robustness than the state-of-the-art method [16] and a 15.87x higher robustness on average than the three interpreters we evaluated.

E. What Is Robin's Effectiveness in Coping with Out-of-Distribution Examples? (RQ3)

To demonstrate the effectiveness of Robin in coping with out-of-distribution examples, we conduct experiments with the number of removed non-important features q ∈ Q = {100, 200, 300, 400, 500, 600, 700} for DL-CAIS and q ∈ Q = {25, 50, 75, 100, 125, 150, 175} for TBCNN, according to the number of all features. Table IX shows the difference of accuracy AD_q between the classifier and the retrained classifier with q non-important features removed.

TABLE IX
THE DIFFERENCE OF ACCURACY AD_q BETWEEN CLASSIFIER AND RETRAINED CLASSIFIER WITH q NON-IMPORTANT FEATURES REMOVED

DL-CAIS
Method    q=100   q=200   q=300   q=400   q=500   q=600   q=700   Average
Baseline  0.1029  0.1029  0.1274  0.0979  0.0980  0.0735  0.0931  0.0994
LEMNA     0.0834  0.0784  0.1030  0.0637  0.3579  0.2402  0.0245  0.1359
Robin     0.0736  0.0834  0.0785  0.0883  0.1030  0.1128  0.1814  0.1030

TBCNN
Method    q=25    q=50    q=75    q=100   q=125   q=150   q=175   Average
Baseline  0.0074  0.0042  0.0204  0.0138  0.0043  0.0355  0.0358  0.0173
LEMNA     0.0105  0.0159  0.0241  0.0227  0.1804  0.0569  0.0445  0.0507
Robin     0.0299  0.0037  0.0184  0.0156  0.0200  0.0029  0.0145  0.0150

TABLE X
THE DIFFERENCE OF ACCURACY AD_q BETWEEN CLASSIFIER AND RETRAINED CLASSIFIER WITH q NON-IMPORTANT FEATURES REMOVED ON DL-CAIS WITH DIFFERENT NEURAL NETWORK STRUCTURES

DL-CAIS
Method    q=100   q=200   q=300   q=400   q=500   q=600   q=700   Average
Baseline  0.1029  0.1029  0.1274  0.0979  0.0980  0.0735  0.0931  0.0994
LEMNA     0.0834  0.0784  0.1030  0.0637  0.3579  0.2402  0.0245  0.1359
Robin     0.0736  0.0834  0.0785  0.0883  0.1030  0.1128  0.1814  0.1030

DL-CAIS-CNN
Method    q=100   q=200   q=300   q=400   q=500   q=600   q=700   Average
Baseline  0.0147  0.0049  0.0147  0.0049  0.0049  0.0196  0.0818  0.0207
LEMNA     0.0490  0.0396  0.0245  0.0195  0.2745  0.1618  0.0196  0.0840
Robin     0.0196  0.0049  0.0095  0.0196  0.0294  0.0294  0.0735  0.0266

DL-CAIS-MLP
Method    q=100   q=200   q=300   q=400   q=500   q=600   q=700   Average
Baseline  0.0197  0.0196  0.0588  0.0147  0.0049  0.0687  0.0765  0.0376
LEMNA     0.0049  0.0047  0.0196  0.0147  0.2598  0.2010  0.0490  0.0791
Robin     0.0098  0.0049  0.0049  0.0196  0.0196  0.0490  0.0490  0.0224
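The AD_q values tabulated above can be sketched as follows (a generic sketch under the assumption, consistent with the surrounding text, that AD_q is the absolute accuracy gap between the original classifier and the classifier retrained with q non-important features removed; the accuracy values are hypothetical):

```python
def ad_q(acc_original, acc_retrained):
    """Difference of accuracy for one value of q."""
    return abs(acc_original - acc_retrained)

def average_ad(acc_original, acc_retrained_by_q):
    """Average AD_q over all q in Q; smaller means the interpreter's
    non-important features mattered less, i.e., less OOD impact."""
    gaps = [ad_q(acc_original, acc) for acc in acc_retrained_by_q.values()]
    return sum(gaps) / len(gaps)

# Hypothetical accuracies of classifiers retrained after removing
# q non-important features, for a few q in Q.
retrained = {100: 0.87, 200: 0.86, 300: 0.88, 400: 0.85}
avg = average_ad(0.88, retrained)
```

A method whose removed "non-important" features were genuinely unimportant leaves the retrained accuracy close to the original, yielding a small average AD_q.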
We observe that the average difference of accuracy of Robin and the baseline method is very small, indicating that they are less affected by out-of-distribution examples. This can be explained by the fact that neither of these methods employs the change in the classifier's accuracy to assess the importance of features. Although the baseline method outperforms Robin on DL-CAIS, it has much lower fidelity and robustness than Robin, as discussed in Section IV-C and Section IV-D. In contrast, the average difference of accuracy achieved by LEMNA is notably larger than those of Robin and the baseline method, because LEMNA relies on the changes in the classifier's accuracy to calculate the importance of features. Robin achieves a 24.21% smaller average difference of accuracy for DL-CAIS and a 70.41% smaller average difference of accuracy for TBCNN than LEMNA, indicating that Robin is 47.31% less affected by the out-of-distribution examples than LEMNA on average. Robin is minimally affected by the out-of-distribution examples, which is attributable to introducing the prediction accuracy of the retrained classifier to evaluate the importance of features.

Effectiveness in Coping with Out-of-Distribution Examples When Applied to Different Neural Network Structures. To show the effectiveness in coping with out-of-distribution examples when applied to different neural network structures, we adopt the DL-CAIS, DL-CAIS-CNN, and DL-CAIS-MLP models we have trained in Section IV-C for interpretation. Table X describes the difference of accuracy AD_q between the given classifier and the retrained classifier after removing q non-important features while using different neural network structures. For DL-CAIS-CNN and DL-CAIS-MLP, Robin is 70.00% less affected by out-of-distribution examples than LEMNA on average, where the average is taken over DL-CAIS-CNN and DL-CAIS-MLP; this shows the effectiveness of Robin in coping with out-of-distribution examples when applied to different neural network structures.

Insight 3: Robin is 47.31% less affected by out-of-distribution examples than LEMNA.

V. LIMITATIONS

The present study has limitations, which represent exciting open problems for future studies. First, our study does not evaluate the effectiveness of Robin on graph-based code classifiers and pre-training models like CodeT5 [37] and CodeBERT [38]. The unique characteristics of these models pose challenges that require further investigation, particularly in the context of applying Robin to classifiers with more complex model structures. Second, Robin can identify the most important features but cannot give further explanations why a particular prediction is made.
To our knowledge, this kind of desired further explanation is beyond the reach of the current technology in deep learning interpretability. Third, Robin can identify the most important features that lead to the particular prediction of a given example, but cannot tell which training examples in the training set lead the code classifier to that particular prediction. Achieving this type of training-example traceability is important because it may help achieve better interpretability.

VI. RELATED WORK

Prior Studies on Deep Learning-Based Code Classifiers. We divide these models into three categories according to the code representation they use: token-based [5], [7], [39] vs. tree-based [4], [8], [40] vs. graph-based [10], [15], [41]. Token-based models represent a piece of code as a sequence of individual tokens, while only performing basic lexical analysis. These models are mainly used for code authorship attribution and vulnerability detection. Tree-based models represent a piece of code as a syntax tree, while incorporating both lexical and syntax analysis. These models are widely used for code authorship attribution, code function classification, and vulnerability detection. Graph-based models represent a piece of code as a directed graph, where a node represents an expression or statement and an edge represents a control flow, control dependence, or data dependence. These models are suitable for complex code structures, such as those arising in vulnerability detection. We have shown how Robin can offer interpretability to token- and tree-based code classifiers [2], [7], but not to graph-based models, as discussed in the preceding section.

Prior Studies on Interpretation Methods for Deep Learning Models. These studies are often divided into two approaches: ante-hoc [11], [12] vs. post-hoc [13], [42]–[47], where the latter can be further divided into global (i.e., seeking model-level interpretability) [42], [43] vs. local (i.e., seeking example-level interpretability) [13], [44]–[47] interpretation methods. In the context of code classification, the ante-hoc approach leverages the attention weight matrix [14], [15]. There is currently no post-hoc approach aiming at global interpretation in code classification; whereas the post-hoc approach aiming at local interpretation mainly leverages perturbation-based feature saliency [16], [17] and program reduction [18], [19]. Since ante-hoc interpretation methods cannot provide interpretations for given classifiers, we will not discuss them any further. On the other hand, existing post-hoc methods are not robust (Section IV-D); in particular, existing methods for local interpretation suffer from the problem of out-of-distribution examples [21], [22]. Robin addresses both the robustness issue and the out-of-distribution issue in the post-hoc approach to local interpretation, by introducing approximators to mitigate out-of-distribution examples and using adversarial training and data augmentation to improve robustness.

Prior Studies on Improving Robustness of Interpretation Methods. These studies have been conducted in application domains other than code classification. In the image domain, one idea is to aggregate multiple interpretations [24], [48], and another idea is to smooth the model's decision surface [47], [49]. In the text domain, one idea is to eliminate the uncertainties that are present in the existing interpretation methods [50], [51], and another idea is to introduce continuous small perturbations to the interpretation and use adversarial training for robustness enhancement [27], [28]. To our knowledge, we are the first to investigate how to achieve robust interpretability in the code classification domain, while noting that none of the aforementioned methods that are effective in the other domains can be adapted to the code classification domain. This is because program code must follow strict lexical and syntactic requirements, meaning that perturbed representations may not be mapped back to real-world code examples, which is a general challenge when dealing with programs. This justifies why Robin initiates the study of a new and important problem.

VII. CONCLUSION

We have presented Robin, a robust interpreter for deep learning-based code classifiers for tasks such as code authorship attribution and code function classification. The key idea behind Robin is to (i) use approximators to mitigate the out-of-distribution example problem, and (ii) use adversarial training and data augmentation to improve interpreter robustness, which is different from the widely-adopted idea of using adversarial training to achieve the classifier's (rather than the interpreter's) robustness. Experimental results show that Robin achieves a high fidelity and a high robustness, while mitigating the effect of out-of-distribution examples caused by perturbations. The limitations of Robin serve as interesting open problems for future research.

ACKNOWLEDGMENTS

We thank the anonymous reviewers for their comments which guided us in improving the paper. The authors affiliated with Huazhong University of Science and Technology were supported by the National Natural Science Foundation of China under Grant No. 62272187. Shouhuai Xu was supported in part by the National Science Foundation under Grants #2122631, #2115134, and #1910488 as well as Colorado State Bill 18-086. Any opinions, findings, conclusions or recommendations expressed in this work are those of the authors and do not reflect the views of the funding agencies in any sense.

REFERENCES

[1] J. Zhang, X. Wang, H. Zhang, H. Sun, K. Wang, and X. Liu, "A novel neural source code representation based on abstract syntax tree," in Proceedings of the 41st International Conference on Software Engineering (ICSE), QC, Canada. IEEE, 2019, pp. 783–794.
[2] L. Mou, G. Li, L. Zhang, T. Wang, and Z. Jin, "Convolutional neural networks over tree structures for programming language processing," in Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), Phoenix, Arizona, USA. AAAI Press, 2016, pp. 1287–1293.
[18] S. Suneja, Y. Zheng, Y. Zhuang, J. A. Laredo, and A. Morari, "Probing model signal-awareness via prediction-preserving input minimization," in Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), Athens, Greece, 2021, pp. 945–955.
[19] M. R. I. Rabin, V. J. Hellendoorn, and M. A. Alipour, "Understanding neural code intelligence through program simplification," in Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), Athens, Greece, 2021, pp. 441–452.
[20] A. Zeller and R. Hildebrandt, "Simplifying and isolating failure-inducing input," IEEE Transactions on Software Engineering, vol. 28, no. 2, pp. 183–200, 2002.
[3] A. Caliskan-Islam, R. Harang, A. Liu, A. Narayanan, C. Voss, F. Yamaguchi, and R. Greenstadt, "De-anonymizing programmers via code stylometry," in Proceedings of the 24th USENIX Security Symposium (USENIX Security), Washington, D.C., USA, 2015, pp. 255–270.
[4] B. Alsulami, E. Dauber, R. Harang, S. Mancoridis, and R. Greenstadt, "Source code authorship attribution using long short-term memory based networks," in Proceedings of the 22nd European Symposium on Research in Computer Security (ESORICS), Oslo, Norway, 2017, pp. 65–82.
[5] X. Yang, G. Xu, Q. Li, Y. Guo, and M. Zhang, "Authorship attribution of source code by using back propagation neural network based on particle swarm optimization," PLoS ONE, vol. 12, no. 11, p. e0187204, 2017.
[6] E. Bogomolov, V. Kovalenko, Y. Rebryk, A. Bacchelli, and T. Bryksin, "Authorship attribution of source code: A language-agnostic approach and applicability in software engineering," in Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), Athens, Greece, 2021, pp. 932–944.
[7] M. Abuhamad, T. AbuHmed, A. Mohaisen, and D. Nyang, "Large-scale and language-oblivious code authorship identification," in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS), Toronto, ON, Canada, 2018, pp. 101–114.
[8] G. Lin, J. Zhang, W. Luo, L. Pan, and Y. Xiang, "Poster: Vulnerability discovery with function representation learning from unlabeled projects," in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS), Dallas, TX, USA, 2017, pp. 2539–2541.
[9] Z. Li, D. Zou, S. Xu, X. Ou, H. Jin, S. Wang, Z. Deng, and Y. Zhong, "VulDeePecker: A deep learning-based system for vulnerability detection," in Proceedings of the 25th Annual Network and Distributed System Security Symposium (NDSS), San Diego, California, USA, 2018, pp. 1–15.
[10] Z. Li, D. Zou, S. Xu, H. Jin, Y. Zhu, and Z. Chen, "SySeVR: A framework for using deep learning to detect software vulnerabilities," IEEE Transactions on Dependable and Secure Computing, vol. 19, no. 4, pp. 2244–2258, 2022.
[11] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," in Proceedings of Annual Conference on Neural Information Processing Systems (NeurIPS), Long Beach, CA, USA, 2017, pp. 5998–6008.
[12] E. Choi, M. T. Bahadori, J. Sun, J. Kulas, A. Schuetz, and W. Stewart, "Retain: An interpretable predictive model for healthcare using reverse time attention mechanism," in Proceedings of Annual Conference on Neural Information Processing Systems (NeurIPS), Barcelona, Spain, 2016, pp. 3504–3512.
[13] M. T. Ribeiro, S. Singh, and C. Guestrin, "'Why should I trust you?' Explaining the predictions of any classifier," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), San Francisco, CA, USA, 2016, pp. 1135–1144.
[14] N. D. Bui, Y. Yu, and L. Jiang, "Autofocus: Interpreting attention-based neural networks by code perturbation," in Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), San Diego, CA, USA, 2019, pp. 38–41.
[15] D. Zou, Y. Hu, W. Li, Y. Wu, H. Zhao, and H. Jin, "mVulPreter: A multi-granularity vulnerability detection system with interpretations," IEEE Transactions on Dependable and Secure Computing, pp. 1–12, 2022.
[16] D. Zou, Y. Zhu, S. Xu, Z. Li, H. Jin, and H. Ye, "Interpreting deep learning-based vulnerability detector predictions based on heuristic searching," ACM Transactions on Software Engineering and Methodology, vol. 30, no. 2, pp. 1–31, 2021.
[17] J. Cito, I. Dillig, V. Murali, and S. Chandra, "Counterfactual explanations for models of code," in Proceedings of the 44th IEEE/ACM International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), Pittsburgh, PA, USA, 2022, pp. 125–134.
[21] S. Hooker, D. Erhan, P.-J. Kindermans, and B. Kim, "A benchmark for interpretability methods in deep neural networks," in Proceedings of Annual Conference on Neural Information Processing Systems (NeurIPS), Vancouver, BC, Canada, 2019, pp. 9734–9745.
[22] L. Brocki and N. C. Chung, "Evaluation of interpretability methods and perturbation artifacts in deep neural networks," arXiv preprint arXiv:2203.02928, 2022.
[23] M. Bajaj, L. Chu, Z. Y. Xue, J. Pei, L. Wang, P. C.-H. Lam, and Y. Zhang, "Robust counterfactual explanations on graph neural networks," in Proceedings of Annual Conference on Neural Information Processing Systems (NeurIPS), Virtual Event, 2021, pp. 5644–5655.
[24] X. Zhang, N. Wang, H. Shen, S. Ji, X. Luo, and T. Wang, "Interpretable deep learning under fire," in Proceedings of the 29th USENIX Security Symposium (USENIX Security), Virtual Event, 2020, pp. 1659–1676.
[25] W. Guo, D. Mu, J. Xu, P. Su, G. Wang, and X. Xing, "LEMNA: Explaining deep learning based security applications," in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS), Toronto, ON, Canada, 2018, pp. 364–379.
[26] J. Chen, L. Song, M. Wainwright, and M. Jordan, "Learning to explain: An information-theoretic perspective on model interpretation," in Proceedings of the 35th International Conference on Machine Learning (ICML), Stockholmsmässan, Stockholm, Sweden, 2018, pp. 883–892.
[27] H. Lakkaraju, N. Arsov, and O. Bastani, "Robust and stable black box explanations," in Proceedings of the 37th International Conference on Machine Learning (ICML), Virtual Event, 2020, pp. 5628–5638.
[28] E. La Malfa, A. Zbrzezny, R. Michelmore, N. Paoletti, and M. Kwiatkowska, "On guaranteed optimal robust explanations for NLP models," in Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI), Virtual Event, 2021, pp. 2658–2665.
[29] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, "mixup: Beyond empirical risk minimization," in Proceedings of the 6th International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 2018.
[30] Z. Li, G. Chen, C. Chen, Y. Zou, and S. Xu, "RopGen: Towards robust code authorship attribution via automatic coding style transformation," in Proceedings of the 44th International Conference on Software Engineering (ICSE), Pittsburgh, PA, USA, 2022, pp. 1906–1918.
[31] M. Levandowsky and D. Winter, "Distance between sets," Nature, vol. 234, no. 5323, pp. 34–35, 1971.
[32] J. Liang, B. Bai, Y. Cao, K. Bai, and F. Wang, "Adversarial infidelity learning for model interpretation," in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), Virtual Event, 2020, pp. 286–296.
[33] M. B. Zafar, M. Donini, D. Slack, C. Archambeau, S. Das, and K. Kenthapadi, "On the lack of robust interpretability of neural text classifiers," in Proceedings of the Association for Computational Linguistics Findings (ACL/IJCNLP), Virtual Event, 2021, pp. 3730–3740.
[34] https://codingcompetitions.withgoogle.com/codejam, 2022.
[35] E. Quiring, A. Maier, and K. Rieck, "Misleading authorship attribution of source code using adversarial learning," in Proceedings of the 28th USENIX Security Symposium, Santa Clara, CA, USA. USENIX Association, 2019, pp. 479–496.
[36] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. A. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, "TensorFlow: A system for large-scale machine learning," in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI), Savannah, GA, USA. USENIX Association, 2016, pp. 265–283.
[37] Y. Wang, W. Wang, S. Joty, and S. C. Hoi, "CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation," in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), Virtual Event, 2021, pp. 8696–8708.
[38] Z. Feng, D. Guo, D. Tang, N. Duan, X. Feng, M. Gong, L. Shou, B. Qin, T. Liu, D. Jiang, and M. Zhou, "CodeBERT: A pre-trained model for programming and natural languages," in Proceedings of Findings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Virtual Event, 2020, pp. 1536–1547.
[39] R. Russell, L. Kim, L. Hamilton, T. Lazovich, J. Harer, O. Ozdemir, P. Ellingwood, and M. McConley, "Automated vulnerability detection in source code using deep representation learning," in Proceedings of the 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA. IEEE, 2018, pp. 757–762.
[40] U. Alon, M. Zilberstein, O. Levy, and E. Yahav, "code2vec: Learning distributed representations of code," Proc. ACM Program. Lang., vol. 3, no. POPL, pp. 1–29, 2019.
[41] M. Allamanis, M. Brockschmidt, and M. Khademi, "Learning to represent programs with graphs," in Proceedings of the 6th International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 2018.
[42] N. Puri, P. Gupta, P. Agarwal, S. Verma, and B. Krishnamurthy, "MAGIX: Model agnostic globally interpretable explanations," arXiv preprint arXiv:1706.07160, 2017.
[43] J. Wang, L. Gou, W. Zhang, H. Yang, and H.-W. Shen, "DeepVID: Deep visual interpretation and diagnosis for image classifiers via knowledge distillation," IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 6, pp. 2168–2180, 2019.
[44] K. Simonyan, A. Vedaldi, and A. Zisserman, "Deep inside convolutional networks: Visualising image classification models and saliency maps," in Proceedings of the 2nd International Conference on Learning Representations (ICLR), Banff, AB, Canada, 2014.
[45] S. M. Lundberg and S.-I. Lee, "A unified approach to interpreting model predictions," in Proceedings of Annual Conference on Neural Information Processing Systems (NeurIPS), Long Beach, CA, USA, 2017, pp. 4765–4774.
[46] P. Schwab and W. Karlen, "CXPlain: Causal explanations for model interpretation under uncertainty," in Proceedings of Annual Conference on Neural Information Processing Systems (NeurIPS), Vancouver, BC, Canada, 2019, pp. 10220–10230.
[47] D. Smilkov, N. Thorat, B. Kim, F. Viégas, and M. Wattenberg, "SmoothGrad: Removing noise by adding noise," arXiv preprint arXiv:1706.03825, 2017.
[48] L. Rieger and L. K. Hansen, "A simple defense against adversarial attacks on heatmap explanations," in Proceedings of the 2020 Workshop on Human Interpretability in Machine Learning (WHI), Virtual Event, 2020.
[49] Z. Wang, H. Wang, S. Ramkumar, P. Mardziel, M. Fredrikson, and A. Datta, "Smoothed geometry for robust attribution," in Proceedings of Annual Conference on Neural Information Processing Systems (NeurIPS), Virtual Event, 2020, pp. 13623–13634.
[50] X. Zhao, W. Huang, X. Huang, V. Robu, and D. Flynn, "BayLIME: Bayesian local interpretable model-agnostic explanations," in Proceedings of the 37th Conference on Uncertainty in Artificial Intelligence (UAI), Virtual Event, 2021, pp. 887–896.
[51] Z. Zhou, G. Hooker, and F. Wang, "S-LIME: Stabilized-lime for model explanation," in Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (KDD), Virtual Event, 2021, pp. 2429–2438.
arXiv:2309.14677

XGV-BERT: Leveraging Contextualized Language Model and Graph Neural Network for Efficient Software Vulnerability Detection

Vu Le Anh Quan (a,b), Chau Thuan Phat (a,b), Kiet Van Nguyen (a,b), Phan The Duy (a,b), Van-Hau Pham (a,b)

(a) Information Security Laboratory, University of Information Technology, Ho Chi Minh City, Vietnam
(b) Vietnam National University Ho Chi Minh City, Ho Chi Minh City, Vietnam

Email addresses: 19520233@uit.edu.vn (Vu Le Anh Quan), 19520827@gm.uit.edu.vn (Chau Thuan Phat), kietnv@uit.edu.vn (Kiet Van Nguyen), duypt@uit.edu.vn (Phan The Duy), haupv@uit.edu.vn (Van-Hau Pham)

Preprint submitted to Elsevier, September 27, 2023

Abstract

With the advancement of deep learning (DL) in various fields, there have been many attempts to reveal software vulnerabilities by data-driven approaches. Nonetheless, existing works lack an effective representation that can retain the non-sequential semantic characteristics and contextual relationships of source code attributes. Hence, in this work we propose XGV-BERT, a framework that combines the pre-trained CodeBERT model with a Graph Convolutional Network (GCN) to detect software vulnerabilities. By jointly training the CodeBERT and GCN modules within XGV-BERT, the proposed model leverages the advantages of large-scale pre-training on vast raw data together with transfer learning, learning representations for the training data through graph convolution. The results demonstrate that XGV-BERT significantly improves vulnerability detection accuracy compared to two existing methods, VulDeePecker and SySeVR: on the VulDeePecker dataset, XGV-BERT achieves an F1-score of 97.5%, significantly outperforming VulDeePecker's 78.3%; on the SySeVR dataset, it achieves an F1-score of 95.5%, surpassing SySeVR's 83.5%.

Keywords: Deep Learning, Software Security, Vulnerability Detection, Graph Neural Networks, NLP

1. Introduction

Recently, as technology continues its rapid evolution, the software development landscape has witnessed an exponential surge. While these software innovations offer unprecedented convenience, they also bring a looming problem of software vulnerabilities, formidable adversaries to the seamless functioning of software systems [1]. The direct and indirect global economic toll inflicted by these vulnerabilities has surpassed billions of dollars, making them an issue of paramount concern. The vast majority of software applications harbor vulnerabilities of various types, notably buffer overflow vulnerabilities such as CVE-2019-8917, library/API function call vulnerabilities such as CVE-2016-10033, and array usage vulnerabilities such as CVE-2020-12345, all extensively cataloged in the Common Vulnerabilities and Exposures (CVE) database [2]. The longer a vulnerability persists unaddressed, the more inviting a target it becomes for malicious actors, exposing companies and organizations to substantial damages [3]. Consequently, the quest for automated detection of software vulnerabilities within stringent time frames stands at the forefront of current research [4], [5], [6].

On the other hand, deep learning (DL) technology can provide the capability for more accurate automatic vulnerability detection [7], [8], [9], [10]. With the continuous innovation and development of DL, significant advances have been made in Natural Language Processing (NLP): models such as GPT [11] and BERT [12] have propelled NLP technology forward. Source code is essentially text in a specific format, making it logically feasible to apply NLP techniques to code analysis. In fact, models like CodeBERT [13] have been proposed, and some code-level tasks have been addressed with promising results. These findings demonstrate the potential of using NLP technology for automated vulnerability detection research.

However, research employing DL for security vulnerability detection follows various directions. According to the survey by Zeng et al. [14], there are four main ones. The first uses DL models to learn semantic representations of programs, as proposed by Wang et al. [15]. The second focuses on end-to-end solutions for detecting buffer overflow vulnerabilities, as explored by Choi et al. [16]. The third extracts vulnerability-containing code patterns to train models, as demonstrated by Li et al. [17]. Finally, the fourth addresses vulnerability detection for binary code, as studied by Liu et al. [18].

Each of these directions has its own advantages and limitations. Based on previous research outcomes, extracting vulnerability-containing patterns to create a training dataset has achieved promising results, displaying relatively good effectiveness and potential for further development [19]. Notable examples of this approach include VulDeePecker by Li et al. [17] and SySeVR by Li et al. [20], as well as VulDeBERT by Kim et al. [21]. However, both SySeVR and VulDeBERT still exhibit a shortcoming: the processed data consists solely of isolated code statements extracted from the source code, lacking contextual linkage, which inherently diminishes the precision of the models.

Meanwhile, to retain the non-sequential semantic attributes inherent in program source code, certain graph-based methodologies have been introduced [22], [23], [24], [25], [26], [27], [28], [29]. These studies advocate transforming source code into a graph representation, followed by the use of graph neural networks for vulnerability detection, indicating the potential of graph representations for inspecting the relationships of source code components to reveal software vulnerabilities. Specifically, the slices extracted from the source code are transformed into graph representations and subsequently incorporated into the deep learning model for training. The conversion of slices into graphs aims to capture relationships between words, as well as between words and slices, enhancing the model's capacity to understand code patterns and the relationships among various components.

Additionally, in the evolving landscape of software security, NLP has emerged as a potent tool with the potential to bridge the substantial semantic gap between natural language (NL) and programming languages (PL). To mitigate this divergence and gain a deeper comprehension of the semantics of NL, scholars introduced CodeBERT [13], a language model that employs masked language modeling and token replacement detection to pre-train on both NL and PL. CodeBERT has demonstrated remarkable generalization ability, exhibiting a strong competitive edge in various downstream tasks across multiple programming languages. The embedding vectors derived from CodeBERT encapsulate a wealth of information, enhancing the DL model's ability to yield optimal results after training.

Therefore, to cope with the above challenges, we propose the XGV-BERT model to better obtain the contextual representation of programs. Specifically, we leverage the pre-trained CodeBERT model for code embedding because, having been pre-trained on multiple programming languages, it understands source code better. Subsequently, integrating a GNN model with graphs constructed from the extracted data helps strengthen the connections between words and slices in source code to disclose vulnerabilities in the software program. In summary, the contributions of our work are as follows:

• We leverage a language model to construct a method of representing vulnerable source code for security defect detection, using the CodeBERT embedding model to replace the Word2vec embedding method used in previous studies [17], [20].
• We propose a vulnerability detection system called XGV-BERT that utilizes CodeBERT with a Graph Convolutional Network (GCN) model for analyzing C and C++ source code.
• Our experimental results indicate that XGV-BERT outperforms the state-of-the-art methods [17], [20] on the SARD [30] and NVD [31] datasets.

The remaining sections of this article are organized as follows. Section 2 gives an overview of the background knowledge used in our work. Section 3 introduces related work on detecting vulnerabilities in software. The proposed framework and methodology are discussed in Section 4. Section 5 describes the experimental settings and the result analysis of vulnerability detection on various datasets. Finally, we conclude the paper in Section 6.

2. Background

2.1. Abstract Syntax Tree (AST)

2.1.1. Definition

In software, an Abstract Syntax Tree (AST) [32] is a tree-like depiction of the abstract syntactic structure of a fragment of text, commonly source code, authored in a formal language. Each node within the tree represents a discernible structure found in the text. Abstraction in the AST means that not every detail of the concrete syntax is represented; the tree focuses on structure and content-related aspects. For instance, unnecessary quotes in the syntactic structure are not represented as separate nodes, and a syntax structure like the "if" conditional statement can be represented by a single node with three branches. The AST is a vital tool in parsing programming languages: it provides an abstract structural representation of the source code, enabling programs to understand and process code more easily. This abstraction lets analyses concentrate on essential syntax components and overlook irrelevant details, simplifying language analysis and processing while providing a convenient structure for working with and interacting with the source code.

2.1.2. Design

The design of the AST is often closely intertwined with the design of the compiler. The core requirements of the design include the following:

• Preservation of variables: variables must be retained, along with their declaration positions in the source code.
• Representation of execution order: the order of execution statements must be represented and explicitly determined.
• Proper handling of binary operators: the left and right components of binary operators must be stored and accurately determined.
• Storage of identifiers and assigned values: identifiers and their assigned values must be stored within the assignment statements.

2.2. CodeBERT

CodeBERT [13] is a pre-trained BERT model that combines natural language (NL) and programming language (PL) encodings to create a comprehensive model suitable for fine-tuning on source code tasks. The model is trained on a large dataset sourced from code repositories and programming documents, leading to improved effectiveness in software program training and source code analysis.

During the pre-training stage, the input is formed by combining two segments with special tokens: [CLS], w_1, w_2, ..., w_n, [SEP], c_1, c_2, ..., c_m, [EOS], where [CLS] is the classification token, [SEP] the separator token, and [EOS] the end-of-sequence token. One segment represents natural language text, while the other represents code from a specific programming language. Following standard Transformer text processing, the natural language text is treated as a sequence of words and divided into WordPieces [33], while a code snippet is regarded as a sequence of tokens.

2.3. Graph Neural Network (GNN)

2.3.1. Overview

A graph is a data structure in computer science comprising two components, nodes and edges: G = (V, E). Each node has edges (E) connecting it to other nodes (V). A directed graph has arrows on its edges, indicating directional dependencies, while undirected graphs lack such arrows. Graphs have attracted considerable attention in machine learning due to their powerful representational capabilities: each node is embedded into a vector, establishing its position in the data space. Graph Neural Networks (GNNs) are specialized neural network architectures that operate on graphs. The primary goal of a GNN architecture is to learn an embedding vector containing information about a node's local neighborhood; this embedding can be used to address various tasks, such as node labeling, node and edge prediction, and more.

Graph convolutions come in two broad families. Spectral-based networks follow an idea similar to CNNs, where convolution aggregates values around a central data point using learnable filters and weights; they operate on a similar principle, aggregating the attributes of neighboring nodes for a central node, but often have higher computational complexity and have gradually been replaced by spatial-based methods. Spatial convolutional networks provide a simpler and more efficient way to handle data, embedding nodes based on their neighboring nodes; this spatial approach has become popular due to its simplicity and effectiveness.

2.4. Graph Convolutional Network (GCN)

GCN (Graph Convolutional Network) [34] is a powerful neural network architecture designed for machine learning on graphs. In fact, it is so powerful that even a randomly initialized two-layer GCN can produce meaningful feature representations for the nodes in a graph.
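To make the two-layer idea concrete, here is a minimal pure-Python sketch of the layer rule H^i = f(H^{i-1}, A) with a common choice of propagation rule, f(H, A) = ReLU(D^-1 (A + I) H W). The toy graph, features, and weights below are invented for illustration; the paper does not specify this exact normalization, so treat it as one plausible variant rather than the XGV-BERT implementation.

```python
# Minimal sketch of a two-layer GCN forward pass (toy values, no training).

def matmul(a, b):
    # Plain list-of-lists matrix multiplication.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def normalized_adjacency(a):
    # Add self-loops (A + I), then row-normalize by degree (D^-1).
    n = len(a)
    a_hat = [[a[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    return [[v / sum(row) for v in row] for row in a_hat]

def gcn_layer(h, a_hat, w, relu=True):
    # One propagation step: aggregate neighbor features, then transform.
    z = matmul(a_hat, matmul(h, w))
    return [[max(0.0, v) if relu else v for v in row] for row in z]

# Toy graph: 3 nodes in a path, 2 input features, 2 hidden units, 1 output.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # H^0 = X
W0 = [[0.5, -0.2], [0.1, 0.3]]
W1 = [[1.0], [-1.0]]

A_hat = normalized_adjacency(A)
H1 = gcn_layer(X, A_hat, W0)                # first hidden layer
H2 = gcn_layer(H1, A_hat, W1, relu=False)   # output layer: one score per node
print(len(H2), len(H2[0]))
```

Even with the fixed (untrained) weights above, each node's output already mixes information from its neighborhood, which is the intuition behind the "randomly initialized two-layer GCN" remark.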
The output of CodeBERT includes (1) contextualized vector representations for each token, encompassing both natural language and code, and (2) the representation of [CLS], serving as a summarized representation of the whole input.

In essence, GNNs are a subclass of DL techniques specifically designed for performing inference on graph-structured data. They are applied to graphs and can perform prediction tasks at the node, edge, and graph levels.

2.3.2. Classification

GNNs are divided into three types:

• Recurrent Graph Neural Network: in this network, the graph is bidirectional, and data flows in both directions. It applies graph message passing over edges to propagate the output from the initial direction back to the graph nodes, but adjusts edge weights based on the gradients previously applied for each node.
• Spectral Convolutional Network: this type shares a similar idea with CNNs, in which convolution is performed by summing the values of neighboring data points, as described in Section 2.3.1.
• Spatial Convolutional Network: this type embeds nodes based on their neighboring nodes, as described in Section 2.3.1.

For the GCN introduced in Section 2.4, the model takes the graph data G = (V, E) as input, where:

• X is the N x F^0 input feature matrix, with N the number of nodes and F^0 the number of input features per node;
• A is the N x N adjacency matrix, representing the structural information of the graph.

A hidden layer of the GCN can then be written as H^i = f(H^{i-1}, A), where H^0 = X and f is the propagation rule. Each layer H^i corresponds to an N x F^i feature matrix in which each row is the feature representation of a node. At each layer, these features are aggregated by the propagation rule f to create the features of the next layer, so the features become increasingly abstract at each consecutive layer. Variants of GCN differ only in the choice of the propagation rule f.

3. Related Work

3.1. Software Vulnerability

3.1.1. Concept

Software vulnerabilities represent errors, weaknesses, or imperfections within software or operating systems that are susceptible to attacks or malevolent actions that may inflict harm upon the system or the information it processes. Malicious actors can exploit software vulnerabilities to carry out actions such as unauthorized system access, pilfering sensitive information, impeding the normal functioning of the system, or facilitating other forms of attack. With the swift development of novel attack techniques, the severity of software vulnerabilities is continuously escalating. All systems inherently harbor latent vulnerabilities; the pertinent question is whether these vulnerabilities are exploited and result in deleterious consequences.

3.1.2. The current state

An increasing number of cyberattacks originate from software vulnerabilities, resulting in user data breaches and tarnishing the reputation of companies [35]. Despite numerous research efforts proposed to aid vulnerability detection, vulnerabilities continue to pose a threat to the secure operation of IT infrastructure [36]. The number of disclosed vulnerabilities in the Common Vulnerabilities and Exposures (CVE) and National Vulnerability Database (NVD) repositories surged from approximately 4,600 in 2010 to 8,000 in 2014, before skyrocketing to over 17,000 in 2017 [17], [37]. These vulnerabilities pose potential threats to the secure use of digital products and devices worldwide [38].

3.1.3. Exploitation Mechanisms of Vulnerabilities

Upon discovery of a security vulnerability, an attacker can capitalize on it by crafting programs to infiltrate and take control of the targeted device. Once access to the target is gained, attackers may conduct system reconnaissance to familiarize themselves with its workings; they can then execute diverse actions such as accessing critical files or deploying malicious code, hijacking the computer, and pilfering data from the victim's device. Vulnerabilities are sometimes identified by the software developers themselves, or through user and researcher alerts.

A fourth research avenue, followed by Li's methodology [17], revolves around extracting vulnerability-containing code patterns from source code to train models; we return to it as Direction 4 below.

Direction 1 - Automating Semantic Representation Learning for Vulnerability Prediction

Wang's research [15] is a pioneering study that employs Deep Belief Networks (DBNs) to delve into the semantic representations of programs. The study aims to harness high-level semantic representations learned by neural networks as vulnerability-indicative features; specifically, it enables the automatic acquisition of features denoting source code vulnerabilities without relying on manual techniques. This approach suits not only vulnerability prediction within a single project but also cross-project vulnerability prediction. Abstract Syntax Trees (ASTs) are employed to represent programs as input for the DBNs in the training data. The authors proposed a data preprocessing approach comprising four steps:

• Tokenization: the first step parses the source code into tokens.
• Token Mapping: the second step maps tokens to integer identifiers.
• DBN-based Semantic Feature Generation: the third step employs DBNs to autonomously generate semantic features.
• Vulnerability Prediction Model Establishment: the final step utilizes the DBNs to establish a vulnerability prediction model.

Returning to how vulnerabilities come to light: in certain cases, hackers or espionage organizations may uncover intrusion techniques but refrain from notifying the developers, leading to so-called "zero-day" vulnerabilities, as developers have not had an opportunity to patch them. As a result, software or hardware remains exposed to threats until patches or fixes are distributed to users. Software vulnerabilities can lead to grave consequences, granting attackers unauthorized access to and control over devices. To avoid such calamities, the detection and remediation of vulnerabilities is of utmost significance; nevertheless, on certain occasions, vulnerabilities remain latent until they are maliciously leveraged, wreaking considerable havoc upon users. To mitigate risks, regular software and hardware updates that apply patches and fixes are essential.

Direction 2 - End-to-End Buffer Overflow Vulnerability Prediction from Raw Source Code using Neural Networks

Choi's research [16] stands as the inaugural work providing an end-to-end solution for detecting buffer overflow vulnerabilities. Experimental studies substantiate that neural networks can directly learn vulnerability-relevant characteristics from raw source code, obviating the need for separate code analysis. The proposed neural network is equipped with integrated memory blocks to retain long-range code dependencies, which is pivotal for identifying buffer overflow vulnerabilities. Test outcomes demonstrate the method's precision in accurately detecting distinct types of buffer overflow.

3.2. Related research works

There are myriad research directions employing DL for security vulnerability detection. According to the survey conducted by Peng Zeng and colleagues [14], four primary research avenues are observed:
• Utilizing DL Models for Semantic Program Representations: the automatic acquisition of semantic program representations using DL models, as proposed by Wang [15].
• Buffer Overflow Vulnerability Prediction from Raw Source Code: Choi's approach [16] entails predicting buffer overflow vulnerabilities directly from raw source code.
• Vulnerability Detection for Binary Code: Liu's approach [18] targets vulnerability detection within binary code.

However, Choi's approach [16] still harbors limitations that necessitate further enhancement. A primary constraint lies in its inability to identify buffer overflow incidents occurring within external functions, as the input data excludes code from external files. Another limitation is the requirement that each line encompass data assignments for the model to function; applying the method directly to source code containing conditional statements proves intricate, as attention scores are computed to locate the most relevant code positions.

Direction 3 - Vulnerability Detection Solution for Binary Code

Liu's research [18] introduces a DL-based vulnerability detection tool for binary code, developed with the intent of expanding the vulnerability detection domain by mitigating the scarcity of source code. To train on the data, binary segments are fed into a Bidirectional Long Short-Term Memory network with an Attention mechanism (Att-BiLSTM). The data processing involves three steps:

• Initially, binary segments are collected by applying the IDA Pro tool to the original binary code.
• In the second step, functions are extracted from the binary segments and labeled as "vulnerable" or "non-vulnerable".
• In the third step, the binary segments are used as binary features before being fed into the embedding layer of the Att-BiLSTM.

The granularity of detection lies at the function level. Multiple experiments were conducted on open-source project datasets to evaluate the proposed method, and the results indicate that it outperforms other binary code-based vulnerability detection methods. However, this method still has limitations: notably, the detection accuracy is relatively low, falling below 80% on each dataset.

Direction 4 - Extracting Vulnerable Code Patterns for Model Training

Li's VulDeePecker [17] is the pioneering study that employs the BiLSTM model for vulnerability detection; this research direction aligns with our team's pursuit as well. The study employs a BiLSTM to extract and learn long-range dependencies from code sequences. The training data for the tool are code gadgets representing programs, which serve as input for the BiLSTM. The processing of code gadgets involves three stages:

• The initial stage entails extracting the program slices corresponding to library/API function calls.
• The second stage revolves around creating and labeling code gadgets.
• The final stage focuses on transforming code gadgets into vectors.

Experimental outcomes demonstrate VulDeePecker's capacity to address numerous vulnerabilities, and the integration of human expertise can enhance its effectiveness. However, this method exhibits certain limitations that require further improvement. Firstly, VulDeePecker is restricted to processing programs written in C/C++. Secondly, it can solely address vulnerabilities linked to library/API function calls. Lastly, the evaluation dataset is relatively small-scale, as it only encompasses two types of vulnerabilities.

We opted for Direction 4 because we assessed that it offers certain advantages over the other three. Direction 1 is limited to extracting semantic features from source code alone; conversely, Direction 4 has seen consecutive studies that extract both semantic and syntactic features, as exemplified by SySeVR [20], enhancing the reliability of model training data. Direction 2 has the drawback of exclusively detecting buffer overflow vulnerabilities; in contrast, Direction 4 employs data extraction methods that can cover as many as 126 CWEs, divided into four categories, as we will discuss in Section 5. Direction 3 holds significant potential, since operating on binary code lets it detect vulnerabilities across programming languages, but its accuracy remains lower than that of the other research directions. For these reasons, our team selected Direction 4 as a reference and proposes XGV-BERT for further improvement. In our proposed XGV-BERT, the fusion of the CodeBERT model and Graph Neural Networks (GNNs) represents a compelling strategy for software vulnerability detection: CodeBERT, an advanced transformer-based model, excels at acquiring meaningful code representations, while GNNs exhibit exceptional prowess in encoding the semantic connections present in code graphs. This combination elevates the precision and efficiency of software vulnerability detection, facilitating the discovery of intricate vulnerabilities that conventional approaches might struggle to uncover.

4. Methodology

4.1. The overview architecture

To detect vulnerabilities using DL, we need to represent programs in a way that captures both the syntactic and the semantic information relevant to the vulnerabilities. Two main contributions are presented in Li's research [20]. The first treats each function in the program as a region proposal [39], similar to image processing; this approach is overly simplistic, because vulnerability detection tools not only need to determine the existence of vulnerabilities within functions but also need to pinpoint their locations, which requires more detailed representations of the programs. The second treats each code line or statement (used interchangeably) as the unit for vulnerability detection; this has two drawbacks: most statements in a program do not contain any vulnerability, resulting in a scarcity of vulnerable samples, and many statements have semantic relationships with each other but are not considered as a whole.

To synthesize the advantages of the two proposals above, we divide the program into smaller code segments (i.e., a few lines of code), corresponding to region proposals, and represent the syntactic and semantic characteristics of the vulnerabilities. After studying Li's method [20], our research group proposes the XGV-BERT method for efficiently detecting software vulnerabilities. We extract feature patterns from the source code, embed them as vectors using the CodeBERT model, and then feed them into various DL models, including RNN, LSTM, Bi-LSTM, GRU, Bi-GRU, and the proposed GCN model, for comparison and evaluation. Figure 1 illustrates the specific architecture of the steps involved in our proposed software vulnerability detection method.

(Figure 1: The workflow of the XGV-BERT framework for vulnerability detection.)

The architecture we propose uses the original source code as input, followed by the extraction of program slices based on syntax features. From the program slices, we further extract lines of code that have semantic relationships with each slice, creating Semantics-based Vulnerability Candidates (SeVCs) [20] or code gadgets [17], depending on the dataset, and label them accordingly. We then tokenize the SeVCs or code gadgets and feed them into the CodeBERT model for vector embedding. Finally, we construct a DL model that trains on the embedded vectors and predicts vulnerable source code.

4.2. Embedding Vectors

In this section, we delve into the process of tokenizing and embedding the SeVCs extracted from the source code for training in the DL model. The following steps outline our approach:

• Symbolic representation of SeVCs: to ensure the independence of SeVCs from user-defined variable and function names while capturing the program's semantic information, each SeVC undergoes transformation into a symbolic representation:
  – removal of non-ASCII characters and comments from the code;
  – mapping of user-defined variable names to symbolic names (e.g., "V1", "V2") in a one-to-one correspondence;
  – mapping of user-defined function names to symbolic names (e.g., "F1", "F2") in a one-to-one correspondence.
  It is important to note that different SeVCs may share the same symbolic representation, enhancing the generalizability of the approach.

• Tokenize the symbolic representations: following Li's team [20], the symbolic representation of a SeVC (e.g., "V1=V2-8;") is split into a sequence of symbols through lexical analysis (e.g., "V1", "=", "V2", "-", "8", and ";"). Each symbol in the resulting sequence is considered a token. This process is performed for each code line in the SeVC, resulting in a list of tokens for each SeVC.

• After obtaining the list of tokens, we use the CodeBERT model to embed the data.
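As a toy illustration of the symbolization and tokenization steps just described, the sketch below renames user-defined identifiers to V1, V2, ... and then lexes the result into tokens. The keyword list and regular expressions here are simplified assumptions for demonstration; the actual SySeVR-style pipeline parses real C/C++ declarations, strips comments and non-ASCII characters, and handles function names (F1, F2, ...) analogously.

```python
import re

KEYWORDS = {"int", "char", "if", "return", "for", "while"}   # simplified
IDENT_RE = re.compile(r"[A-Za-z_]\w*")
TOKEN_RE = re.compile(r"[A-Za-z_]\w*|\d+|[^\s\w]")

def symbolize(stmt, mapping):
    # Replace each user-defined identifier with a symbolic name V1, V2, ...
    # (function names would be mapped to F1, F2, ... in the same way).
    def rename(m):
        name = m.group(0)
        if name in KEYWORDS:
            return name
        mapping.setdefault(name, "V%d" % (len(mapping) + 1))
        return mapping[name]
    return IDENT_RE.sub(rename, stmt)

mapping = {}
sym = symbolize("alpha = beta - 8;", mapping)
tokens = TOKEN_RE.findall(sym)   # lexical analysis into one token per symbol
print(sym)      # V1 = V2 - 8;
print(tokens)   # ['V1', '=', 'V2', '-', '8', ';']
```

Note how two different statements such as "alpha = beta - 8;" and "x = y - 8;" collapse to the same symbolic form, which is exactly the generalizability property mentioned above.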
The CodeBERT model used in this study has been pre-trained on source code data and is retrained with the tokenized vectors as input. The model architecture consists of multiple Transformer layers, which extract features from the dataset and enrich the contextual information of the vectors. The output of the model is the embedded vectors, which we use as inputs for the DL models that classify the dataset.

4.3. Training DL Models

In the final part, we utilize DL models for training on and classifying the dataset. Specifically, we employ a total of six models: RNN, LSTM, Bi-LSTM, GRU, Bi-GRU, and GNN, all taking the embedded vectors as input. Among these models, the most significant one we propose is the GNN, named XGV-BERT. The RNN and its variants (LSTM, GRU, Bi-LSTM, and Bi-GRU) are implemented with two hidden layers, accompanied by fully connected layers that converge the output for text classification purposes. For the GNN model, we employ the GCN architecture for training; our GCN model is designed to take input data in the form of graphs.

The edges of this graph are weighted with TF-IDF and PPMI statistics. The Term Frequency-Inverse Document Frequency (TF-IDF) weight is computed by multiplying the term frequency (TF) by the inverse document frequency (IDF), where N is the total number of slices in the dataset and n_t is the number of slices that contain the word. IDF helps evaluate the importance of a word for a slice, assigning lower IDF scores to frequently occurring words. The Positive Pointwise Mutual Information (PPMI) is used to determine the weight of word pairs, where a higher PPMI value indicates a stronger relationship and co-occurrence frequency between two words. The PPMI value of words i and j is calculated using the formula:

PPMI(i, j) = max( log( p(i, j) / (p(i) p(j)) ), 0 )    (2)

where:

• p(i) is the probability of word i appearing in a slice;
• p(j) is the probability of word j appearing in a slice;
• p(i, j) is the joint probability of both i and j appearing in a slice.
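The TF-IDF and PPMI quantities defined above can be computed directly from slice-level counts. The sketch below does so over a made-up three-slice "corpus"; counting co-occurrence at the whole-slice level is one simple choice, since the paper does not pin down the co-occurrence window.

```python
import math

# Toy corpus: each "slice" is a list of tokens (invented for illustration).
slices = [["V1", "=", "V2"], ["V1", "=", "8"], ["F1", "(", "V2", ")"]]
N = len(slices)

def tf(word, sl):
    # Term frequency: number of times the word appears in one slice.
    return sl.count(word)

def idf(word):
    # IDF = log(N / n_t), eq. (1): lower for words that occur in many slices.
    n_t = sum(1 for sl in slices if word in sl)
    return math.log(N / n_t)

def tf_idf(word, sl):
    return tf(word, sl) * idf(word)

def ppmi(i, j):
    # PPMI(i, j) = max(log p(i,j) / (p(i) p(j)), 0), eq. (2),
    # with probabilities estimated from slice-level occurrence counts.
    p_i = sum(1 for sl in slices if i in sl) / N
    p_j = sum(1 for sl in slices if j in sl) / N
    p_ij = sum(1 for sl in slices if i in sl and j in sl) / N
    if p_ij == 0:
        return 0.0
    return max(math.log(p_ij / (p_i * p_j)), 0.0)

print(round(tf_idf("V1", slices[0]), 3))
print(round(ppmi("V1", "="), 3))
```

Here "V1" occurs in two of the three slices, so its IDF is log(3/2); "V1" and "=" always co-occur, so their PPMI is positive, while pairs that never share a slice get weight 0.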
The embedded vectors obtained in Section 4.2 need to be processed into graph-structured data before being fed into our training model. To accomplish this, we create adjacency matrices for the embedded vectors, and these vectors are transformed into graph nodes. Figure 2 illustrates the architecture using the proposed GCN model for training and predicting results on the dataset.

Specifically, we construct a non-homogeneous graph consisting of both word nodes and slice nodes based on the idea proposed by Yao [40]. In this adjacency matrix, each word or slice is represented as a one-hot vector and used as input for the GCN model. We create edges between nodes based on the occurrence of a word in a slice (slice-word edge) and the co-occurrence of words across the entire dataset (word-word edge). The weight of an edge between two nodes i and j is defined as follows:

    A_ij = PPMI(i, j)     if i, j are words and i ≠ j
           TF-IDF(i, j)   if i is a slice and j is a word
           1              if i = j
           0              otherwise

The Term Frequency-Inverse Document Frequency (TF-IDF) value of a word in a slice determines the weight of the edge between a slice node and a word node. This value is used to assess the importance of a word in a slice, where a higher value indicates a higher importance of the word for the slice. Specifically:
• TF (Term Frequency) is the number of times the word appears in the slice.
• IDF (Inverse Document Frequency) is calculated as follows:

    IDF = log(N / n_t)        (1)

For the nodes in the adjacency matrix, we utilize the output embeddings of the CodeBERT model and use them as input representations for the slice nodes. Once the data, including the adjacency matrix and the nodes, is constructed, we utilize both of them as inputs for the GCN model. Our GCN model consists of two hidden layers, along with fully connected layers, to converge the output for text classification purposes.

Figure 2: The proposed GCN model in XGV-BERT framework.

5. Experiments and Analysis

In this section, we conduct experiments to compare XGV-BERT's detection accuracy to the state-of-the-art solutions, VulDeePecker [17] and SySeVR [20]. Before discussing the effectiveness of XGV-BERT, we first go over the implementation specifics and datasets used in the trials.

5.1. Dataset and Preprocessing

5.1.1. Benchmark Datasets. For the evaluation of our approach, we use two datasets from two research papers: SySeVR [20] and VulDeePecker [17]. Both datasets were collected from two sources: the National Vulnerability Database (NVD) [31] and the Software Assurance Reference Dataset (SARD) [30].

The NVD dataset provides vulnerable code snippets from various software products, including both vulnerable code and their corresponding patches. On the other hand, the SARD dataset offers a collection of test, synthetic, and academic programs, labeled as "good" (having no vulnerabilities), "bad" (containing vulnerabilities), and "mixed" (having vulnerabilities whose patched versions are also available).

Regarding the VulDeePecker dataset, it offers program pieces that concentrate on two categories of CWE related to Library/API Function Call vulnerabilities: resource management error vulnerabilities (CWE-399) and buffer error vulnerabilities (CWE-119). We produced 29,313 non-vulnerable code gadgets and 10,440 vulnerable code gadgets for CWE-119, and 7,285 vulnerable and 14,600 non-vulnerable code gadgets for CWE-399. The number of code gadgets that we extracted from the VulDeePecker dataset is shown in Table 1.

The specific steps for code gadget extraction are as follows:
• Extracting library/API function calls and their corresponding program slices:
  – Categorize library/API function calls into two types: forward library/API function calls and backward library/API function calls. The forward type receives inputs directly from external sources, like a command line, program, or file. The backward type does not receive direct inputs from external sources or the program environment.
  – Generate program slices corresponding to the arguments of the library/API function calls extracted in the previous step. Program slices are further classified into forward slices and backward slices. Forward slices contain statements affected by specific arguments, while backward slices comprise statements influencing specific arguments.
• Extracting code gadgets and assigning labels:
  – Extract code gadgets:
    * Construct a part of the code gadget by combining statements containing arguments from the library/API function calls. These statements belong to the same user-defined function and are ordered based on the corresponding program slice. Any duplicate statements are eliminated.
    * Complete the code gadget by incorporating statements from other functions containing the arguments from the library/API function calls, following the order in the corresponding program slice.
  – Label the code gadgets: Those without vulnerabilities receive the label "0," while those containing vulnerabilities are labeled "1."

Table 1: Number of code gadgets extracted from the VulDeePecker dataset

Dataset       Total    Vulnerable code gadgets   Non-vulnerable code gadgets
CWE-119       39,753   10,440                    29,313
CWE-399       21,885   7,285                     14,600
All Dataset   61,637   17,725                    43,913

For the SySeVR dataset, it provides C/C++ programs containing 126 CWE related to four types of vulnerabilities: Library/API Function Call (FC-kind), Array Usage (AU-kind), Pointer Usage (PU-kind), and Arithmetic Expression (AE-kind). In total, we have extracted 547,347 SeVCs from the dataset, comprising 52,227 SeVCs containing vulnerabilities and 495,120 SeVCs without vulnerabilities. The distribution of SeVCs based on vulnerability types and their corresponding CWE identifiers is presented in Table 2.

The extraction proceeds as follows:
• Step 1. Extract Syntax-based Vulnerability Candidates (SyVCs).
  – Represent each function as an Abstract Syntax Tree (AST). The root of the AST corresponds to the function, the leaves represent the tokens of the function, and the intermediate nodes correspond to the statements of the function.
  – Compare the code elements (comprising one or more consecutive tokens, including identifiers, operators, constants, and keywords) in the AST with a set of syntactic vulnerability patterns. If a code element matches any element in this set, it becomes a SyVC.
• Step 2. Transform SyVCs into Semantic-based Vulnerability Candidates (SeVCs).
  – Create CFGs for each function in the program. From the CFGs, generate PDGs for each function.
  – Based on the PDG, create program slices ps_i for each SyVC.
    * The interprocedural forward slice fs'_i is formed by merging the forward slice fs_i of function f_i with the forward slices of functions called by f_i.
    * The interprocedural backward slice bs'_i is formed by merging the backward slice bs_i of function f_i with the backward slices of functions called by f_i and the functions that call f_i.
    * Finally, the program slice ps_i is created by merging fs'_i and bs'_i.
  – Transform the program slice into SeVCs with the following steps:
    * Convert the statements belonging to function f_i and appearing in program slice ps_i as a node into SeVCs while preserving the original order of statements in function f_i.
    * Convert the statements belonging to other functions, which are related to function f_i through function calls, into SeVCs.
• Step 3. Label the SeVCs: To differentiate between vulnerable and safe code patterns, we label the SeVCs, and their corresponding vectors, accordingly. A SeVC containing a known vulnerability is labeled as "1," while it is labeled as "0" if the SeVC is safe and does not contain any vulnerabilities.

Table 2: Number of SeVCs extracted from the SySeVR dataset

Dataset       Total     Vulnerable SeVCs   Non-vulnerable SeVCs
FC-kind       141,023   17,005             124,018
AU-kind       55,772    7,996              47,776
PU-kind       340,324   25,377             314,946
AE-kind       10,234    1,848              8,386
All Dataset   547,347   52,227             495,120

By leveraging these datasets, we were able to comprehensively evaluate the effectiveness and performance of our proposed method in detecting software vulnerabilities.

5.2. Performance Metrics

5.2.1. Detection metrics. To evaluate the model prediction accordingly, we defined ground truth values as follows: true positive (TP) represents the number of vulnerable samples that are detected as vulnerable; true negative (TN) represents the number of samples that are not vulnerable and are detected as not vulnerable; false positive (FP) represents the number of samples that are not vulnerable but are detected as vulnerable; false negative (FN) represents the number of vulnerable samples that are detected as not vulnerable.

Therefore, we use four metrics as follows for our experiments:
• Accuracy is the ratio of correct predictions to total predictions.

    Accuracy = (TP + TN) / (TP + TN + FP + FN)        (3)

• Precision is the ratio of truly vulnerable samples among the detected vulnerable samples.

    Precision = TP / (TP + FP)        (4)

• Recall is the proportion of truly vulnerable samples that are correctly detected among all samples that actually contain vulnerabilities.

    Recall = TP / (TP + FN)        (5)

• F1-score measures the overall effectiveness by calculating the harmonic mean of Precision and Recall.

    F1-score = 2 · (Recall · Precision) / (Recall + Precision)        (6)

5.3. Experimental Settings

We conducted our experiments on a virtual machine environment running Ubuntu 20.04 with an 8-core CPU, 81.5 GB of RAM, 40 GB of GPU RAM, and a storage capacity of 100 GB. Table 3 and Table 4 show the architectures of the LSTM model and XGV-BERT, respectively. To perform our experiments, we trained on the datasets using the following configuration on both models: Adam optimizer with learning rate = 0.001, epoch = 50 and batch size = 32 for the RNN, LSTM, BiLSTM, GRU and BiGRU models, and the same configuration for the XGV-BERT model, but with epoch = 4. In the settings for both the VulDeePecker and SySeVR datasets, we choose 80% of the samples for the training set and the remaining 20% for the test set.

Table 3: The architecture of the LSTM model

Layer (ID)    Activation   Output shape   Connected to
Input (1)     -            (768, 1)       []
LSTM (2)      ReLU         (768, 300)     [(1)]
Dropout (3)   -            (768, 300)     [(2)]
LSTM (4)      ReLU         (300)          [(3)]
Dropout (5)   -            (300)          [(4)]
Dense (6)     Sigmoid      (1)            [(5)]

Table 4: The architecture of the XGV-BERT model

Layer (ID)      Activation   Output shape                       Connected to
Input (1)       -            [Node: (1, 768), Edge: (1, 768)]   []
GraphConv (2)   ReLU         Node: (1, 200)                     [(1)]
GraphConv (3)   ReLU         Node: (1, 200)                     [(2)]
Dropout (4)     -            Node: (1, 200)                     [(3)]
Dropout (5)     -            Node: (1, 200)                     [(4)]
Dense (6)       -            Edge: (2)                          [(5)]

5.4. Experimental Results

5.4.1. Effectiveness of CodeBERT embedding method. To demonstrate the effectiveness of our embedding method using the CodeBERT model (see Section 4.2), we conducted experiments and evaluated the performance of the Word2vec and CodeBERT embedding models. We compared these results to the experimental findings of two related research studies, SySeVR [20] and VulDeePecker [17]. Tables 5 and 6 summarize the evaluation's findings.

Table 5: Evaluation results of vector embedding methods on the VulDeePecker dataset

Training method          Acc    Precision   Recall   F1
VulDeePecker (BiLSTM)    90.8   79.1        77.5     78.3
Our Word2Vec + LSTM      82.6   85.4        82.3     83.8
Our Word2Vec + BiLSTM    83.4   86.9        81.9     84.3
CodeBERT + LSTM          83.5   80.9        87.7     84.2
CodeBERT + BiLSTM        86.0   89.5        85.3     87.3

Table 5 demonstrates that our embedding method using the CodeBERT model has achieved significantly better evaluation results overall compared to VulDeePecker [17]. The F1-score for our approach is 87.3%, representing a substantial improvement over VulDeePecker's F1-score of 78.3%. Similarly, the Precision and Recall metrics have also shown improvements compared to VulDeePecker. Although the detection accuracy when we used CodeBERT (86.0%) was not superior to VulDeePecker (90.8%), it showed improvement when compared with our experimental evaluation using Word2vec (83.4%). In these cases, the F1 metric is more suitable for evaluating the performance of vulnerability detection because of the minority of vulnerable code compared to clean code.

Table 6: Evaluation results of vector embedding methods on the SySeVR dataset

Training method          Acc    Precision   Recall   F1
SySeVR (LSTM)            95.2   85.2        78.3     81.6
SySeVR (BiLSTM)          96.0   86.2        82.5     84.3
Our Word2Vec + LSTM      91.8   89.8        81.1     85.2
Our Word2Vec + BiLSTM    92.6   92.1        81.5     86.4
CodeBERT + LSTM          93.3   88.3        99.8     93.7
CodeBERT + BiLSTM        93.8   89.1        99.7     94.1

Meanwhile, Table 6 shows the experimental results on the SySeVR dataset [20]. Again, our embedding method using CodeBERT outperforms the embedding method using SySeVR's Word2vec. By combining BiLSTM with CodeBERT embeddings, we achieved a significant improvement in the F1-score, increasing it from 84.3% to 94.1% when compared to SySeVR.

5.4.2. Effectiveness of DL models. Tables 7 and 8 present the detection performance metrics of six models on the VulDeePecker [17] and SySeVR [20] datasets, where all models use CodeBERT's embedding vectors as input. Overall, all models performed exceptionally well, exhibiting high scores across all evaluation metrics.

Table 7: Test performance of various models using the CodeBERT embedding method on the VulDeePecker dataset

Dataset    Metrics     RNN    LSTM   BiLSTM   GRU    BGRU   XGV-BERT
CWE-119    Accuracy    69.3   77.4   79.4     78.5   79.1   98.4
           Precision   63.6   70.0   73.3     71.5   75.0   97.9
           Recall      90.3   96.0   92.6     94.6   87.5   98.1
           F1-score    74.6   81.0   81.8     81.5   80.8   97.7
CWE-399    Accuracy    80.6   89.0   91.0     89.3   91.0   98.3
           Precision   72.3   84.5   92.4     86.5   91.7   97.9
           Recall      72.3   84.5   92.4     86.5   92.4   98.1
           F1-score    82.4   88.5   91.1     89.0   91.2   98.0
All        Accuracy    76.8   83.5   86.0     82.4   86.0   97.8
Dataset    Precision   74.1   80.9   89.5     79.6   81.6   97.3
           Recall      82.4   87.7   85.3     87.2   94.4   97.7
           F1-score    78.1   84.2   87.3     83.2   87.6   97.5

Table 8: Test performance of various models using the CodeBERT embedding method on the SySeVR dataset

Dataset         Metrics     RNN    LSTM   BiLSTM   GRU    BGRU   XGV-BERT
Library/API     Accuracy    88.4   91.5   93.0     92.1   92.8   97.2
Function Call   Precision   81.8   85.6   87.9     86.5   87.5   94.4
                Recall      99.8   99.7   99.7     99.7   99.7   92.5
                F1-score    89.5   92.1   93.4     92.7   93.2   93.4
Array           Accuracy    87.3   91.3   91.9     91.1   92.2   98.6
Usage           Precision   79.9   85.4   86.4     94.9   86.6   96.7
                Recall      99.8   99.7   99.6     99.9   99.7   97.9
                F1-score    88.7   92.0   92.5     91.8   92.7   97.3
Pointer         Accuracy    89.4   94.2   94.6     94.0   94.6   99.7
Usage           Precision   82.7   89.8   90.7     89.6   90.6   98.5
                Recall      99.6   99.7   99.4     99.6   99.5   97.5
                F1-score    90.4   94.5   94.9     94.3   94.9   98.0
Arithmetic      Accuracy    78.9   84.5   88.7     85.4   87.6   95.5
Expression      Precision   72.1   76.5   81.6     77.5   80.6   90.1
                Recall      94.3   99.5   99.7     99.7   98.9   96.3
                F1-score    81.7   86.5   89.8     87.2   88.8   92.8
All             Accuracy    88.4   93.3   93.8     93.0   93.9   97.8
Dataset         Precision   81.4   88.3   89.1     87.8   89.4   94.8
                Recall      99.7   99.8   99.7     99.7   99.7   96.2
                F1-score    89.6   93.7   94.1     93.4   94.3   95.5

Notably, for the VulDeePecker dataset, our proposed XGV-BERT method achieved the highest rating on all four indices, outperforming the remaining five models. Similarly, for the SySeVR dataset, XGV-BERT achieved the highest scores on three out of four metrics, including accuracy, precision and F1-score. Based on this evaluation result, we can see that XGV-BERT gives the best classifier performance among the compared DL models. These results indicate that XGV-BERT, which leverages CodeBERT and GNN, can represent the contextual data to identify the vulnerable code in the software with high performance. This synergy between CodeBERT and GNNs enhances the accuracy and efficacy of software vulnerability detection, enabling the identification of complex vulnerabilities that may remain elusive through conventional methods. In summary, the integration of the CodeBERT model with GNN has proven a promising approach in the realm of software vulnerability detection.

6. Conclusion

In concluding, this study introduces a novel method employing contextual embedding and deep learning techniques for the classification of software programs with vulnerabilities. The newly devised framework, termed XGV-BERT, leverages the sophisticated capabilities of contextualized embeddings through
CodeBERT to delineate the intricate interconnections among code attributes essential for identifying security flaws. Within the realm of source code analysis, such embeddings are pivotal in discerning the nuanced relationships between tokens or words present in code fragments. Such embeddings empower the model to depict this variable uniquely, contingent on its positional context. Furthermore, XGV-BERT integrates CodeBERT with the advanced Graph Convolutional Network (GCN) deep learning paradigm. CodeBERT, a pre-trained Transformer-based model, excels at learning representations of source code, while GNNs possess a remarkable capability to capture semantic relationships within code graphs. A salient feature of GCNs is their adeptness at assimilating contextual intelligence from elaborate graph formations. These networks intrinsically evolve context-sensitive attributes, obviating the necessity for labor-intensive feature crafting. Significantly, GCNs excel in identifying multi-layered contextual associations by analyzing not only the immediate context of a given entity but also the surrounding environment of its neighboring entities and their interconnections. This intrinsic property renders GCNs exceptionally equipped for apprehending multifaceted dependencies within graph-centric data, thereby bolstering their utility across diverse applications. Such an amalgamation augments the depth of learning and information extraction from the multifarious segments inherent in source codes.

Our framework can help cybersecurity experts detect errors and vulnerabilities in software programs automatically with high accuracy. The experimental results on the two benchmark datasets, VulDeePecker and SySeVR, demonstrate the effectiveness of the proposed framework in improving the performance and detection accuracy of DL-based vulnerability detection systems. In the future, we aim to enhance the source code extraction framework. Our primary objective is to refine vulnerability detection granularity. At present, our system operates at the slice-level, focusing on multiple semantically interrelated lines of code. Additionally, we aspire to expand our vulnerability detection capabilities across diverse programming languages, as the current framework is limited to extracting information solely from C/C++ source code.

Acknowledgment

This research was supported by The VNU HCM-University of Information Technology's Scientific Research Support Fund.

References

[1] Y. Zhu, G. Lin, L. Song, J. Zhang, The application of neural network for software vulnerability detection: a review, Neural Computing and Applications 35(2) (2023) 1279–1301.
[2] Common vulnerabilities exposures (cve). URL https://cve.mitre.org/
[3] Y. Shin, A. Meneely, L. Williams, J. A. Osborne, Evaluating complexity, code churn, and developer activity metrics as indicators of software vulnerabilities, IEEE Transactions on Software Engineering, vol. 37, no. 6 (2010) 772–787.
[4] P. Zeng, G. Lin, L. Pan, Y. Tai, J. Zhang, Software vulnerability analysis and discovery using deep learning techniques: A survey, IEEE Access 8 (2020) 197158–197172.
[5] H. Hanif, M. H. N. M. Nasir, M. F. Ab Razak, A. Firdaus, N. B. Anuar, The rise of software vulnerability: Taxonomy of software vulnerabilities detection and machine learning approaches, Journal of Network and Computer Applications 179 (2021) 103009.
[6] T. H. Le, H. Chen, M. A. Babar, A survey on data-driven software vulnerability assessment and prioritization, ACM Computing Surveys 55(5) (2022) 1–39.
[7] T. Marjanov, I. Pashchenko, F. Massacci, Machine learning for source code vulnerability detection: What works and what isn't there yet, IEEE Security & Privacy 20(5) (2022) 60–76.
[8] G. Lin, S. Wen, Q.-L. Han, J. Zhang, Y. Xiang, Software vulnerability detection using deep neural networks: a survey, Proceedings of the IEEE 108(10) (2020) 1825–1848.
[9] R. A. Khan, S. U. Khan, H. U. Khan, M. Ilyas, Systematic mapping study on security approaches in secure software engineering, IEEE Access 9 (2021) 19139–19160.
[10] D. Zou, S. Wang, S. Xu, Z. Li, H. Jin, µVulDeePecker: A deep learning-based system for multiclass vulnerability detection, IEEE Transactions on Dependable and Secure Computing 18(5) (2021) 2224–2236. doi:10.1109/TDSC.2019.2942930.
[11] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever, Improving language understanding by generative pre-training (2018).
[12] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, arXiv preprint arXiv:1810.04805 (2018).
[13] Z. Feng, D. Guo, D. Tang, N. Duan, X. Feng, M. Gong, L. Shou, B. Qin, T. Liu, D. Jiang, M. Zhou, CodeBERT: A pre-trained model for programming and natural languages, in: Findings of the Association for Computational Linguistics: EMNLP 2020, Association for Computational Linguistics, Online, 2020, pp. 1536–1547. doi:10.18653/v1/2020.findings-emnlp.139. URL https://aclanthology.org/2020.findings-emnlp.139
[14] P. Zeng, G. Lin, L. Pan, Y. Tai, J. Zhang, Software vulnerability analysis and discovery using deep learning techniques: A survey, IEEE Access (2020).
[15] S. Wang, T. Liu, L. Tan, Automatically learning semantic features for defect prediction, Proc. 38th Int. Conf. Softw. Eng. (ICSE) (2016) 297–308.
[16] M.-j. Choi, S. Jeong, H. Oh, J. Choo, End-to-end prediction of buffer overruns from raw source code via neural memory networks, arXiv preprint arXiv:1703.02458 (2017).
[17] Z. Li, D. Zou, S. Xu, X. Ou, H. Jin, S. Wang, Z. Deng, Y. Zhong, VulDeePecker: A deep learning-based system for vulnerability detection, Proc. Netw. Distrib. Syst. Secur. Symp. (2018).
[18] S. Liu, M. Dibaei, Y. Tai, C. Chen, J. Zhang, Y. Xiang, Cyber vulnerability intelligence for internet of things binary, IEEE Trans. Ind. Informat. (2020) 2154–2163.
[19] R. Croft, Y. Xie, M. A. Babar, Data preparation for software vulnerability prediction: A systematic literature review, IEEE Transactions on Software Engineering 49(3) (2022) 1044–1063.
[20] Z. Li, D. Zou, S. Xu, H. Jin, Y. Zhu, Z. Chen, SySeVR: A framework for using deep learning to detect software vulnerabilities, arXiv preprint arXiv:1807.06756 (2021).
[21] S. Kim, J. Choi, M. E. Ahmed, S. Nepal, H. Kim, VulDeBERT: A vulnerability detection system using BERT, 2022 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW) (2022).
[22] H. Wang, G. Ye, Z. Tang, S. H. Tan, S. Huang, D. Fang, Y. Feng, L. Bian, Z. Wang, Combining graph-based learning with automated data collection for code vulnerability detection, IEEE Transactions on Information Forensics and Security 16 (2021) 1943–1958. doi:10.1109/TIFS.2020.3044773.
[23] S. Cao, X. Sun, L. Bo, R. Wu, B. Li, C. Tao, MVD: Memory-related vulnerability detection based on flow-sensitive graph neural networks, in: Proceedings of the 44th International Conference on Software Engineering, ICSE '22, Association for Computing Machinery, New York, NY, USA, 2022, p. 1456–1468. doi:10.1145/3510003.3510219. URL https://doi.org/10.1145/3510003.3510219
[24] S. Wang, X. Wang, K. Sun, S. Jajodia, H. Wang, Q. Li, GraphSPD: Graph-based security patch detection with enriched code semantics, in: 2023 IEEE Symposium on Security and Privacy (SP), 2023, pp. 2409–2426. doi:10.1109/SP46215.2023.10179479.
[25] Y. Wu, D. Zou, S. Dou, W. Yang, D. Xu, H. Jin, VulCNN: An image-inspired scalable vulnerability detection system, in: Proceedings of the 44th International Conference on Software Engineering, ICSE '22, Association for Computing Machinery, New York, NY, USA, 2022, p. 2365–2376. doi:10.1145/3510003.3510229. URL https://doi.org/10.1145/3510003.3510229
[26] D. Hin, A. Kan, H. Chen, M. A. Babar, LineVD: Statement-level vulnerability detection using graph neural networks, in: Proceedings of the 19th International Conference on Mining Software Repositories, 2022, pp. 596–607.
[27] V.-A. Nguyen, D. Q. Nguyen, V. Nguyen, T. Le, Q. H. Tran, D. Phung, ReGVD: Revisiting graph neural networks for vulnerability detection, in: Proceedings of the ACM/IEEE 44th International Conference on Software Engineering: Companion Proceedings, ICSE '22, Association for Computing Machinery, New York, NY, USA, 2022, p. 178–182. doi:10.1145/3510454.3516865. URL https://doi.org/10.1145/3510454.3516865
[28] W. Guo, Y. Fang, C. Huang, H. Ou, C. Lin, Y. Guo, HyVulDect: A hybrid semantic vulnerability mining system based on graph neural network, Computers & Security (2022) 102823.
[29] W. Tang, M. Tang, M. Ban, Z. Zhao, M. Feng, CSGVD: A deep learning approach combining sequence and graph embedding for source code vulnerability detection, Journal of Systems and Software 199 (2023) 111623.
[30] Software assurance reference dataset. URL https://samate.nist.gov/SRD/index.php
[31] National vulnerability database. URL https://nvd.nist.gov/
[32] M. Hicks, J. S. Foster, I. Neamtiu, Understanding source code evolution using abstract syntax tree matching, ACM SIGSOFT Software Engineering Notes (2005).
[33] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, Google's neural machine translation system: Bridging the gap between human and machine translation, Computer Science, Computation and Language (2016).
[34] T. N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, Computer Science, Machine Learning (2016).
[35] G. Lin, S. Wen, Q.-L. Han, J. Zhang, Y. Xiang, Software vulnerability detection using deep neural networks: A survey, Proc. IEEE, vol. 108, issue 10 (2020) 1825–1848.
[36] D. Votipka, R. Stevens, E. Redmiles, J. Hu, M. Mazurek, Hackers vs. testers: A comparison of software vulnerability discovery processes, Proc. IEEE Symp. Secur. Privacy (SP) (2018) 374–391.
[37] Record-breaking number of vulnerabilities disclosed in 2017: Report (2018). URL https://www.securityweek.com/record-breaking-number-vulnerabilities-disclosed-2017-report
[38] R. Coulter, Q.-L. Han, L. Pan, J. Zhang, Y. Xiang, Data-driven cyber security in perspective—intelligent traffic analysis, IEEE Transactions on Cybernetics, vol. 50, issue 7 (2019) 3081–3093.
[39] S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: Towards real-time object detection with region proposal networks, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, issue 6 (2017) 1137–1149.
[40] L. Yao et al., Graph convolutional networks for text classification, AAAI 33rd (2019).
2309.14742

SyzTrust: State-aware Fuzzing on Trusted OS Designed for IoT Devices

Qinying Wang∗†, Boyu Chang∗, Shouling Ji∗, Yuan Tian‡, Xuhong Zhang∗, Binbin Zhao§, Gaoning Pan∗, Chenyang Lyu∗, Mathias Payer†, Wenhai Wang∗, Raheem Beyah§
∗Zhejiang University, †EPFL, ‡University of California, Los Angeles, §Georgia Institute of Technology
E-mails: {wangqinying, bychang, sji}@zju.edu.cn, yuant@ucla.edu, zhangxuhong@zju.edu.cn, binbin.zhao@gatech.edu, {pgn, puppet}@zju.edu.cn, mathias.payer@nebelwelt.net, zdzzlab@zju.edu.cn, rbeyah@ece.gatech.edu

Shouling Ji is the corresponding author.

Abstract—Trusted Execution Environments (TEEs) embedded in IoT devices provide a deployable solution to secure IoT applications at the hardware level. By design, in TEEs, the Trusted Operating System (Trusted OS) is the primary component. It enables the TEE to use security-based design techniques, such as data encryption and identity authentication. Once a Trusted OS has been exploited, the TEE can no longer ensure security. However, Trusted OSes for IoT devices have received little security analysis, which is challenging from several perspectives: (1) Trusted OSes are closed-source and have an unfavorable environment for sending test cases and collecting feedback. (2) Trusted OSes have complex data structures and require a stateful workflow, which limits existing vulnerability detection tools.

To address the challenges, we present SYZTRUST, the first state-aware fuzzing framework for vetting the security of resource-limited Trusted OSes. SYZTRUST adopts a hardware-assisted framework to enable fuzzing Trusted OSes directly on IoT devices as well as tracking state and code coverage non-invasively. SYZTRUST utilizes composite feedback to guide the fuzzer to effectively explore more states as well as to increase the code coverage. We evaluate SYZTRUST on Trusted OSes from three major vendors: Samsung, Tsinglink Cloud, and Ali Cloud. These systems run on Cortex M23/33 MCUs, which provide the necessary abstraction for embedded TEEs. We discovered 70 previously unknown vulnerabilities in their Trusted OSes, receiving 10 new CVEs so far. Furthermore, compared to the baseline, SYZTRUST has demonstrated significant improvements, including 66% higher code coverage, 651% higher state coverage, and 31% improved vulnerability-finding capability. We report all discovered new vulnerabilities to vendors and open source SYZTRUST.

1. Introduction

Trusted Execution Environments (TEEs) are essential to securing important data and operations in IoT devices. GlobalPlatform, the leading technical standards organization, has reported a 25-percent increase in the number of TEE-enabled IoT processors being shipped quarterly, year-over-year [29]. Recently, major IoT vendors such as Samsung have designed TEEs for low-end Microcontroller Units (MCUs) [20], [57], [61] and device manufacturers have embedded the TEE in IoT devices such as unmanned aerial vehicles and smart locks, to protect sensitive data and to provide key management services. A TEE is composed of Client Applications (CAs), Trusted Applications (TAs), and a Trusted Operating System (Trusted OS). Among them, the Trusted OS is the primary component to enable the TEE using security-based design techniques, and its security is the underlying premise of a reliable TEE where the code and data are protected in terms of confidentiality and integrity. Unfortunately, implementation flaws in Trusted OSes violate the protection guarantees, bypassing confidentiality and integrity guarantees. These flaws lead to critical consequences, including sensitive information leakage (CVE-2019-25052) and code execution within the Trusted OS context [13], [42]. Once attackers gain control of Trusted OSes, they can launch critical attacks, such as creating a backdoor to the Linux kernel [12], and extracting full disk encryption keys of Android's KeyMaster service [11].

While TEEs are increasingly embedded in IoT devices, the security of Trusted OS for IoT devices remains understudied. Considering the emerging amount of diversified MCUs and IoT devices, manual analysis, such as reverse engineering, requires significant expert efforts and is therefore infeasible at scale. Recent academic works use fuzzing to automate TEE testing. However, unlike Trusted OSes for Android devices, Trusted OSes for IoT devices are built on TrustZone-M with low-power and cost-sensitive MCUs, including NuMicro M23. Thus, Trusted OSes for IoT devices are more hardware-dependent and resource-constrained, complicating the development of scalable and usable testing approaches with different challenges. In the following, we conclude two challenges for fuzzing IoT Trusted OSes.

Challenge I: The inability of instrumentation and restricted environment. Most Trusted OSes are closed-source. Additionally, TEE implementations, especially the Trusted OSes, are often encrypted by IoT vendors, which implies the inability to instrument and monitor the code execution in the secure world. Accordingly, classic feedback-driven fuzzing cannot be directly applied to the scenario of testing TEEs including TAs and Trusted OSes. Existing works either rely on on-device binary instrumentations [15] or require professional domain knowledge and rehosting through proprietary firmware emulation [32] to enable testing and coverage tracking. However, as for the Trusted OSes designed for IoT devices, the situation is more challenging due to the following two reasons. First, IoT devices are severely resource-limited, while existing binary instrumentations are heavy-weight for them and considerably limit their execution speed. Second, as for rehosting, IoT devices are mostly hardware-dependent, rendering the reverse engineering and implementation effort for emulated software and hardware components unacceptable. In addition, rehosting faces the limitation of the inaccuracy of modeling the behavior of hardware components. To our best knowledge, the only existing TEE rehosting solution PartEmu [32] is not publicly available and does not support the mainstream TEE based on Cortex-M MCUs designed for IoT devices.

Challenge II: Complex structure and stateful workflow. Trusted OSes for IoT devices are surprisingly complex. Specifically, Trusted OSes implement multiple cryptographic algorithms, such as AES and MAC, without underlying hardware support for these algorithms as would be present on Cortex-A processors. To implement these algorithms in a secure way, Trusted OSes maintain several state diagrams to store the execution contexts and guide the execution workflow. To explore more states of a Trusted OS, a fuzzer needs to feed syscall sequences in several specific orders with different specific state-related argument values. Without considering such statefulness of Trusted OSes, coverage-based fuzzers are unlikely to explore further states, causing the executions to miss the vulnerabilities hidden in a deep state. Unfortunately, existing fuzzing techniques lack state-awareness for Trusted OSes. Specifically, they have trouble understanding which state a Trusted OS reaches since there are no rich-semantics response codes to indicate it. In addition, due to the lack of source code and the inability of instrumentation, it is hard to infer and extract the state variables by program analysis.

Our solution. To address the above key challenges, we propose and implement SYZTRUST, the first fuzzing framework targeting Trusted OSes for IoT devices, supporting state and coverage-guided fuzzing. Specifically, we propose an on-device fuzzing framework and leverage a hardware-in-the-loop approach. To support in-depth vulnerability detection, we propose a composite feedback mechanism that guides the fuzzer to explore more states and increase code coverage.

SYZTRUST necessitates diverse contributions. First, to tackle Challenge I, we propose a hardware-assisted fuzzing framework to execute test cases as well as collect code coverage feedback. Specifically, we decouple the execution engine from the rest of the fuzzer to enable directly executing test cases in the protective TEE secure world on the resource-limited MCU. To support coverage tracking, we present a selective trace collection approach instead of costly code instrumentation to enable tracing instructions on a target MCU. In particular, we leverage the ARM Embedded Trace Macrocell (ETM) feature to collect raw trace packets. IoT devices are resource constrained, which makes storing ETM traces on board difficult and limits the fuzzing speed. Additionally, the TEE internals are complicated and have multiple components, which generate noisy trace packets. Therefore, we offload heavy-weight tasks to a PC and carefully scheduled the fuzzing subprocesses in a more parallel way. We also present an event- and address-based trace filter to handle the noisy trace packets that are not executed by the Trusted OS. Additionally, we adopt an efficient code coverage calculation algorithm directly on the raw packets.

Second, as for the Challenge II, the vulnerability detection capability of coverage-based fuzzers is limited, and a more effective fuzzing strategy is required. Therefore, we propose a composite feedback mechanism, which enhances code coverage with state feedback. To be specific, we utilize state variables that control the execution contexts to present the states of a Trusted OS. However, such state variables are usually stored in closed-source and customized data structures within Trusted OSes. Existing state variable inference methods either use explicit protocol packet sequences [45] or require source codes of target software [35], [68], which are unavailable for Trusted OSes. Therefore, to identify the state-related members from those complex data structures, SYZTRUST collects some heuristics for Trusted OS and utilizes them to perform an active state variable inference algorithm. After that, SYZTRUST monitors the state variable values during the fuzzing procedure as the state feedback.

Finally, SYZTRUST is the first end-to-end solution capable of fuzzing Trusted OSes for IoT devices. Moreover, the design of the on-device fuzzing framework and modular implementation make SYZTRUST more extensible. With several MCU-specific configurations, SYZTRUST scales to Trusted OSes on different MCUs from different vendors.

Evaluation. We evaluate SYZTRUST on real-world Trusted OSes from three leading IoT vendors: Samsung, Tsinglink Cloud, and Ali Cloud. The evaluation result shows that SYZTRUST is effective at discovering new vulnerabilities and exploring new states and codes. As a result, SYZTRUST has discovered 70 new vulnerabilities. Among them, vendors confirmed 28, and assigned 10 CVE IDs. The vendors are still investigating others. Compared to state-of-the-art approaches, SYZTRUST finds more vulnerabilities, hits 66% higher code branches, and 651% higher state coverage.

Summary and contributions.
• We propose SYZTRUST, the first fuzzing framework targeting Trusted OSes for IoT devices, supporting effective state and code coverage guided fuzzing. With a carefully designed hardware-assisted fuzzing framework and a composite feedback mechanism, SYZTRUST is extensible and configurable to different IoT devices.
• With SYZTRUST, we evaluate three popular Trusted OSes on three leading IoT vendors and detect several previously unknown bugs. We have responsibly reported these vulnerabilities to the vendors and got acknowledged from vendors such as Samsung. We release SYZTRUST
by monitoring instruction and data buses on MCU with as an open-source tool for facilitating further studies at a low-performance impact. However, the Trusted OS for https://github.com/SyzTrust. 22.2. Debug Probe Normal World Secure World Client Applications (CAs) Trusted Applications (TAs) A debug probe is a special hardware device for low- level control of ARM-based MCUs, using DAP (Debug CA CA CA TA TA TA Access Port) provided by the ARM CoreSight Architecture [25]. It bridges the connection between a computer and an TEE Client API TEE Internal API MCU and provides full debugging functionality, including watchpoints, flash memory breakpoints, memory, as well as Rich OS Switch Trusted OS register examination or editing. In addition, a debug probe (e.g., FreeRTOS) Instructions can record data and instruction accesses at runtime through the ARM ETM feature. ETM is a subsystem of ARM Figure 1: Structure of TrustZone-M based TEE. Coresight Architecture and allows for traceability, whose function is similar to Intel PT. The ETM generates trace elements for executed signpost instructions that enable 2. Background reconstruction of all the executed instructions. Utilizing the above features, the debug probe has shown its effectiveness in tracing and debugging malware [49], unpacking Android apps [65], or fuzzing Linux peripheral drivers [37]. 2.1. TEE and Trusted OS 3. Threat Model A TEE is a secure enclave on a device’s main processor Our attacker tries to achieve multiple goals: gaining that is separated from the main OS. It ensures the confiden- control over, extracting confidential information from, and tiality and integrity of code and data loaded inside it [64]. causing crashes in other Trusted Applications (TAs) hosted ForstandardizingtheTEEimplementations,GlobalPlatform on the same Trusted OS or the Trusted OS itself. 
We (GP)hasdevelopedanumberofspecifications.Forinstance, considertwopracticalattackscenarios.First,anattackercan it specifies the TEE Internal Core API implemented in the exploitourdiscoveredvulnerabilitiesbyprovidingcarefully Trusted OS to enable a TA to perform its security functions crafted data to a TA. They can utilize a malicious Client [30]. However, it is difficult for vendors to implement a Application (CA) to pass the crafted data to a TA. For Trusted OS correctly since there are lots of APIs with instance, in mTower, CVE-2022-38511 (ID 1 in Table 1) complexandstatefulfunctionsdefinedintheGPTEEspec- can be triggered by passing a large key size value from a ifications. For instance, the TEE Internal Core API defines CA to a TA. Second, an attacker can exploit our discovered six types of APIs, including cryptographic operations APIs vulnerabilities by injecting a malicious TA into the secure supportingmorethan20complexcryptographicalgorithms. world. They can do this through rollback attacks or electro- In addition, the TEE Internal Core API also requires that a magnetic fault injections (CVE-2022-47549). Trusted OS shall implement state diagrams to manage the operation.Thetwoaforementionedfactorsmakeitchalleng- 4. Design ing for device vendors to develop a secure Trusted OS. ARMTrustZonehasbecomethede-factohardwaretech- Figure 2 gives an overview of SYZTRUST’s design. nologytoimplementTEEsinmobileenvironments[17]and SYZTRUST includestwomodules:thefuzzingengineonthe has been deployed in servers [33], low-end IoT devices [8], Personal Computer (PC) and the execution engine on the [53], and industrial control systems [27]. For IoT devices, MCU. The fuzzing engine generates and sends test cases theCortex-M23/33MCUs,introducedbytheARMCommu- to the MCU via the debug probe. The execution engine nity in 2016, are built on new TrustZone for ARMv8-M as executes the received test case on the target Trusted OS. security foundations for billions of devices [22]. 
TrustZone At a high level, we propose a hardware-assisted fuzzing forARMv8-MCortex-Mhasbeenoptimizedforfastercon- framework and a composite feedback mechanism to guide textswitchandlow-powerapplicationsandisdesignedfrom the fuzzer. Given the inaccessible environment of Trusted the ground up instead of being reused from Cortex-A [54]. OSes, we design a TA and CA pair as a proxy to the As Figure 1 shows, instead of utilizing a secure monitor TrustedOSandutilizeadebugprobetoaccesstheMCUfor in TrustZone for Cortex-A, the division of the secure and feedback collection. To handle the challenge of limited re- normal world is memory-based, and transitions take place sources,wedecoupletheexecutionenginefrom SYZTRUST automatically in the exception handle mode. Based on the and only run it on the MCU. This allows SYZTRUST, TrustZone-M, IoT vendors provide Trusted OS binaries to with its resource-demanding core components, to run more thedevicemanufacturers andthenthedevicemanufacturers effectively on a PC. To handle the statefulness of Trusted produce devices with device-specific TAs for the end users. OSes, we include state feedback with code coverage in the This paper focuses on the Trusted OSes from different IoT compositefeedback.Statevariablesrepresentinternalstates, vendors and provides security insights for device manufac- and our inference method identifies them in closed-source turers and end users. Trusted OSes. 3State Variable Trusted OS Inference State Variables Fuzzing Engine (on PC) Execution Engine (on MCU) Test cases Initial Seeds Manager Hardware-assisted Controller Test cases Proxy CA & TA Code coverage Trace A Collector composite feedback State coverage State Variables Feedback Trusted OS Monitor Debug Probe Fuzzing Loop Test cases Syscall Feedback Templates Figure 2: Overview of SYZTRUST. SYZTRUST consists of a fuzzing engine running on a PC, an execution engine running on an MCU, and a debug probe to bridge the fuzzing engine and the execution engine. 
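As Section 2.1 notes, the GP TEE Internal Core API forces a Trusted OS to track each cryptographic operation through a state diagram. The sketch below is a minimal, self-contained model of such a lifecycle; the function and state names are invented for illustration and are not taken from the GP specification or any vendor's Trusted OS.

```c
#include <assert.h>

/* Hypothetical four-state lifecycle of a cryptographic operation handle.
 * Real Trusted OSes keep comparable state in closed-source structs such
 * as TEE_OperationHandle. */
typedef enum { OP_FREE, OP_ALLOCATED, OP_KEY_SET, OP_ACTIVE } op_state_t;

typedef struct { op_state_t state; } operation_t;

static int op_allocate(operation_t *op) {
    if (op->state != OP_FREE) return -1;   /* reject invalid transition */
    op->state = OP_ALLOCATED;
    return 0;
}

static int op_set_key(operation_t *op) {
    if (op->state != OP_ALLOCATED) return -1;
    op->state = OP_KEY_SET;
    return 0;
}

static int op_cipher_init(operation_t *op) {
    if (op->state != OP_KEY_SET) return -1;
    op->state = OP_ACTIVE;
    return 0;
}

static int op_cipher_update(operation_t *op) {
    /* Only reachable after allocate -> set_key -> init, in that order. */
    return (op->state == OP_ACTIVE) ? 0 : -1;
}
```

A fuzzer that tracks only branch coverage sees the same early-return edge for every out-of-order call and gains nothing from finding the one ordering that reaches the success path of op_cipher_update; that blind spot is what the state feedback in the composite feedback mechanism is meant to close.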
The main workflow is as follows. First, SYZTRUST accepts two inputs, the initial seeds and the syscall templates. Second, the manager generates test cases through two fuzzing tasks: generating a new test case from scratch based on the syscall templates, or mutating a selected seed. Then, the generated test cases are delivered to the execution engine on the MCU through the debug probe. The execution engine executes these test cases to test the Trusted OS. Meanwhile, the debug probe selectively records the executed instruction trace, which is processed by our trace collector as an alternative to code coverage. Additionally, the state variable monitor tracks the values of a set of target state variables to calculate state coverage via the debug probe. Finally, the code and state coverage is fed to the manager as composite feedback to guide the fuzzer. The above procedures are iteratively executed, and we denote the iterative workflow as the fuzzing loop.

4.1. Inputs

Syscall templates and initial seeds are fed to SYZTRUST.

Syscall templates. Given the provided syscall templates, SYZTRUST is more likely to generate valid test cases by following the format defined by the templates. Syscall templates define the syscalls with their argument types as well as special types that provide the semantics of an argument. For instance, resources represent the values that need to be passed from an output of one syscall to an input of another syscall, which captures the data dependency between syscalls. We manually write syscall templates for the Internal Core API of the GP TEE in Syzlang (SYZKALLER's syscall description language [31]). However, a value with a complex structure type, which is common in Trusted OSes, cannot be used as a resource value due to a design limitation of Syzlang. To handle this limitation, we extend the syscall templates with helper syscalls, which accept values of complex structures and output their pointer addresses as resource values. In total, we have 37 syscall templates and 8 helper syscalls, covering all the standard cryptographic APIs.

Initial seeds. Since syscall templates are used to generate syntactically valid inputs, SYZTRUST requires initial seeds to collect some valid syscall sequences and arguments and to speed up the fuzzing procedure. Given initial seeds, SYZTRUST continuously generates test cases to test a target Trusted OS by mutating them. However, there is no off-the-shelf seed corpus for testing Trusted OSes, and constructing one is not trivial, as the syscall sequences with arguments should follow critical crypto constraints, and manually constructing valid sequences would need much effort. For example, a valid seed includes a certain order of syscalls to initiate an encryption operation, and the key and the encryption mode should be consistent to support encryption. Fortunately, OP-TEE [39], an open-source TEE implementation for Cortex-A, offers a test suite, which can be utilized to construct seeds for most Trusted OSes. Specifically, we automatically inject code into the TAs provided by the test suite to log the syscall names and their arguments. Then we automatically convert those logs into seeds following the format required by SYZTRUST. In addition, we automatically add data dependencies between syscalls in the seed corpus. Accordingly, we identify the return values from syscalls that are input arguments of other syscalls and add helper syscalls for the identified values. Although the syscall template and seed corpus construction require extra work, it is a one-shot effort and can be reused in further studies of Trusted OS security.

4.2. A Hardware-assisted Framework

Our hardware-assisted fuzzing framework enables the generated test cases to be executed in the protective and resource-constrained trusted environments on IoT devices. To handle the constrained resources in Challenge I, we decouple the execution engine from the other components of SYZTRUST to ease the execution overhead on the MCU. As shown in Figure 2, the fuzzing engine, which includes most core components of SYZTRUST and requires more computing resources and memory space, runs on a PC, while only the execution engine runs on the MCU. Thus, SYZTRUST performs the heavy tasks, including seed preservation, seed selection, type-aware generation, and mutation, on the PC.

As for the protected execution environment in Challenge I, we design a pair of CA and TA as the execution engine that executes the test cases to test the Trusted OSes. We utilize a debug probe to bridge the connection between the fuzzing engine and the execution engine. In the following, we introduce the design of delivering test cases and the pair of CA and TA. First, as shown in Figure 2, the debug probe transfers test cases and feedback between the fuzzing engine and the execution engine. Specifically, a debug probe can directly access the memory of the target MCU. Thus, the debug probe accepts generated test cases from the manager and writes them to a specific memory region of the target MCU. As for test case transportation, we design a serializer to encode the test cases into minimal binary data, denoted as payloads, before sending them to the MCU. A payload includes a sequence of syscalls with their syscall names, syscall argument types, and syscall argument values. In addition, SYZTRUST denotes syscall names as ordinal numbers to minimize the transportation overhead. Second, the pair of CA and TA plays the role of a proxy to handle the test cases. Accordingly, the CA monitors the specific memory region in the MCU. If a payload is written, the CA reads the binary data from the specific memory region and delivers it to the TA. Then the TA deserializes the received binary data and executes the syscalls one by one to test a Trusted OS implementation. Accordingly, the TA invokes specific syscalls according to the ordinal numbers and fills them in with arguments extracted from the payload. To this end, the TA hardcodes the syscall declaration code that is manually prepared. The manual work to prepare the declaration code is a one-shot effort that can be easily done by referring to the GP TEE Internal Core API specification.

4.3. Selective Instruction Trace Tracking

This section introduces how SYZTRUST obtains the code coverage feedback when testing the Trusted OSes. To handle the inability to instrument in Challenge I, we present a selective instruction trace track, which is implemented in the trace collector in the fuzzing engine. The trace collector controls the debug probe to collect traces synchronously while the test cases are being executed. After completing a test case, it calculates the code coverage and delivers it to the manager as feedback.

In the selective instruction trace track, we utilize the ETM component to enable non-invasive monitoring of the execution context of target Trusted OSes. The overall workflow is as follows. (1) Before each payload is sent to the MCU, the hardware-assisted controller resets the target MCU via the debug probe. (2) Once the execution engine reads a valid payload from the specific memory space and starts to execute it, the hardware-assisted controller starts the ETM component to record the instruction trace for each syscall from the payload. Specifically, while the syscalls are executing, the generated instruction trace is synchronously recorded and delivered to the fuzzing engine via the debug probe. (3) After completing a payload, the hardware-assisted controller computes the coverage from the instruction traces for the manager.

    extern int32_t start_event;
    extern int32_t stop_event;

    void TA_ProcessEachPayload() {
        RecvPayloadFromCA();
        DecodePayload();
        // Start invoking syscalls one by one
        do {
            // Data access event is triggered, start tracing
            start_event++;
            InvokeOneSyscall();
            // Data access event is triggered, stop tracing
            stop_event++;
            if (AllSyscallsExecuted()) {
                break;
            }
        } while (1);
    }

Listing 1: A code snippet containing the main execution logic from our designed TA.

However, the IoT Trusted OSes are highly resource-constrained, making locally storing ETM traces infeasible and limiting the speed of local fuzzing. Therefore, SYZTRUST utilizes the debug probe (see Section 4.2) to stream all ETM trace data to the host PC in real time while the target system is running. Moreover, SYZTRUST enables parallel execution of test case generation, transmission, and execution, as well as coverage calculation, thereby boosting the speed of fuzzing.

For code coverage, we cannot directly use the raw instruction trace packets generated by the ETM as a replacement for branch coverage due to two issues. First, there is a gap between the raw ETM trace packets and the instruction traces generated by the Trusted OS. The TEE internals are complicated, and the ETM component records instruction traces generated by all the software running on the MCU, including the CA, the rich OS, the TA, and the Trusted OS. Thus, we design a selective instruction trace collection strategy to generate fine-grained traces. ARM ETM allows enabling/disabling trace collection when corresponding events occur. We configure the different events via the Data Watchpoint and Trace Unit (DWT) hardware feature to filter out noisy packets. In SYZTRUST, we aim to calculate the code coverage triggered by every syscall in a test case and generated by the Trusted OS. Thus, as shown in Listing 1, we configure the event-based filters by adding two data write access event conditions. The two event conditions start ETM tracing before invoking a syscall and stop ETM tracing after completing the syscall, respectively. In addition, we specify that the address range of the secure world shall be included in the trace stream to filter out noisy trace packets generated from the normal world.

Second, there is a gap between the raw ETM trace packets and the quantitative coverage results. To precisely recover the branch coverage information, we would have to decode the raw trace packets and map them to disassembled binary instruction addresses. After that, we could recover the instruction traces and construct the branch coverage information [69], [65]. However, disassembling code introduces significant run-time overhead as it incurs a high computation cost [18]. Thus, we calculate the branch coverage directly using the raw trace packets [37]. At a high level, this calculation mechanism utilizes a special basic block generated with Linear Code Sequence and Jump (LCSAJ) [66] to reflect any change in basic block transitions. LCSAJ basic blocks consist of the basic address of a raw ETM branch packet and a sequence of branch conditions. The branch conditions indicate whether the instructions following the basic address are executed. This mechanism performs several hash operations on the LCSAJ basic blocks to transform them into random IDs, which we utilize as branch coverage feedback.

4.4. State Variable Inference and Monitoring

Here we introduce how to identify the internal state of the Trusted OS and how to obtain the state coverage feedback through the state variable monitor component. In particular, the state variable inference provides the state variable monitor with the address ranges of the inferred state variables. Then the state variable monitor tracks the values of these state variables synchronously when the execution
engine executes test cases. After completing a test case, Figure 3: State variable inference. the state variable monitor calculates the state coverage and deliversittothemanagerasfeedback.Belowarethedetails about state variable inference and monitoring. implementations. This method is based on the assumption According to the GP TEE Core API specification, that the state variables in the above two handles will have Trusted OSes have to maintain several complex state ma- different values according to different cryptographic oper- chines to achieve the cryptographic algorithm in a highly ation configurations. For instance, in Samsung’s Trusted secure way. To explore all states of Trusted OS, the OS implementation, after executing several syscalls to set fuzzer needs to feed syscall sequences in several specific cryptographic arguments, the value of a state variable from orders with different specific state-related argument val- TEE_OperationHandle changes from 0 to 1, which ues. Coverage-based fuzzers are unlikely to explore further means a cryptographic operation is initialized. states, causing the executions to miss some vulnerabilities Based on the assumption, SYZTRUST uses a test har- hidden in a deep state. For instance, a syscall sequence ness to generate and execute test cases carefully. Then achieves a new DES cryptographic operation configuration SYZTRUST records the buffers of the two handles and ap- by filling different arguments, whose code coverage may plies state variable inference to detect the address ranges of be the same as the syscall sequence to achieve a new statevariablesintherecordedbuffers,asshowninFigure3. AEScryptographicoperationconfiguration.Preservingsuch SYZTRUST firstfiltersrandomlychangeablebytesequences syscall sequences as a seed and further exploring them that store pointers, encryption keys, and cipher text. 
Specif- will achieve new cryptographic operations and gain new ically, SYZTRUST conducts a 24-hour basic fuzzing proce- code coverages. However, a coverage-based fuzzer may dure on the initial seeds from Section 4.1 and collects the discard such syscall sequences that seem to have no code buffersofthetwohandles.SYZTRUSTthenparsesthebuffer coverage contribution but trigger new internal states. Thus, intofour-bytesequencestorecognizechangeablevalues.Af- SYZTRUST additionally adopts state coverage as feedback terthat,SYZTRUSTcollectsalldifferentvaluesforeachbyte to handle the statefulness of Trusted OSes. sequenceinthefuzzingprocedureandcalculatesthenumber State variable inference. By referring to the GP of times that these different values occur. SYZTRUST con- TEE Internal Core API specification and several open- sidersthebytesequencesthatoccurover80timesasbuffers source TEE implementations from Github, we find that and excludes them from the state variables. For the remain- Trusted OSes maintain two important structures, includ- ing byte sequences, SYZTRUST applies the following infer- ing the struct named TEE_OperationHandle and encetoidentifystatevariables.Inourobservations,thecryp- the struct named TEE_ObjectHandle, which present tographic operation configurations are determined by the the internal states and control the execution context. We operation-related arguments, including the operation mode further find that several vital variables (with names such (encryption, decryption, or verification) and cryptographic as operationState and flags) in the two com- algorithm (AES, DES, or MAC). The cryptographic oper- plex structs determine the Trusted OS’ internal state. ation configurations are also determined by the operation- Thus, we utilize the value combinations of state variables related syscall sequences. 
We identify such arguments and to present the states of Trusted OS and track all state- syscallsbyreferringtoGPTEEInternalCoreAPIspecifica- related variables to collect their values. Then, we con- tion and conclude the operation-related syscalls include the sider a new value of the state variable combination as a syscalls that accept the two handles as an input argument new state. However, the TEE_OperationHandle and and are specified to allocate, init, update, and finish a cryp- TEE_ObjectHandle implementations are customized tographic operation, such as TEE_AllocateOperation andclosesource,makingrecognizingthestatevariablesand and TEE_AllocateTriansientObject. SYZTRUST their addresses challenging. performs mutated seeds that include the operation-related To handle it, we come up with an active infer- syscalls and records the buffers of the two handles. These ence method to recognize the state variables in the bytesequencesthatvarywithcertainsyscallsequenceswith TEE_OperationHandle and TEE_ObjectHandle certain arguments are considered state variables. Finally, 6State Hashes 𝑆𝑡(cid:2869) 𝑆𝑡(cid:2870) . . . 𝑆𝑡(cid:2898) 𝑆𝑑(cid:2869) 𝑆𝑑(cid:2870) Seed Buckets 𝑆𝑑(cid:2871) 2022/12/3 11 . . . preserves and prioritizes seeds that trigger new code or Hit Times 𝐻𝑖𝑡(cid:2869) 𝐻𝑖𝑡(cid:2870) . . . 𝐻𝑖𝑡(cid:2898) states. We design two maps as the seed corpus to preserve HitMap seeds according to code coverage and state feedback. With suchseedcorpus,SYZTRUSTthenperiodicallyselectsseeds SeedMap from the hash table to explore new codes and new states. This section introduces how SYZTRUST fine-tunes the evo- lution through a novel composite feedback mechanism, in- A Seed Node cluding seed preservation and seed selection strategy. •A Syscall Sequence Seed preservation. 
Given the composite feedback with Arguments •Branch Coverage mechanism, we thus provide two maps as the seed corpus 𝑆𝑑(cid:3014) to store seeds that discover new state values and new code, respectively.AsshowninFigure4,intwomaps,SYZTRUST
Figure 4: Seed corpus in SYZTRUST. calculates the hash of combination values of state variables as the keys. SYZTRUST maps the state hashes to their hit times in the HitMap and maps the state hashes to seed SYZTRUST outputs the address ranges of these byte se- buckets in the SeedMap. In the SeedMap, for a certain quences and feeds them to the hardware-assisted controller. state hash, the mapping seed bucket contains one or several State variable monitor. The identified state variables seedsthatcanproducethematchingstatevariablevalues.To and their address ranges are then used as configurations construct the SeedMap, SYZTRUST handles the following of the hardware-assisted controller. SYZTRUST utilizes the two situations. First, if a syscall from a test case triggers a debug probe to monitor the state variables. When a new new value combination of state variables, SYZTRUST adds TEE_OperationHandle or TEE_ObjectHandle is a new state hash in the SeedMap. Then, SYZTRUST maps allocated, SYZTRUST records its start address. Given the the new state hash to a new seed bucket. To construct the start address of the two handles and address ranges of state seedbucket,SYZTRUSTutilizesthetestcasewithitsbranch variables, SYZTRUST calculatesthememoryrangesofstate coverage to construct a seed node and appends the seed variablesanddirectlyreadsthememoryviathedebugprobe. bucket with the newly constructed seed node. Second, if Inthefuzzingloop,foreachsyscall,besidethebranchcov- a syscall from a test case triggers a new code coverage, erage, SYZTRUST records the hash of combination values SYZTRUST adds a new seed node in a seed bucket. Specifi- of the state variables as the state coverage. cally, SYZTRUST calculates the hash of combination values ofstatevariablesproducedbythesyscall.Then,SYZTRUST 4.5. 
Fuzzing Loop looks up the seed bucket that is mapped to the state hash and appends a new seed node that contains the test case After collecting code and state coverage feedback, the and its branch coverage to the seed bucket. Noted, a test fuzzer enters the main fuzzing loop implemented in the case could be stored in multiple seed buckets if it triggers manager. SYZTRUST has a similar basic fuzzing loop to multiple feedbacks at the same time. As for the HitMap, SYZKALLER. It schedules two tasks for test case gener- eachtimeaftercompletingatestcase,SYZTRUSTcalculates ation: generating a new test case from syscall templates all the state hashes it triggers and updates the hit times for or mutating an existing one based on selected seeds. As these state hashes according to their hit times. for the generation task, SYZTRUST faithfully borrows the Seed selection. Given the preserved seed corpus, generation strategy from SYZKALLER. As for the muta- SYZTRUST applies a special seed selection strategy to tion task, SYZTRUST utilizes a composite feedback mech- improve the fuzzing efficiency. Algorithm 1 shows how anism to explore rarely visited states while increasing code SYZTRUST selectsseedsfromthecorpus.First, SYZTRUST coverage. Notably, SYZTRUST does not adopt the Triage chooses a state and then chooses a seed from the map- scheduling tasks from SYZKALLER due to two key reasons. ping seed buckets according to the state. In state selection, First, SYZTRUST fuzzes directly on the MCU, enabling SYZTRUST is more likely to choose a rarely visited state swift MCU resets after each test case, thus mitigating false by a weighted random selection algorithm. The probability positives in branch coverage. Branch coverage validation of choosing a seed is negatively correlated with its hit through triaging is, therefore, unnecessary. Second, exe- times. 
Thus, we assign each seed a weight value, which cuting test cases for Trusted OSes (averaging 30 syscalls) is the reciprocal of its hit times. Finally, The probability of is time-consuming, incurring a significant overhead for choosing a seed is equal to the proportion of its weight in SYZTRUST to minimize a syscall sequence by removing thesumofallweights.Inseedselection,SYZTRUSTismore syscalls one by one. Appendix A further experimentally likely to choose a seed with high branch coverage. To this demonstrates our intuition for the new scheduling tasks. end,SYZTRUSTchoosesaseedbasedonaweightedrandom selectionalgorithm,andtheprobabilityofchoosingacertain 4.6. Composite Feedback Mechanism seed is equal to the proportion of its branch coverage in all SYZTRUST adopts a novel composite feedback mecha- coverages.Notably,theprobabilitiesofchoosingastateand nism, leveraging both code and state coverage to guide mu- seedaredynamicallyupdatedsincethehittimesandthesum tationtaskswithinthefuzzingloop.Specifically,SYZTRUST of coverage is updated in the fuzzing procedure. 7Algorithm 1: Seed Selection Algorithm structuresintheTrustedOSandtheuseofourstatevariable inference to collect address ranges. These structures can Input: C, SeedMap that maps state hashes to a set be extracted from the documents and header files. We rely of seeds on two heuristics to help extract them. First, state-related Input: H, HitMap that maps state hashes to hit data structures usually have common names, e.g., related to times contextorstate.Second,thestatestructureswillbethein- Output: s, the selected seed putsandoutputsofseveralcryptooperation-relatedsyscalls. 1 Sum W ←0; For example, on Link TEE Air, a pointer named context 2 Map W = initMap() //maps state hashes to their is used among cryptographic syscalls such as tee aes init weights ; and tee aes update, and can be further utilized to infer 3 for each key ∈H do state variables. 
This information can be obtained from the 4 t← getValue(H,key); 5 Sum W ←Sum W +t−1; crypto.h header file. 6 end 7 for each key ∈H do 5. Implementation
8 t← getValue(H,key); 9 Map W ←Map W ∪(key,t−1/Sum W); We have implemented a prototype of SYZTRUST on 10 end top of SYZKALLER. We replaced SYZKALLER’s execution 11 St← WeightedRandom StateSelection(Map W); engine with our custom CA and TA pair, integrating our 12 seedSets← getValue(C,St); extendedsyscalltemplates,eliminatingSYZKALLER’striage 13 s← WeightedRandom SeedSelection(seedSets); scheduling task, and implementing our own seed preserva- tion and selection strategy. Sections 4.2, 4.5, and 4.6 detail these adaptations. 4.7. Scope and Scalability of SYZTRUST. Below are details about our implementations. (1) As for theoverallfuzzingframework,weusetheSEGGERJ-Trace SYZTRUST targets Trusted OSes provided by IoT ven- Pro debug probe to control the communication between dors and assumes that (i) a TA can be installed in the the fuzzing engine and the execution engine, as shown in Trusted OS, and (ii) target devices have ETM enabled. Figure 5. The pair of CA and TA is developed following These assumptions align with typical IoT Trusted OS sce- the GP TEE Internal Core API specification and is loaded narios. First, given that IoT device manufacturers often into the MCU following the instructions provided by the needtoimplementdevice-specificTAs,TrustedOSbinaries IoT vendors. To control the debug probe, we developed a supplied by IoT vendors generally allow TA installation. hardware-assisted controller based on the SEGGER J-Link Second, SYZTRUST tests IoT Trusted OSes by deploying SDK. The hardware-assisted controller receives commands them on development boards where ETM is enabled by fromthemanagerandsendsfeedbackcollectedontheMCU default. 
to the manager via socket communications. (2) For the selective instruction trace track, SYZTRUST integrates the ETM tracing component of the SEGGER J-Trace Pro and records the instruction traces from Trusted OSes non-invasively. The raw ETM packet decoder and branch coverage calculation are accomplished in the hardware-assisted controller. (3) For the state variable inference and monitoring, SYZTRUST follows the testing strategy in Section 4.4 and utilizes an RTT component [2] from the SEGGER J-Trace Pro to record related state variable values and deliver them to the fuzzing engine. The RTT component accesses memory in the background with high-speed transmission.

Moreover, SYZTRUST can directly test the Trusted OSes following the GP TEE Internal Core API specification with MCU-specific configurations. It has built-in support for testing on alternative Trusted OSes, including proprietary ones. Appendix B extends the discussion with concrete data. SYZTRUST can support other Trusted OSes. To test a new Trusted OS on a different MCU, SYZTRUST requires MCU configurations, including the address ranges of specific memory for storing payloads, the addresses of the data events for the event-based filter, and the address ranges of secure memory for the address-based filter. We developed tooling in the CA and TA to automatically help the analyst obtain all required addresses. In addition, by following the development documents from IoT TEE vendors, the CA and TA may require slight adjustments to meet the format required by the new Trusted OSes and be loaded into the Rich OS and Trusted OS.

Several tools help us analyze the root cause of detected crashes. We utilize CmBacktrace, a customized backtrace tool [70], to track and locate error codes automatically. Additionally, we develop TEEKASAN based on KASAN [34] and MCUASAN [59] to help identify out-of-bound and use-after-free vulnerabilities. To extend SYZTRUST to proprietary Trusted OSes, we
augment the syscall templates and the API declarations in our designed TA and test harness with the new version of these customized APIs. This can be done by referring to the API documents provided by IoT vendors, which is simple and requires minimum effort. To enable the state-aware feature, we need expert analysis of the state-related

We integrate TEEKASAN with lightweight compiler instrumentation and develop shadow memory checking for bug triaging on the open-source Trusted OSes. Since TEEKASAN only analyzed a small number of vulnerabilities due to the limitation of the instrumentation tool and most Trusted OSes are closed-source, we manually triage the remaining vulnerabilities.

[Figure 5 diagram: MCU with power, USB, and SWD/ETM interfaces connected to the debug probe; pipeline stages: Reset, Transfer, Execution, CovCalc]
Figure 5: SYZTRUST setup in fuzzing a target TEE implementation on the Nuvoton M2351 board. The debug probe accesses memory and tracks instruction traces on the MCU via the SWD/ETM interface. It also delivers data to the PC and receives commands from the PC via a USB interface.

6. Evaluation

In this section, we comprehensively evaluate SYZTRUST, demonstrating its effectiveness in fuzzing IoT Trusted OSes. First, we evaluate the overhead breakdown of SYZTRUST. Second, we conduct experiments exploring the effectiveness of our designs. Third, we examine SYZTRUST's state variable inference capabilities. Finally, we apply SYZTRUST on fuzzing three real-world IoT Trusted OSes and introduce their vulnerabilities. In summary, we aim to answer the following research questions:
RQ1: What is the overhead breakdown of SYZTRUST? (Section 6.2)
RQ2: Is SYZTRUST effective for fuzzing IoT Trusted OSes? (Section 6.3)
RQ3: Is the state variable inference method effective, and are the inferred state variables expressive? (Section 6.4)
RQ4: How vulnerable are the real-world Trusted OSes from different IoT vendors, from SYZTRUST's results? (Section 6.5)

6.1. Experimental Setup

Target Trusted OSes.
We evaluate SYZTRUST on three Trusted OSes designed for IoT devices: mTower, TinyTEE,
and Link TEE Air. The reasons for selecting these targets are as follows. mTower and TinyTEE both provide the standard APIs following the GP TEE Internal Core API specification. In addition, they are developed by two leading IoT vendors, Samsung and Tsinglink Cloud (which serves more than 30 downstream IoT manufacturers, including China Telecom and Panasonic), respectively. Link TEE Air is a proprietary Trusted OS developed by Ali Cloud, on which SYZTRUST evaluates its built-in support of closed-source and proprietary Trusted OSes. Moreover, the three targets have been adopted in a number of IoT devices, and their security vulnerabilities have a practical impact. In this paper, we evaluate mTower v0.3.0, TinyTEE v3.0.1, and Link TEE Air v2.0.0, each of which was the latest version during our experiments.

Figure 6: Overhead breakdown of SYZTRUST.

Experiment settings. We perform our evaluation under the same experiment settings: a personal computer with a 3.20 GHz i7-8700 CPU, 32 GB RAM, Python 3.8.2, Go version 1.14, and Windows 10.

Evaluation metrics. We evaluate SYZTRUST with the following three aspects. (1) We measure the branch coverage to evaluate the capability of exploring code, which is widely used in recent research [15], [32]. The branch calculation is introduced in Section 4.3. (2) To evaluate the capability of exploring the deep state, we measure the state coverage and the syscall sequence length of each fuzzer. Specifically, we consider the value combinations of state variables as a state and measure the number of different states. (3) We count the number of unique vulnerabilities. SYZTRUST relies on the built-in exception handling mechanism to detect abnormal behaviors of Trusted OSes. We explore the dedicated fault status registers [5] to identify the HardFault exceptions of concern. These exceptions indicate critical system errors and thus can be used as a crash signal [22]. For the crashes, we reproduce them and report their stack traces by CmBackTrace to track and locate error codes.
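The state-coverage metric above — treating each distinct combination of state-variable values as one state — can be sketched as follows. This is an illustrative sketch, not the SYZTRUST implementation; the function names and the example variable tuple are our own assumptions.

```python
import hashlib

def state_hash(var_values):
    # One combination of state-variable values -> a stable state identifier.
    blob = b"|".join(repr(v).encode() for v in var_values)
    return hashlib.sha1(blob).hexdigest()[:16]

def count_states(observations):
    # State coverage = number of distinct value combinations observed.
    return len({state_hash(vals) for vals in observations})

# Hypothetical snapshots of (algorithm, mode, handleState) after each syscall;
# only three distinct combinations appear, so state coverage is 3.
obs = [(0x10, 0, 0), (0x10, 0, 1), (0x10, 0, 1), (0x28, 1, 1)]
```

Hashing the value combination gives a compact key that the fuzzer can compare cheaply when deciding whether a test case triggered a new state.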
We filter stack traces into unique function call sequences to collect the explored unique bugs on target programs, which is widely used for deduplication in the CVE dataset [1] and debugging for vendors. Following best practices, we extract the top three function calls in the stack traces to de-duplicate bugs [36], [38]. We then analyze the root cause of the bugs manually.

State transition analysis. We further develop a script to automatically construct a state transition tree, which helps visually understand the practical meaning of the state variables. Specifically, we utilize the state hashes calculated based on state variables to present the states of Trusted OSes and take the syscall sequences as state transition labels.

TABLE 1: The unique vulnerabilities found by SYZKALLER, SYZTRUST BASIC, SYZTRUST STATE, and SYZTRUST FSTATE in 48 hours. The vulnerabilities whose IDs are marked with * have already received CVEs.
[Table 1 rows: Vul. ID 1*, 2*, 3, 4, 5*, 6 …]

6.2. Overhead Breakdown (RQ1)

We measure the execution time of sub-processes of SYZTRUST to assess its impact on the overall overhead. For each fuzzing round, SYZTRUST performs the following sub-processes: (1) Reset means the time spent on the MCU resetting. (2) Transfer means the time spent on the manager engine sending the test case. (3) Execution means the time spent on the execution engine executing the test case. Meanwhile, the debug probe records the raw instruction trace packets and state variable values and transfers them to the PC via RTT. (4) CovCalc means the time spent on the hardware-assisted controller decoding the collected raw trace packets and calculating the branch coverage. Meanwhile, the hardware-assisted controller calculates the state hashes based on the state variable values. We only evaluate the sub-processes that are related to interacting with the resource-limited MCU, which mainly determines the speed of SYZTRUST.
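The top-three-frame deduplication described above can be sketched as follows; a minimal sketch under our own naming, with hypothetical stack traces — not the CmBackTrace output format.

```python
def dedup_key(stack_trace, depth=3):
    # Bucket a crash by the top `depth` function calls of its stack trace.
    return tuple(stack_trace[:depth])

def unique_bugs(crash_traces):
    # Two crashes with the same top frames count as one unique bug.
    return {dedup_key(t) for t in crash_traces}

crashes = [
    ["TEE_MACCompareFinal", "memcmp", "HardFault_Handler", "main"],
    ["TEE_MACCompareFinal", "memcmp", "HardFault_Handler", "task_entry"],
    ["TEE_Malloc", "heap_alloc", "HardFault_Handler"],
]
```

In this example the first two traces differ only below the third frame, so they collapse into a single bug report, leaving two unique bugs.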
As introduced in Section 4.3, the subprocesses of Transfer, Execution, and CovCalc are designed to run in parallel.

The results are shown in Figure 6, from which we have the following conclusions. First, the overheads of Reset, Transfer, and CovCalc in SYZTRUST are relatively low. Thus, using Reset to mitigate the false positive coverage and vulnerabilities is acceptable. Second, Execution takes the most time. Test cases for IoT Trusted OSes include 14.4 syscalls on average and are complex. In addition, for Execution, since the Nuvoton M2351 board used in our manuscript has no Embedded Trace Buffer (ETB), SYZTRUST utilizes the debug probe to stream all the ETM trace data to the host PC in real time. We conducted a 48-hour fuzzing run and found the peak tracing speed is 168 KB/s, and 92% of the trace files for single syscalls are smaller than 2 KB. In conclusion, the ETM tracing takes less time than the syscall execution.

RQ1: It takes SYZTRUST 6,290 ms on average to complete a test case and to collect its feedback. The sub-process of executing a test case on the MCU takes the most time, while the orchestration and analysis take only roughly 1% of the overall time.

[Table 1 rows: Vul. ID 7*, 8*, 9*, 10, 11, 12*, 13*, 14*, 15*, 16–38; Total — Syzkaller-Baseline: 26, SyzTrust-Basic: 28, SyzTrust-State: 31, SyzTrust-FState: 35]

6.3. Effectiveness of SYZTRUST (RQ2)

them, SYZTRUST BASIC achieves the highest code coverage among the four fuzzers with 1,983 branches, which shows the effectiveness of our new scheduling tasks and the extended syscall templates. When state coverage feedback is integrated into SYZTRUST, the branch coverage explored

In this section, we evaluate SYZKALLER,
SYZTRUST BASIC, SYZTRUST STATE, and SYZTRUST FSTATE on mTower to explore the effectiveness of our designs, with each experiment running for 48 hours. To measure our new scheduling tasks and composite feedback mechanism, we construct two prototypes of SYZTRUST, one only with the new scheduling tasks and one only with the composite feedback mechanism, which are named SYZTRUST BASIC and SYZTRUST FSTATE, respectively. In addition, to measure the necessity of our state variable inference, we construct a prototype that considers the complete buffer values of two state handles as state variables, named SYZTRUST STATE. We evaluate the three prototypes of SYZTRUST against the state-of-the-art fuzzer SYZKALLER, which has found a large number of vulnerabilities on several kernels and is actively maintained. To reduce the randomness, we repeat all experiments ten times. The results are shown in Figure 7 and Table 1. Branch coverage is calculated in terms of the LCSAJ basic block number introduced in Section 4.3.

by SYZTRUST STATE and SYZTRUST FSTATE is lower than SYZTRUST BASIC. It is because their adopted composite feedback mechanisms drive them to preserve more seeds that trigger new states, which might have no contribution to the code coverage. Given the same evaluation time, a certain percentage of time is assigned to mutating and testing such seeds that trigger new states, and the code coverage will grow more slowly. Thus, we additionally perform a long-time fuzzing experiment, and the result shows that SYZTRUST FSTATE can achieve 1,984 branches in 74 hours. Among the four fuzzers, SYZTRUST FSTATE triggers the most states with 2,132 states on average, and SYZKALLER triggers the least states with 284 states on average, as shown in Figure 7b. SYZTRUST STATE has similar performance on the state growth with SYZTRUST BASIC. It is because SYZTRUST STATE spends lots of time exploring the test cases that trigger new values of non-state
variables since SYZTRUST STATE utilizes the two whole handles' values to present state. To further illustrate this deduction, we present Figure 7c, where we calculate the growth of the states based on the whole handle values. In such settings, SYZTRUST STATE triggers more states than SYZTRUST BASIC and SYZKALLER. However, these states are not expressive and effective for guiding the fuzzer to trigger more branch coverage and vulnerabilities.

SYZKALLER triggers 1,191 branches on average and is exceeded by all three versions of SYZTRUST shown in Figure 7. Among

(a) Branch coverage. (b) States (based on variables). (c) States (based on handles). (d) Unique vulnerabilities.
Figure 7: The branch coverage growth, state numbers growth, and unique vulnerability growth discovered by SYZKALLER, SYZTRUST BASIC, SYZTRUST STATE, and SYZTRUST FSTATE.

TABLE 2: The number of state variables inferred by SYZTRUST. False positive is denoted as FP.
Target | Handle | Number | FP | Precision
mTower | TEE_ObjectHandle | 11 | 1 | 87.5%
mTower | TEE_OperationHandle | 13 | 2 |
TinyTEE | TEE_ObjectHandle | 13 | 3 | 82.6%
TinyTEE | TEE_OperationHandle | 10 | 1 |
OP-TEE | TEE_ObjectHandle | 10 | 1 | 87.0%
OP-TEE | TEE_OperationHandle | 13 | 2 |
Link TEE Air | context (AES) | 6 | 2 | 71.4%
Link TEE Air | context (Hash) | 8 | 2 |

The exploration space of unique vulnerabilities triggered by SYZKALLER, with 16 vulnerabilities on average, is fully covered and exceeded by the three versions of SYZTRUST shown in Figure 7. Among them, SYZTRUST FSTATE detects 21 vulnerabilities on average, which achieves the best vulnerability-finding capability.
SYZTRUST STATE has similar performance on vulnerability detection with SYZTRUST BASIC, and they detect 20 vulnerabilities on average.

In addition, Table 1 shows the unique vulnerabilities detected by SYZKALLER and the three versions of SYZTRUST in ten trials during 48 hours. The vulnerability IDs in Table 1 are consistent with Table 6 in Appendix C. First, SYZTRUST finds all the unique vulnerabilities that SYZKALLER finds. Using the currently assigned CVEs as ground truth, SYZTRUST detected more CVEs than SYZKALLER. Second, SYZTRUST FSTATE finds the most vulnerabilities and finds eight vulnerabilities that SYZKALLER and SYZTRUST BASIC cannot find. The eight vulnerabilities are all triggered by syscall sequences whose lengths are more than 10, which indicates they are triggered

RQ2: The design of SYZTRUST is effective, as the three versions of SYZTRUST outperform SYZKALLER in terms of code and state coverage and number of detected vulnerabilities. New task scheduling with extended syscall templates significantly improves the fuzzer's code exploration, and the composite feedback mechanism helps trigger more states and detect more vulnerabilities.

6.4. State Variable Inference (RQ3)

As for the state variable inference evaluation, we first evaluate the precision of our state variable inference method. Second, we utilize the executed syscall sequences and their state hashes to construct a state transition tree and present an example to show the expressiveness of our state variables. For the precision evaluation, we manually analyze the
usage of the inferred state variables in the Trusted OSes. We check if a state variable is used in condition statements to control the execution context. As for mTower and OP-TEE, we obtain their source codes and manually read the state variable-related code. As for TinyTEE and Link TEE Air, we invite five experts with software reverse engineering experience to manually analyze their binary code.

in a deep state. For instance, the vulnerability of ID 7 occurs when the Trusted OS enters the key_set & initialized state after a MAC function is configured and the function TEE_MACUpdate is invoked with an excessive size value of "chunkSize". SYZTRUST FSTATE can detect more vulnerabilities, which benefits from our composite feedback mechanism. Specifically, the fuzzer preserves the seeds that trigger new states and then can detect more vulnerabilities by exploring these seeds. In summary, this evaluation reveals two observations. First, although coverage-based fuzzers achieve high coverage effectively, their vulnerability detection capability will be limited when testing stateful systems. Second, for stateful systems, understanding their internal states and utilizing the state feedback to guide fuzzing will be effective in finding more vulnerabilities.

TABLE 3: The number of unique vulnerabilities, branches, and states found by SYZTRUST in 90 hours.

Table 2 shows the results. For the Trusted OSes, including a proprietary one, our active state variable inference method is effective and achieves 83.3% precision on average. These validated state variables are expressive and meaningful, including algorithm, operationClass (description identifier of operation types, e.g., CIPHER, MAC), mode (description identifier of operation, e.g., ENCRYPT, SIGN), and handleState (describing the current state of the
operation, e.g., an operation key has been set). The false positive state variables are of two types. One is some variables that indicate the length of several specific buffers, e.g., digest length, and have specific values. Another type is several buffers that do not likely have changeable values. Both of them generate a few false positive new states and have little impact on the fuzzing procedure.

Target | Unique bugs | Branches | States
mTower | 38 | 2,105 | 3,994
TinyTEE | 13 | 1,072 | 2,908
Link TEE Air | 19 | 10,710 | 182,324

6.5. Real World Trusted OSes (RQ4)

We apply SYZTRUST on mTower from Samsung, TinyTEE from Tsinglink Cloud, and Link TEE Air for 90 hours. The results are shown in Table 3. The branch and state coverage explored by SYZTRUST on Link TEE Air is relatively low because Link TEE Air has a more complicated and larger code base. As for vulnerability detection, a total of 70 vulnerabilities are found by SYZTRUST, and 28 of them are confirmed, while vendors are still investigating the remaining bug reports. We have reported confirmed vulnerabilities to MITRE, and 10 of them have already been assigned CVEs. We categorize vulnerabilities into seven types following the Common Weakness Enumeration (CWE) List [24], and we have the following conclusions. First, the Trusted OSes suffer from frequent null/untrusted pointer dereference vul-

Figure 8: An example of a state transition path from the constructed state transition tree in mTower.

To evaluate the state variables' expressiveness, we construct a state transition tree to help visualize their practical meanings. As shown in Figure 8, we present an example state transition path from our constructed state transition tree.
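The tree construction can be sketched as follows; a minimal sketch of how such a script could derive labeled transition edges from executed syscall sequences and their state hashes. The function and data layout are our own illustrative assumptions, not the paper's script.

```python
from collections import defaultdict

def build_transition_tree(executions):
    """executions: list of (syscalls, state_hashes) pairs, where
    state_hashes[i] is the state observed after executing syscalls[i]."""
    edges = defaultdict(set)          # (src_state, dst_state) -> {syscall labels}
    for syscalls, hashes in executions:
        prev = "start"
        for call, state in zip(syscalls, hashes):
            if state != prev:         # this syscall triggered a state transition
                edges[(prev, state)].add(call)
            prev = state
    return dict(edges)

# Hypothetical run: each syscall moves mTower into a new state hash.
run = (["TEE_AllocateOperation", "TEE_SetOperationKey", "TEE_CipherInit"],
       ["s1", "s2", "s3"])
tree = build_transition_tree([run])
```

Merging many runs into one edge set yields a tree rooted at the initial state, with syscalls as transition labels — the structure visualized in Figure 8.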
We replace the state hashes with numerical values in the node labels to enhance readability. Below is the meaning of the example state transition tree. First, TEE_OperationHandle and TEE_ObjectHandle are allocated by TEE_AllocateOperation and TEE_AllocateTransientObject, respectively. Once a handle is allocated, the state transition is triggered. Second, TEE_ObjectHandle loads a dummy key by executing TEE_InitRefAttribute and TEE_PopulateTransientObject. The dummy key is then loaded into the TEE_OperationHandle by TEE_SetOperationKey. Then state node 4 is triggered, indicating mTower is in the key_set state, which is consistent with the GP TEE Internal Core API specification. Third, TEE_ObjectHandle is reset and TEE_OperationHandle loads an encryption key by executing the same syscalls of the second procedure with a valid key. Fourth, mTower turns to a new state after TEE_CipherInit, which is specified as the key_set &

nerabilities. They lack code for validating supplied input pointers. When a TA tries to read or write a malformed pointer by invoking Trusted OS syscalls, a crash will be triggered, resulting in a DoS attack. Even worse, carefully designed pointers provided by a TA to the syscall can compromise the integrity of the execution context and lead to arbitrary code execution, posing a significant risk. The problem is severe since Trusted OSes frequently rely on pointers to transfer complex data structures, which are used in cryptographic operations. Second, the Trusted OSes allocate resources without checking bounds. They allow a TA to achieve excessive memory allocation via a large len value. Since the IoT TEEs are resource-limited, these vulnerabilities may easily cause denial of service. Third, the Trusted OSes suffer from buffer overflow vulnerabilities. They missed checks for buffer accesses with incorrect length values and allowed a TA to trigger memory overwriting, DoS, and information disclosure. We discuss the mitigation
in Section 7 and list the vulnerabilities with their root cause analysis found on mTower and TinyTEE in Table 6 in Appendix C.

initialized state in the specification. This state means that mTower loads the key and the initialization vector for encryption. When mTower is in the key_set & initialized state, mTower processes the ciphering operation on provided buffers by TEE_CipherUpdate. Finally, mTower turns to a new state after it frees all the encryption configurations by TEE_FreeOperation. As a result, our state presentation mechanism truthfully reflects the workflow of symmetric encryption.

RQ3: On average, our active state variable inference method is effective and achieves 83.3% precision. In addition, from the state transition tree, the inferred state variables are meaningful.

Case study 1: buffer overflow (CVE-2022-35858). SYZTRUST identifies a stack-based buffer overflow vulnerability in the TEE_PopulateTransientObject syscall in mTower, which has a 7.8 CVSS score according to CVE Details [3]. Specifically, the TEE_PopulateTransientObject syscall creates a local array directly using the parameter attrCount without checking its size. Once TEE_PopulateTransientObject is invoked with a large number in attrCount, a memory overwrite will be triggered, resulting in Denial-of-Service (DoS), information leakage, and arbitrary code execution.

Case study 2: null pointer dereference (CVE-2022-40759). SYZTRUST uncovers a null pointer dereference in TEE_MACCompareFinal in mTower. A DoS attack can be triggered by invoking TEE_MACCompareFinal with a null pointer for the parameter operation. This bug is hard

lightweight Trusted OS implementations should be designed to minimize the overhead brought by security mechanisms.

Limitations and future work. While SYZTRUST provides an effective way to fuzz IoT Trusted OSes, it also exposes some opportunities for future research.
First, our current prototype of SYZTRUST primarily targets Trusted OSes following the GP TEE Internal Core API specification. It has built-in support for alternative Trusted OSes, including proprietary ones, requiring certain modifications and configurations. We demonstrated this flexibility by extending to a proprietary Trusted OS and plan to extend SYZTRUST for broader applicability. Second, SYZTRUST assumes that a TA can be installed in the Trusted OS for assisting fuzzing and that ARM ETM is enabled for collecting the traces. In the case of certain Trusted OSes, such as ISEE-M, which are developed and used within a relatively tight supply chain, we will need to engage with the providers of these Trusted OSes to help assess the security of their respective Trusted OSes. Finally, SYZTRUST targets the Trusted OS of the TEE, leaving several security aspects of TEEs to be studied, e.g., TAs and the interaction mechanism between peripherals and the Trusted OSes.

to trigger by traditional fuzzing, as it requires testing on a specific syscall sequence to enter a specific state. SYZTRUST can effectively identify such syscall sequences that trigger new states and prioritize testing them. In this way, this syscall sequence can be fuzzed to cause a null pointer dereference vulnerability; otherwise, it will be discarded.

RQ4: mTower, TinyTEE and Link TEE Air are all vulnerable. SYZTRUST identifies 38 vulnerabilities on mTower, 13 vulnerabilities on TinyTEE, and 19 vulnerabilities on Link TEE Air, resulting in 10 CVEs.

7. Discussion

Ethics. We pay special attention to the potential ethical issues in this work. First, we obtain all the tested Trusted OSes from legitimate sources. Second, we have made responsible disclosure to report all vulnerabilities to IoT vendors.

Lessons. Based on our evaluations, we provide several security insights about the existing popular IoT Trusted OSes.

8. Related Work

TEE vulnerability detection. Several researchers have
studied and exploited vulnerabilities in TrustZone-based TEEs. Marcel et al. reverse-engineer HUAWEI's TEE components and present a critical security review [16]. Some researchers studied the design vulnerabilities of TEE components [56], [67], such as Samsung's TrustZone keymaster [58] and the interaction between the secure world and the normal world [43]. Recently, Cerdeira et al. analyzed more than 200 bugs in TrustZone-assisted TEEs and presented a systematic view [17]. Since those works require manual efforts for vulnerability detection or only provide an automatic tool to target a specific vulnerability, some literature works on automatically testing the TEEs. Some works try to apply other analysis tools, e.g., utilizing concolic execution [14] and fuzzing. A number of TA fuzzing tools are developed, such as TEEzz [15], PartEmu [32], Andrey's work [4] and Slava's work [44]. On the contrary, Trusted OS fuzzing receives little attention. To the best of our knowledge, the

First, mTower and TinyTEE are similar to OP-TEE, an open-source TEE implementation designed for Cortex-A series MCUs, and they have several similar vulnerabilities. For instance, they both implement vulnerable syscalls TEE_Malloc and TEE_Realloc, which allow an excessive memory allocation via a large value. In a resource-constrained MCU, such implementations can cause a crash and result in a DoS attack. Thus, the security principles should be rethought to meet the requirements of the IoT scenario. As a mitigation, we suggest adding checks when allocating memory; this suggestion is adopted by Samsung and TsingLink Cloud. Second, for the null/untrusted pointer dereference and buffer overflow vulnerabilities, the input pointers and critical parameters should be especially carefully checked. For the untrusted pointer dereferences on handlers, in addition to checking if the pointer address is valid, we suggest adding a handler list to mark if they are
allocated or released. Third, we found mTower and TinyTEE implement several critical syscalls provided by the Trusted OS in non-privileged code, which exposes a number of null/untrusted pointer dereference vulnerabilities. Thus, a TA can easily reach these syscalls and damage the Trusted OSes. Even worse, mTower and TinyTEE do not have memory protection mechanisms, such as ASLR (Address Space Layout Randomization). Trusted OSes and TAs are all loaded into the same fixed address in the virtual address space. The above two problems make the exploitation of Trusted OSes easier. However, implementing privileged code and memory protection mechanisms requires additional overhead, which may be unacceptable for IoT devices. We suggest that the downstream TA developers should be aware of it and carefully design their TAs to mitigate this security risk, or

only tool, OP-TEE Fuzzer [55], is for open-source TEEs, which is not applicable to closed-source IoT TEEs.

ETM-based fuzzing. Firmware analysis primarily adopts two approaches: on-device testing [40], [41] and rehosting [19], [51]. For on-device testing, a few ETM-assisted analysis methods have been proposed for ARM platforms [48], [50]. For instance, Ninjia [49] utilizes ETM to analyze malware transparently. HApper [65] and NScope [69] utilize ETM to unpack Android applications and analyze the Android native code. Recently, two studies have integrated ETM features into fuzzing projects. One is AFL++ CoreSight mode [46], which targets the applications running on ARM Cortex-A platforms. Another is µAFL [37], which fuzzes Linux peripheral drivers. However, these works have different purposes. AFL++ CoreSight mode and µAFL focus on the application, whereas SYZTRUST focuses on Trusted OSes for IoT devices and incurs many challenges due to the constrained resources and inability to instrument.

References

[1] "CVE details," https://www.cvedetails.com.
Moreover, SYZTRUST proposes a novel fuzzing framework and the state-aware feature to effectively test the stateful Trusted OSes. Consequently, SYZTRUST is a novel hardware-assisted fuzzing approach proposed in this paper.

State-aware fuzzing. Recently, state-aware fuzzing has emerged and gained the attention of the research community. To understand the internal states of target systems, existing studies utilize the response code of protocol servers [28], [52] or apply model learning [23], [63] to identify the server's states. StateInspector [45] utilizes explicit protocol packet sequences and run-time memory to infer the server's state machine. However, they are not applicable to software and OSes since software and OSes do not have such response codes or packet sequences. IJON [7] proposes an annotation mechanism that allows the user to infer the states during the fuzzing procedure manually. After that,

[2] "J-Link RTT - real time transfer," https://www.segger.com/products/debug-probes/j-link/technology/about-real-time-transfer.
[3] "NVD, CVE: Common vulnerabilities and exposures," https://cve.mitre.org/.
[4] A. Andrey, "Launching feedback-driven fuzzing on trustzone tee," https://zeronights.ru/wp-content/themes/zeronights-2019/public/materials/5 ZN2019 andrej akimovLaunching feedbackdriven fuzzing on TrustZone TEE.pdf.
[5] ARM, "Arm Cortex-M23 Devices Generic User Guide r1p0 - Fault Handling," https://developer.arm.com/documentation/dui1095/a/The-Cortex-M23-Processor/Fault-handling.
[6] ——, "TF-M platforms," https://tf-m-user-guide.trustedfirmware.org/platform/index.html.
[7] C. Aschermann, S. Schumilo, A. Abbasi, and T. Holz, "Ijon: Exploring deep state spaces via fuzzing," in Proceedings of IEEE Symposium on Security and Privacy (S&P), 2020.
[8] N. Asokan, T. Nyman, N. Rattanavipanon, A.-R. Sadeghi, and G.
Tsudik, "ASSURED: Architecture for secure software update of realistic embedded devices," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 37, no. 11, pp. 2290–2300, 2018.
[9] J. Ba, M. Böhme, Z. Mirzamomen, and A. Roychoudhury, "Stateful greybox fuzzing," in Proceedings of USENIX Security Symposium (SEC), 2022.
[10] Beanpod, "ISEE trusted execution environment platform (Secure OS)," https://www.beanpodtech.com/en/products/.
[11] G. Beniamini, "Extracting Qualcomm's keymaster keys - breaking android full disk encryption," https://bits-please.blogspot.com/2016/06/extracting-qualcomms-keymaster-keys.html.
[12] ——, "Hijacking the linux kernel from QSEE," https://bits-please.blogspot.com/2016/05/war-of-worlds-hijacking-linux-kernel.html.
[13] ——, "TrustZone kernel privilege escalation," http://bits-please.blog

9. Conclusion

We present SYZTRUST, the first automated and practical fuzzing system to fuzz Trusted OSes for IoT devices. We evaluate the effectiveness of SYZTRUST on the Nuvoton
M2351 board with a Cortex-M23, and the results show SYZTRUST outperforms the baseline fuzzer SYZKALLER with 66% higher code coverage, 651% higher state coverage, and 31% improved vulnerability-finding capability. Furthermore, we apply SYZTRUST to evaluate real-world IoT Trusted OSes from three leading IoT vendors and detect 70 previously unknown vulnerabilities with security impacts. In addition, we present our understanding of Trusted OS vulnerabilities and discuss the limitations and future work. We believe SYZTRUST provides developers with a powerful tool to thwart TEE-related vulnerabilities within modern IoT devices and completes the current TEE fuzzing scope.

spot.com/2016/06/trustzone-kernel-privilege-escalation.html.
[14] M. Busch and K. Dirsch, "Finding 1-day vulnerabilities in trusted applications using selective symbolic execution," in Proceedings of the 27th Annual Network and Distributed System Security Symposium (NDSS), San Diego, CA, USA, 2020.
[15] M. Busch, A. Machiry, C. Spensky, G. Vigna, C. Kruegel, and M. Payer, "TEEzz: Fuzzing Trusted Applications on COTS Android devices," in Proceedings of IEEE Symposium on Security and Privacy (S&P), 2023.
[16] M. Busch, J. Westphal, and T. Mueller, "Unearthing the Trusted Core: A critical review on Huawei's trusted execution environment," in Proceedings of 14th USENIX Workshop on Offensive Technologies (WOOT 20), 2020.
[17] D. Cerdeira, N. Santos, P. Fonseca, and S. Pinto, "SoK: Understanding the prevailing security vulnerabilities in trustzone-assisted tee systems," in Proceedings of IEEE Symposium on Security and Privacy (S&P), 2020.
[18] Y. Chen, D. Mu, J. Xu, Z. Sun, W. Shen, X. Xing, L. Lu, and B. Mao, "Ptrix: Efficient hardware-assisted fuzzing for cots binary," in Proceedings of the 2019 ACM Asia Conference on Computer and Communications Security, 2019, pp. 633–645.

Acknowledgments

We sincerely appreciate our shepherd and all the anonymous reviewers for their insightful comments. This work was partly supported by NSFC under No. U1936215, the
StateKeyLaboratoryofComputerArchitecture(ICT,CAS) [19] A. A. Clements, E. Gustafson, T. Scharnowski, P. Grosen, D. Fritz, under Grant No. CARCHA202001, the Fundamental Re- C. Kruegel, G. Vigna, S. Bagchi, and M. Payer, “{HALucinator}: Firmware re-hosting through abstraction layer emulation,” in 29th search Funds for the Central Universities (Zhejiang Univer- USENIXSecuritySymposium(USENIXSecurity20),2020,pp.1201– sity NGICS Platform), SNSF PCEGP2 186974, ERC StG 1218. 850868, Google Scholar Award, Meta Faculty Award, and [20] A.Cloud,“AboutAlibabaCloudLinkTEE,” https://iot.aliyun.com China Scholarship Council. /products/tee. 14[21] P. Cohen, “Winbond partners with Nuvoton, Qinglianyun to release [41] P. Liu, S. Ji, X. Zhang, Q. Dai, K. Lu, L. Fu, W. Chen, P. Cheng, fully integrated reference design for OTA firmware updating),” W. Wang, and R. Beyah, “IFIZZ: Deep-state and efficient fault- https://embeddedcomputing.com/technology/iot/winbond-partners-w scenario generation to test IoT firmware,” in 2021 36th IEEE/ACM ith-nuvoton-qinglianyun-to-release-fully-integrated-reference-desig InternationalConferenceonAutomatedSoftwareEngineering(ASE). n-for-ota-firmware-updating. IEEE,2021,pp.805–816. [22] A.Community,“Cortex-M23andCortex-M33-securityfoundation [42] L.Luo,Y.Zhang,C.Zou,X.Shao,Z.Ling,andX.Fu,“Onruntime for billions of devices,” https://community.arm.com/arm-communi softwaresecurityoftrustzone-mbasediotdevices,”inProceedingsof ty-blogs/b/architectures-and-processors-blog/posts/cortex-m23-and-c GLOBECOM2020-2020IEEEGlobalCommunicationsConference, ortex-m33---security-foundation-for-billions-of-devices. 2020. [23] P.M.Comparetti,G.Wondracek,C.Kruegel,andE.Kirda,“Prospex: [43] A. Machiry, E. Gustafson, C. Spensky, C. Salls, N. Stephens, Protocolspecificationextraction,”inProceedingsofIEEESymposium R. Wang, A. Bianchi, Y. R. Choe, C. Kruegel, and G. Vigna, onSecurityandPrivacy(S&P),2009. 
“BOOMERANG: Exploiting the semantic gap in trusted execution [24] CWE,“CWElistversion4.9,” https://cwe.mitre.org/data/index.html. environments.” in Proceedings of Network and Distributed System SecuritySymposium(NDSS),2017. [25] A.Developer,“ARMcoresightarchitecturespecificationversion3.0,” https://developer.arm.com/documentation/ihi0029/e. [44] S. Makkaveev, “The road to Qualcomm TrustZone apps fuzzing,” https://research.checkpoint.com/2019/the-road-to-qualcomm-trustzo [26] A. Fioraldi, D. C. D’Elia, and D. Balzarotti, “The use of likely in- ne-apps-fuzzing. variantsasfeedbackforfuzzers,”inProceedingsofUSENIXSecurity Symposium(SEC),2021. [45] C. McMahon Stone, S. L. Thomas, M. Vanhoef, J. Henderson, N. Bailluet, and T. Chothia, “The closer you look, the more you [27] A. Fitzek, F. Achleitner, J. Winter, and D. Hein, “The ANDIX learn: A grey-box approach to protocol state machine learning,” in research OS—ARM TrustZone meets industrial control systems se- Proceedingsofthe2022ACMSIGSACConferenceonComputerand
curity,” in Proceedings of IEEE 13th International Conference on CommunicationsSecurity,2022,pp.2265–2278. IndustrialInformatics(INDIN),2015. [46] A. Moroo and S. Yuichi, “ARMored core- [28] H. Gascon, C. Wressnegger, F. Yamaguchi, D. Arp, and K. Rieck, sight: Towards efficient binary-only fuzzing,” “Pulsar:Statefulblack-boxfuzzingofproprietarynetworkprotocols,” https://ricercasecurity.blogspot.com/2021/11/armored-coresight- inInternationalConferenceonSecurityandPrivacyinCommunica- towards-efficient.html,November2021. tionSystems,2015. [47] R. Natella, “Stateafl: Greybox fuzzing for stateful network servers,” [29] GlobalPlatform, “GlobalPlatform TEE spec adoption to reach 10 EmpiricalSoftwareEngineering,vol.27,no.7,pp.1–31,2022. billion,” https://globalplatform.org/latest-news/globalplatform-tee-s pec-adoption-to-reach-10-billion. [48] Z. Ning, C. Wang, Y. Chen, F. Zhang, and J. Cao, “Revisiting arm debuggingfeatures:Nailgunanditsdefense,”IEEETransactionson [30] ——,“TEEInternalCoreAPISpecificationv1.3.1,” https://globalpl DependableandSecureComputing,2021. atform.org/specs-library/tee-internal-core-api-specification. [31] Google, “syzkaller - kernel fuzzer,” https://github.com/google/syzk [49] Z. Ning and F. Zhang, “Ninja: Towards transparent tracing and aller. debuggingonARM,”inProceedingsofUSENIXSecuritySymposium (SEC),2017. [32] L. Harrison, H. Vijayakumar, R. Padhye, K. Sen, and M. Grace, “PARTEMU: Enabling dynamic analysis of real-world TrustZone [50] ——, “Understanding the security of arm debugging features,” in softwareusingemulation,”inProceedingsofUSENIXSecuritySym- Proceedings of IEEE Symposium on Security and Privacy (S&P), posium(SEC),2020. 2019. [33] Z. Hua, J. Gu, Y. Xia, H. Chen, B. Zang, and H. Guan, “vTZ: [51] H. Peng and M. Payer, “{USBFuzz}: A framework for fuzzing Virtualizing ARM TrustZone,” in Proceedings of USENIX Security {USB}driversbydeviceemulation,”in29thUSENIXSecuritySym- Symposium(SEC),2017. posium(USENIXSecurity20),2020,pp.2559–2575. 
[34] T.L.Kernel,“Thekerneladdresssanitizer(KASAN),” https://www. [52] V.-T.Pham,M.Bo¨hme,andA.Roychoudhury,“AFLNet:agreybox kernel.org/doc/html/v5.0/dev-tools/kasan.html. fuzzer for network protocols,” in 2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST). [35] K. Kim, T. Kim, E. Warraich, B. Lee, K. R. Butler, A. Bianchi, IEEE,2020,pp.460–465. andD.J.Tian,“FUZZUSB:HybridstatefulfuzzingofUSBgadget stacks,”inProceedingsofIEEESymposiumonSecurityandPrivacy [53] S.Pinto,T.Gomes,J.Pereira,J.Cabral,andA.Tavares,“IIoTEED: (S&P),2022. Anenhanced,trustedexecutionenvironmentforindustrialIoTedge devices,”IEEEInternetComputing,vol.21,no.1,pp.40–47,2017. [36] G. Klees, A. Ruef, B. Cooper, S. Wei, and M. Hicks, “Evaluating fuzz testing,” in Proceedings of the 2018 ACM SIGSAC conference [54] S.PintoandN.Santos,“Demystifyingarmtrustzone:Acomprehen- oncomputerandcommunicationssecurity,2018,pp.2123–2138. sive survey,” ACM computing surveys (CSUR), vol. 51, no. 6, pp. [37] W. Li, J. Shi, F. Li, J. Lin, W. Wang, and L. Guan, “µAFL: Non- 1–36,2019. intrusive feedback-driven fuzzing for microcontroller firmware,” in [55] Riscure,“OP-TEEfuzzer,” https://github.com/Riscure/optee fuzzer. 2022 IEEE/ACM 44th International Conference on Software Engi- neering(ICSE),2022. [56] K. Ryan, “Hardware-backed heist: Extracting ECDSA keys from qualcomm’s trustzone,” in Proceedings of the ACM SIGSAC Con- [38] Y.Li,S.Ji,Y.Chen,S.Liang,W.-H.Lee,Y.Chen,C.Lyu,C.Wu, ferenceonComputerandCommunicationsSecurity,2019. R. Beyah, P. Cheng et al., “UNIFUZZ: A holistic and pragmatic metrics-drivenplatformforevaluatingfuzzers.”inUSENIXSecurity [57] Samsung,“mTower,” https://github.com/Samsung/mTower. Symposium,2021,pp.2777–2794. [58] A.Shakevsky,E.Ronen,andA.Wool,“Trustdiesindarkness:Shed- [39] Linaro, “Open portable trusted execution environment,” https://ww dinglightonsamsung’sTrustZonekeymasterdesign,”inProceedings w.op-tee.org. ofUSENIXSecuritySymposium(SEC),2022. 
[40] P.Liu,S.Ji,L.Fu,K.Lu,X.Zhang,J.Qin,W.Wang,andW.Chen, [59] E. Styger, “Finding memory bugs with Google address sanitizer “Howiotre-usingthreatensyoursensitivedata:Exploringtheuser- (ASAN) on microcontrollers,” https://mcuoneclipse.com/2021/05/ data disposal in used IoT devices,” in 2023 IEEE Symposium on 31/finding-memory-bugs-with-google-address-sanitizer-asan-on-mic SecurityandPrivacy(SP). IEEE,2023,pp.3365–3381. rocontrollers. 15[60] Trustonic, “Mircochip first to use Turstonic revolutionary Kinibi-M platformformicrocontrollers,” https://www.sourcesecurity.com/new  s/mircochip-turstonic-kinibi-m-microcontrollers-co-1530084457-g
a-co-1530085342-ga-npr.1530086842.html.  [61] TsinglinkCloud,“QinglianCloud’sTinyTEE,” https://www.qinglian  yun.com/Front/Secure/safe. [62] H. Wang, X. Xie, Y. Li, C. Wen, Y. Li, Y. Liu, S. Qin, H. Chen,  and Y. Sui, “Typestate-guided fuzzer for discovering use-after-free vulnerabilities,” in Proceedings of IEEE/ACM 42nd International  @ @ @ @ @ @ @ @ @ @ @ @ @ ConferenceonSoftwareEngineering(ICSE),2020. /HQJWKLQWHUYDO [63] Q.Wang,S.Ji,Y.Tian,X.Zhang,B.Zhao,Y.Kan,Z.Lin,C.Lin, S.Deng,A.X.Liuetal.,“MPInspector:Asystematicandautomatic approachforevaluatingthesecurityofIoTmessagingprotocols,”in 30thUSENIXSecuritySymposium(USENIXSecurity21),2021,pp. 4205–4222. [64] Wikipedia, “Trusted execution environment,” https://en.wikipedia.o rg/wiki/Trusted execution environment. [65] L. Xue, H. Zhou, X. Luo, Y. Zhou, Y. Shi, G. Gu, F. Zhang, and M. H. Au, “Happer: Unpacking android apps via a hardware- assisted approach,” in Proceedings of IEEE Symposium on Security andPrivacy(S&P),2021. [66] D.F.YatesandN.Malevris,“TheeffortrequiredbyLCSAJtesting: anassessmentviaanewpathgenerationstrategy,”SoftwareQuality Journal,vol.4,no.3,pp.227–242,1995. [67] B. Zhao, S. Ji, X. Zhang, Y. Tian, Q. Wang, Y. Pu, C. Lyv, and R. Beyah, “UVSCAN: Detecting third-party component usage vi- olations in IoT firmware,” in 32nd USENIX Security Symposium (USENIXSecurity23),2023. [68] B. Zhao, Z. Li, S. Qin, Z. Ma, M. Yuan, W. Zhu, Z. Tian, and C. Zhang, “StateFuzz: System call-based state-aware linux driver fuzzing,” in Proceedings of USENIX Security Symposium (SEC), 2022. [69] H.Zhou,S.Wu,X.Luo,T.Wang,Y.Zhou,C.Zhang,andH.Cai, “NCScope: hardware-assisted analyzer for native code in android apps,”inProceedingsofthe31stACMSIGSOFTInternationalSym- posiumonSoftwareTestingandAnalysis,2022,pp.629–641. [70] T. Zhu, “CmBacktrace: ARM Cortex-M series MCU error tracking library,” https://github.com/armink/CmBacktrace. RLWD5 7UXVWHG26 /LQX[NHUQHO .90 Figure 9: The seed length ratio. Appendix A. 
The Motivation of the New Scheduling

To justify our motivation for the new scheduling task design for fuzzing IoT Trusted OS implementations, we compare the seed lengths observed when testing Trusted OSes, the Linux kernel, and KVM. In the evaluation, we utilize SYZKALLER to test mTower, the Linux kernel (git checkout 356d82172), and KVM v5.19 for 24 hours; the results are shown in Figure 9. As shown in Figure 9, the average seed length when fuzzing Trusted OSes is 27.8, while the others are less than 7. In the triage scheduling task, SYZKALLER removes syscalls one by one from a syscall sequence. Then SYZKALLER tests the modified syscall sequences to obtain the smallest syscall sequence that maintains the same code coverage. For syscall sequences whose length exceeds 30, the triage scheduling tasks are likely to spend a great deal of time removing syscalls and testing the modified syscall sequences. In addition, we counted the number of minimized syscall sequences when fuzzing mTower. In a 48-hour fuzzing run, only 56 syscall sequences were minimized, and among them, 18 syscall sequences had only a single syscall removed during triaging. Thus, SYZTRUST does not have to perform triaging tasks, since most test cases will not be minimized.

Appendix B.
Scope and Scalability of SYZTRUST

We first provide an overview of the major Trusted OSes from leading IoT vendors and use objective data to validate our assumptions. Then, we justify how to extend SYZTRUST to Cortex-A TEE OSes.

TABLE 4: An overview of the major Trusted OS implementations provided by leading IoT vendors.

Vendor | Trusted OS | Standards | Supports installing TA | Some of supported devices
Samsung | mTower | GP Standards | | NuMaker-PFM-M2351
Alibaba | Link TEE Air | Proprietary | | NuMaker-PFM-M2351
TsingLink Cloud | TinyTEE | GP Standards | | NuMaker-PFM-M2351 / LPC55S69 / STM32L562
Beanpod | ISEE-M | GP Standards | | LPC55S series / GD32W515 / STM32L5 series
Trustonic | Kinibi-M | PSA Certified APIs | | MicroChip SAML11
ARM | TF-M | PSA Certified APIs | | NuMaker-PFM-M2351, STM32L5, ...
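The triage minimization that Appendix A argues SYZTRUST can skip works, in essence, by dropping one syscall at a time and re-executing the shortened sequence. The sketch below is a simplified model of this syzkaller-style loop, not the tool's actual code; `coverage_of` is an illustrative stand-in for running a candidate sequence on the target and collecting its coverage.

```python
def minimize(seq, coverage_of):
    """Greedily drop syscalls while code coverage is preserved.

    Returns the minimized sequence and the number of target executions
    spent, which is the triage cost Appendix A measures."""
    base = coverage_of(tuple(seq))
    i = 0
    executions = 0
    while i < len(seq):
        candidate = seq[:i] + seq[i + 1:]
        executions += 1
        if coverage_of(tuple(candidate)) == base:
            seq = candidate   # syscall was redundant: keep the shorter sequence
        else:
            i += 1            # syscall contributes coverage: keep it
    return seq, executions
```

Each loop iteration costs one more execution on the device, so a length-30 seed needs at least 30 device runs per triage pass; with average Trusted OS seed lengths near 27.8 and almost no syscalls actually removable, this is why skipping triage pays off.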
As shown in Table 4, there are six major Trusted OSes for IoT devices, which are widely adopted by the major IoT MCUs [6], [10], [20], [21], [57], [60], [61]. Three of them follow the GP standards, and they all allow TA installation.

TABLE 5: ETM feature on IoT devices.

Manufacturer | Device | Privilege Secure Debug (including ETM) | Debug Authentication Management
Nuvoton | NuMaker-PFM-M2351 | Enabled by default | ICP programming tool
NXP Semiconductors | LPC55S69 | Enabled by default | Debug credential certificate
STMicroelectronics | STM32L562 | Enabled by default | STM32CubeProgrammer
GigaDevice | GD32W515 | Enabled by default | Efuse
MicroChip | SAML11 | Enabled by default | External debugger

In addition, as shown in Table 5, though the devices provide multiple debug authentication mechanisms to disable privilege secure debug (debugging in secure privileged modes), they enable privilege secure debug by default, thereby enabling the ETM feature to collect execution traces. Thus, SYZTRUST can be directly deployed on half of the major Trusted OSes. As for the other Trusted OSes, SYZTRUST can be applied with the modifications introduced in Section 5.

Though our paper focuses on Cortex-M Trusted OSes for IoT devices, SYZTRUST can be applied to Cortex-A Trusted OSes that allow installing a TA and have their ETM feature enabled. For instance, OP-TEE from Linaro, Link TEE Pro from Ali Cloud, and iTrustee from Huawei meet these assumptions. However, for Cortex-A Trusted OSes that are developed and employed in a relatively private supply chain, such as QSEE from Qualcomm, we can collaborate with those mobile vendors and help them vet the security of their Trusted OSes.

Appendix C.
Vulnerabilities Found by SYZTRUST

We present all the vulnerabilities found by SYZTRUST on mTower, TinyTEE, and Link TEE Air with their root causes in Table 6. We update the vulnerability disclosure progress at https://github.com/SyzTrust.

Appendix D.
Meta-Review

D.1. Summary

This paper presents a fuzzing framework for TEEs on IoT devices. It leverages hardware-assisted features such as Arm ETM to collect traces. It uses state and code coverage as composite feedback to guide the fuzzer to effectively explore more states.

D.2. Scientific Contributions

• Creates a New Tool to Enable Future Science
• Identifies an Impactful Vulnerability
• Provides a Valuable Step Forward in an Established Field

D.3. Reasons for Acceptance

1) Creates a New Tool to Enable Future Science. This paper presents a new fuzzing tool that targets IoT Trusted OSes.
2) Identifies an Impactful Vulnerability. Several vulnerabilities were identified, and CVEs are disclosed.
3) Provides a Valuable Step Forward in an Established Field. This paper leverages hardware-assisted features such as Arm ETM to further improve the effectiveness of TEE OS fuzzing on IoT devices.

D.4. Noteworthy Concerns

The proposed fuzzing framework targets Trusted OSes following the GP TEE Internal API specification. It is assumed that a TA can be installed in the Trusted OS to assist fuzzing. Arm ETM needs to be enabled for collecting the traces.

Appendix E.
Response to the Meta-Review

The meta-review notes that our proposed fuzzing framework, SyzTrust, targets Trusted OSes following the GP TEE Internal API specification. To clarify, SyzTrust also has built-in support for testing alternative Trusted OSes, including proprietary ones, as demonstrated by our extension of SyzTrust to the proprietary "Link TEE Air". We are also extending our SyzTrust prototype to other OSes.

The meta-review notes that SyzTrust assumed that a TA can be installed in the Trusted OS to assist fuzzing and that Arm ETM needs to be enabled for collecting the traces. We agree and note that these assumptions align with typical IoT Trusted OS scenarios. SyzTrust focuses on the Trusted OS binaries provided by IoT vendors, delivering security insights for device manufacturers and end users. First, given that IoT device manufacturers often need to implement device-specific TAs, Trusted OS binaries supplied by IoT vendors generally allow TA installation (similar to smartphones, where manufacturers can similarly install TAs in the respective TEEs). Second, SyzTrust tests IoT Trusted OSes by deploying them on development boards where ETM is enabled by default. For certain Trusted OSes that are developed and used within a relatively private supply chain, we will need to engage with the providers of these Trusted OSes to help assess the security of their respective Trusted OSes. Appendix B provides a detailed discussion along with supporting data.

TABLE 6: Vulnerabilities detected by SYZTRUST.
Vul. ID | Target | Root Cause | Status | Description & Impact
1 | mTower | Allocation of resources without limits or throttling | CVE-2022-38155 (7.5 HIGH) | TEE_Malloc allows a TA to achieve excessive memory allocation via a large len value
2 | mTower | Allocation of resources without limits or throttling | CVE-2022-40762 (7.5 HIGH) | TEE_Realloc allows a TA to achieve excessive memory allocation via a large len value
3 | mTower | Allocation of resources without limits or throttling | Confirmed | TEE_AllocateOperation allows a TA to achieve excessive memory allocation via a large len value
4 | mTower | Allocation of resources without limits or throttling | Confirmed | TEE_AllocateTransientObject allows a TA to achieve excessive memory allocation via a large len value
5 | mTower | Improper input validation | CVE-2022-40761 (7.5 HIGH) | The function tee_obj_free allows a TA to trigger a denial of service by invoking the function TEE_AllocateOperation with a disturbed heap layout, related to utee_cryp_obj_alloc
6 | mTower | Buffer overflow | Reported | A Buffer Access with Incorrect Length Value vulnerability in the TEE_MemMove function allows a TA to trigger a denial of service by invoking the function TEE_MemMove with a "size" parameter that exceeds the size of "dest"
7 | mTower | Buffer overflow | CVE-2022-40760 (7.5 HIGH) | A Buffer Access with Incorrect Length Value vulnerability in the TEE_MACUpdate function allows a TA to trigger a denial of service by invoking the function TEE_MACUpdate with an excessive size value of chunkSize
8 | mTower | Buffer overflow | CVE-2022-40757 (7.5 HIGH) | A Buffer Access with Incorrect Length Value vulnerability in the TEE_MACComputeFinal function allows a TA to trigger a denial of service by invoking the function TEE_MACComputeFinal with an excessive size value of messageLen
9 | mTower | Buffer overflow | CVE-2022-40758 (7.5 HIGH) | A Buffer Access with Incorrect Length Value vulnerability in the TEE_CipherUpdate function allows a TA to trigger a denial of service by invoking the function TEE_CipherUpdate with an excessive size value of srcLen
10 | mTower | Buffer overflow | Reported | A Buffer Access with Incorrect Length Value vulnerability in the TEE_DigestDoFinal function allows a TA to trigger a denial of service by invoking the function TEE_DigestDoFinal with an excessive size value of chunkLen
11 | mTower | Buffer overflow | Reported | A Buffer Access with Incorrect Length Value vulnerability in the TEE_DigestUpdate function allows a TA to trigger a denial of service by invoking the function TEE_DigestUpdate with an excessive size value of chunkLen
12 | mTower | Missing release of memory after effective lifetime | CVE-2022-35858 (7.8 HIGH) | The TEE_PopulateTransientObject and utee_from_attr functions allow a TA to trigger a memory overwrite, denial of service, and information disclosure by invoking the function TEE_PopulateTransientObject with a large number in the parameter attrCount
13 | mTower | NULL pointer dereference | CVE-2022-40759 (7.5 HIGH) | TEE_MACCompareFinal contains a NULL pointer dereference on the parameter operation
14 | mTower | NULL pointer dereference | CVE-2022-36621 (7.5 HIGH) | TEE_AllocateTransientObject contains a NULL pointer dereference on the parameter object
15 | mTower | NULL pointer dereference | CVE-2022-36622 (7.5 HIGH) | TEE_GetObjectInfo1 contains a NULL pointer dereference on the parameter objectInfo
16 | mTower | NULL pointer dereference | Confirmed | TEE_GetObjectInfo contains a NULL pointer dereference on the parameter objectInfo
17 | mTower | Untrusted pointer dereference | Reported | Uncertain (provided the PoC to the vendor)
18 | mTower | Untrusted pointer dereference | Reported | Uncertain (provided the PoC to the vendor)
19 | mTower | Untrusted pointer dereference | Reported | The TEE_GetObjectInfo and utee_cryp_obj_get_info functions allow a corruption on the link field of an object handle, after which a Denial of Service (DoS) will be triggered by invoking the function tee_obj_get
20 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_MACComputeFinal function
21 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_SetOperationKey2 function
22 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_MACUpdate function
23 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_GetOperationInfoMultiple function
24 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_AEEncryptFinal function
25 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_MACInit function
26 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_SetOperationKey function
27 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_ResetOperation function
28 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_DigestUpdate function
29 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_AEDecryptFinal function
30 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_CipherInit function
31 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_FreeOperation function
32 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_DigestDoFinal function
33 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_AllocateOperation function
34 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_FreeTransientObject function
35 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_AEUpdate function
36 | mTower | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_ResetOperation function
37 | mTower | Untrusted pointer dereference | Reported | Uncertain (provided the PoC to the vendor)
38 | mTower | Untrusted pointer dereference | Reported | Uncertain (provided the PoC to the vendor)
39 | TinyTEE | Allocation of resources without limits or throttling | Confirmed | TEE_Malloc allows a trusted application to achieve excessive memory allocation via a large len value
40 | TinyTEE | Allocation of resources without limits or throttling | Confirmed | TEE_Realloc allows a trusted application to achieve excessive memory allocation via a large len value
41 | TinyTEE | NULL pointer dereference | Confirmed | TEE_AllocateTransientObject contains a NULL pointer dereference on the parameter object
42 | TinyTEE | Untrusted pointer dereference | Confirmed | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_DigestUpdate function
43 | TinyTEE | Untrusted pointer dereference | Confirmed | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_SetOperationKey function
44 | TinyTEE | Untrusted pointer dereference | Confirmed | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_SetOperationKey function
45 | TinyTEE | Untrusted pointer dereference | Confirmed | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_ResetOperation function
46 | TinyTEE | Untrusted pointer dereference | Confirmed | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_FreeOperation function
47 | TinyTEE | Untrusted pointer dereference | Confirmed | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_CipherDoFinal function
48 | TinyTEE | Untrusted pointer dereference | Confirmed | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_CipherInit function
49 | TinyTEE | Untrusted pointer dereference | Confirmed | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_AllocateOperation function
50 | TinyTEE | Untrusted pointer dereference | Confirmed | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_AsymmetricSignDigest function
51 | TinyTEE | Untrusted pointer dereference | Confirmed | An invalid pointer dereference can be triggered when a TA tries to read a malformed TEE_OperationHandle by the TEE_CipherUpdate function
52 | Link TEE Air | NULL pointer dereference | Reported | teememcpy calls teeosamemcpy, which contains a NULL pointer dereference on the result object
53 | Link TEE Air | NULL pointer dereference | Reported | teestrcpy calls teeosastrcpy, which contains a NULL pointer dereference on the result object
54 | Link TEE Air | NULL pointer dereference | Reported | teememset calls teeosastrcpy, which contains a NULL pointer dereference on the result object
55 | Link TEE Air | Buffer overflow | Reported | teehashupdate does not check the size of its second parameter "size" and calls devioctl, which triggers an invalid memory access with a large "size" value when consecutively copying 64 bytes in teeosamemcpy
56 | Link TEE Air | NULL pointer dereference | Reported | teememmove calls teeosamemmove, which contains a NULL pointer dereference on the result object
57 | Link TEE Air | NULL pointer dereference | Reported | teestrcat calls teeosastrcat, which contains a NULL pointer dereference on the result object
58 | Link TEE Air | Buffer overflow | Reported | teehashdigest does not check the size of its third parameter "size" and calls devioctl, which triggers a heap overflow with a large "size" value and causes an invalid pointer dereference in teeosafree
59 | Link TEE Air | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a trusted application tries to access an invalid address in the poolfree function called by teeosafree and teefree
60 | Link TEE Air | NULL pointer dereference | Reported | teestrncpy calls teeosastrncpy, which contains a NULL pointer dereference on the result object
61 | Link TEE Air | NULL pointer dereference | Reported | teestrcasecmp calls teeosastrcasecmp, which contains a NULL pointer dereference on the s1 and s2 objects
62 | Link TEE Air | NULL pointer dereference | Reported | teememcmp calls teeosamemcmp, which contains a NULL pointer dereference on the s1 and s2 objects
63 | Link TEE Air | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a trusted application tries to access an invalid address in the teehashinit function
64 | Link TEE Air | NULL pointer dereference | Reported | teestrncat calls teeosastrncat, which contains a NULL pointer dereference on the result object
65 | Link TEE Air | NULL pointer dereference | Reported | teestrnlen calls teeosastrnlen, which contains a NULL pointer dereference on the s object
66 | Link TEE Air | Buffer overflow | Reported | teebase64encode does not check the size of its parameters "srclen" and "dst", which triggers a buffer overflow if "srclen" is larger than the size of "dst" and ruins the metadata of the next chunk
67 | Link TEE Air | NULL pointer dereference | Reported | teestrlen calls teeosastrlen, which contains a NULL pointer dereference on the s object
68 | Link TEE Air | NULL pointer dereference | Reported | teestrcmp calls teeosastrcmp, which contains a NULL pointer dereference on the s1 and s2 objects
69 | Link TEE Air | Buffer overflow | Reported | teehashfinal does not check the size of its first parameter "dgst" and calls devioctl, which triggers a heap overflow if the size of "dgst" is smaller than 48 bytes and causes an invalid pointer dereference in teeosafree
70 | Link TEE Air | Untrusted pointer dereference | Reported | An invalid pointer dereference can be triggered when a trusted application tries to access an invalid address in the teeosamemcpy function called by teeaesinit
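Many rows in Table 6 share one pattern: an unchecked size or handle parameter reaches a TEE API deep in a stateful call sequence. Appendix D.1 notes that SyzTrust steers the fuzzer toward such states using state coverage plus code coverage as composite feedback; the sketch below models such a seed-retention rule in simplified form (the names and data shapes are illustrative, not SyzTrust's actual implementation).

```python
from dataclasses import dataclass

@dataclass
class RunResult:
    edges: set    # branch/edge coverage, e.g. decoded from an ETM trace
    states: set   # Trusted OS state variables observed after the run

def is_interesting(run: RunResult, corpus_edges: set, corpus_states: set) -> bool:
    """Retain a test case if it contributes new code edges OR a new OS state.

    Composite feedback: either kind of novelty keeps the seed, so sequences
    that only change internal state (without new code) still survive."""
    novel = (run.edges - corpus_edges) or (run.states - corpus_states)
    if novel:
        corpus_edges |= run.edges
        corpus_states |= run.states
        return True
    return False
```

A test case that adds no new edges can still be retained for reaching a new Trusted OS state, which is how deep, handle-dependent bugs like the malformed TEE_OperationHandle dereferences above become reachable.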
2309.15324 1 DefectHunter: A Novel LLM-Driven Boosted-Conformer-based Code Vulnerability Detection Mechanism Jin Wang‡†, Zishan Huang‡†, Hengli Liu‡, Nianyi Yang‡, Yinhao Xiao ‡∗ ‡School of Information Science, Guangdong University of Finance and Economics, Guangzhou, China. †These authors contributed equally to this work ∗Corresponding Author Email: 20191081@gdufe.edu.cn Abstract—One of the most pressing threats to computing tools like Flawfinder [1] and Findbugs [2]. Dynamic analysis systems is software vulnerabilities, which can compromise both involves monitoring a running program to identify potential hardware and software components. Existing methods for vul- security issues. Despite their utility, manual methods are nerability detection remain suboptimal. Traditional techniques labor-intensive and susceptible to human error. Additionally, are both time-consuming and labor-intensive, while machine- learning-based approaches often underperform when applied automated tools may fall short in detecting intricate vulner- to complex datasets, due to their inability to capture high- abilities that necessitate an in-depth understanding of system dimensional relationships. Previous deep-learning strategies also architectureandoperationallogic.Withtheadventofmachine fall short in capturing sufficient feature information. Although learning technologies, novel approaches like VCCFinder [3], self-attention mechanisms can process information over long which utilizes Support Vector Machines, and Ghaffarian et distances, they fail to capture structural information. In this paper, we introduce DefectHunter, an innovative model for vul- al. conducted a comparison commonly used machine-learning nerabilityidentificationthatemploystheConformermechanism. techniques and found that the Random Forest algorithm out- Thismechanismfusesself-attentionwithconvolutionalnetworks performed the others. 
[4] These strategies, however, may lack to capture both local, position-wise features and global, content- efficacy when applied to complex, industrial-scale data sets based interactions. Furthermore, we optimize the self-attention due to insufficient feature capture. The rise of deep learning mechanisms to mitigate the issue of excessive attention heads introducing extraneous noise by adjusting the denominator. We techniques offers new avenues for vulnerability identifica- evaluatedDefectHunteragainsttenbaselinemethodsusingsixin- tion, with graph-based methods like FUNDED [5] and De- dustrialandtwohighlycomplexdatasets.OntheQEMUdataset, vign[6],andsemantic-basedapproacheslikeCodebert[7]and DefectHunterexhibiteda20.62%improvementinaccuracyover CodeT5 [8]. Moreover, large language models such as GPT- Pongo-70B, and for the CWE-754 dataset, its accuracy was 4 [9] have substantially augmented the potential of semantic- 14.64% higher. To investigate how DefectHunter comprehends vulnerabilities, we conducted a case study, which revealed that based techniques. Nonetheless, these methods still encounter our model effectively understands the mechanisms underlying challenges in effectively fusing structural and semantic data, vulnerabilities. limiting their industrial applicability. To address these shortcomings, we introduce Defec- tHunter, a comprehensive framework comprising three key I. INTRODUCTION modules: structural information processing, a pre-trained ma- The escalating reliance on computer systems and the chinelearningmodel,andtheConformermechanism[10].The Internethasengenderedtransformativeshiftsacrossvariousdi- Conformer mechanism synergizes self-attention with convo- mensionsofhumanactivity.Simultaneously,ithasexacerbated lution, enabling the capture of both localized and extended vulnerabilities associated with computational infrastructure. dependencies in input sequences. This architectural choice According to the U.S. 
Department of Commerce’s National facilitates intricate relationship modeling and optimizes com- Institute of Standards and Technology (NIST)1, the National putational efficiency, surpassing prior models like the Trans- Vulnerability Database (NVD) cataloged an unprecedented former. DefectHunter processes structural information from 18,378 vulnerabilities in 2021, underscoring a concerning code snippets and tokenizes them into feature arrays encap- landscape in computer security. sulating contextual semantics. Moreover, we have refined the Over the years, scholarly research and professional ini- self-attention mechanisms within the Conformer architecture tiativesincybersecurityhaveyieldedsignificantadvancements to mitigate the issue of noise introduced by the utilization of in vulnerability identification through various methodologies softmaxinattentionmechanisms.[11].DefectHunterhasbeen and tools. Conventional vulnerability identification techniques fullyimplementedanditssourcecodeispubliclyavailableon often employ static and dynamic analyses. In static analysis, GitHub. experts scrutinize source code, frequently utilizing automated In our experiment, we employed an array of industrial datasets, as well as more intricate and challenging datasets 1https://www.nist.gov/ like FFmpeg and QEMU, to evaluate the performance of 3202 peS 72 ]RC.sc[ 1v42351.9032:viXraour novel vulnerability identification model, DefectHunter. B. Self-Attention Based Methods Our model was benchmarked against 10 established baseline Self-attention based methods offer various trade-offs be- methods. The results demonstrate that the incorporation of tweencomputationalefficiencyandtheabilitytocapturelong- structural information substantially enhances capacity of De- range dependencies, making them valuable tools in the fields fectHunter for identifying vulnerabilities. Compared to other
of deep learning and natural language processing. transformer-based models, DefectHunter excels in terms of Transformer: Introduced by Vaswani et al. [12], the ACC. Specifically, for CWE datasets, DefectHunter achieved Transformerarchitecturesurmountsthelimitationsintrinsicto an ACC that is 14.64% higher than that of Pongo-70B. In both recurrent and convolutional neural networks in capturing QEMUdatasets,themodelsurpassedPongo-70Bbyachieving long-range dependencies. This makes it particularly apt for a 20.62% improvement in ACC. This research makes the tasks that require the interpretation of sequential or structured following contributions: data, such as machine translation and natural language under- • We introduce DefectHunter, a novel vulnerability identi- standing. fication model, fully implemented and publicly available Central to the Transformer is the self-attention mech- on GitHub 2. anism, which endows the model with the capability to se- • We utilize Conformer blocks, an innovative architec- lectively focus on various portions of the input sequence ture that seamlessly combines convolutional layers with during predictive tasks. This mechanism allows the model self-attention mechanisms, to enhance performance in to effectively grasp both local and global contexts, thereby sequence modeling tasks. The Conformer is adept at ensuringhighparallelizabilityandreducingtrainingdurations. efficiently capturing both local contextual information The architecture is constituted by an encoder-decoder design, and intricate feature patterns, whereas the self-attention featuringmultiplelayersofbothself-attentionandfeedforward module facilitates long-range interactions. neural networks. • We refined the self-attention mechanisms within the Conformer: Developed by Anmol Gulati et al. [10], the Conformer block to mitigate the problem of excessive Conformer architecture enhances the Transformer by amelio- attention heads introducing unwarranted noise. 
rating some of its deficiencies. While the Transformer excels in the capture of global contexts, it is challenged by long sequences due to its quadratic computational complexity. The Conformer rectifies this by ingeniously combining convolutional and self-attention mechanisms. A noteworthy innovation within the Conformer is the convolutional feed-forward module, facilitating efficient capture of local dependencies. This architectural nuance allows the Conformer to sustain high performance levels even in scenarios involving extended sequences. Moreover, the Conformer retains the prowess of the self-attention in effectively capturing global contexts.
• We leveraged a pre-trained large language model to extract semantic information from code snippets, leading to a marked improvement in training efficiency.
Paper Organization. The rest of the paper is organized as follows. Section II presents recently advanced background knowledge of our approach. Section III details the design and technical components of the DefectHunter. Section IV demonstrates our implementation of DefectHunter. Section V reports our evaluation results and case studies on DefectHunter. Section VI outlines the most related work. Section VII concludes the paper with a future research discussion.
C. Large Language Models
Over the past decade, large language models have emerged as revolutionary entities in the field of natural language processing. These models are characterized by their gargantuan parameter counts, often ranging from hundreds of millions to billions, enabling them to apprehend complex linguistic structures and produce contextually pertinent text. Prominent examples include the GPT series by OpenAI, BERT
II. BACKGROUND
A. Graph
We employ both data flow graphs and control flow graphs as inputs to our computational model, thereby offering essential structural insights. These graph-based frameworks constitute the cornerstone for the systematic interpretation of injected code snippets.
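These graph inputs are ultimately encoded as adjacency matrices before they reach the model (Section IV-A). A minimal illustrative sketch of that encoding step, assuming a toy hand-written edge list rather than the paper's tree-sitter pipeline:

```python
import numpy as np

def edges_to_adjacency(edges, num_nodes):
    """Turn a directed edge list (e.g. extracted from a CFG or DFG)
    into a dense adjacency matrix."""
    adj = np.zeros((num_nodes, num_nodes), dtype=np.int8)
    for src, dst in edges:
        adj[src, dst] = 1
    return adj

# Toy CFG for a loop: init -> condition -> body -> condition -> exit
cfg_edges = [(0, 1), (1, 2), (2, 1), (1, 3)]
adj = edges_to_adjacency(cfg_edges, num_nodes=4)
print(adj)
```

The back-edge (2, 1) is what distinguishes the loop in the control flow graph from straight-line code.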
Graphs confer multiple benefits in the realm of code analysis. Firstly, they encapsulate the inherent structural aspects of code, thereby facilitating a comprehensive understanding of both data dependencies and control flow constructs. This exhaustive viewpoint allows for the discernment of complex relationships within the software ecosystem. Secondly, graphs afford abstraction by distilling the high-level facets of the code and disregarding non-essential elements. This abstraction considerably mitigates the analytical complexity. Lastly, graph-based formulations are naturally compatible with machine learning paradigms and other computational methodologies, thus offering a structured input conducive to predictive modeling.
2https://github.com/WJ-8/DefectHunter
by Google, and T5, also by Google. These models hold significant potential in the domain of code analysis. When supplied with code snippets, they are capable of aiding developers in a variety of tasks, such as code auto-completion, bug identification, and refactoring recommendations. Their intrinsic ability to understand both the context and syntax of programming languages renders them invaluable assets in software development endeavors.
III. DESIGN OF DEFECTHUNTER
This section describes the architecture of DefectHunter, which comprises three fundamental components: structural information processing, a pre-trained model, and the Conformer mechanism. The workflow begins with the extraction of structural information using an open-source tool. A semantic information feature matrix is subsequently generated through a specialized large language model designed for code embedding.
model with essential information about the ordering and relative positions of sequence elements. The sinusoidal encodings are generated according to the formulas below:
for even i:

pos_i = [ sin(pos / 10000^(2i/d)), cos(pos / 10000^(2i/d)) ]    (2)

for odd i:

pos_i = [ cos(pos / 10000^(2i/d)), sin(pos / 10000^(2i/d)) ]    (3)

where i represents the position within the sequence, d is the dimensionality of the sinusoidal encoding, and pos_i corresponds to the sinusoidal encoding of the i-th token. Prior to the self-attention process, the input sequence is element-wise multiplied by these sinusoidal encodings.
The Conformer is a sophisticated architectural design that synergizes the capabilities of Convolutional Neural Networks (CNN) and self-attention mechanisms to augment sequence modeling tasks. This amalgamation enables the Conformer to efficiently capture both local and global dependencies in a sequence, thereby overcoming some limitations inherent in conventional Transformer models. Compared to traditional Transformers, the Conformer is adept at identifying both position-wise local features and content-based global interactions. The incorporation of a CNN module allows for effective
The Conformer mechanism is then applied to distill vulnerability features from both the structural and semantic data. Adjustments have been made to the self-attention mechanisms within the Conformer to address the issue of superfluous attention heads contributing to extraneous noise. Ultimately, a multi-layer perceptron is employed to ascertain the presence or absence of vulnerabilities.
A. Structural Information
Structural information serves as a foundational component in the architecture of DefectHunter, generating a diverse set of graphs such as Abstract Syntax Trees (AST), Control Flow Graphs (CFG), and Data Flow Graphs (DFG) from source code.
The AST provides a hierarchical framework that delineates the abstract syntax of a program, with each node symbolizing a syntactic construct and edges representing hierarchical relationships. This model substantially aids in the analysis and understanding of code. The CFG outlines possible execution pathways within a program, employing nodes to signify program constructs and edges to mark transitions contingent on branching operations.
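The sinusoidal encoding scheme of Eqs. (2)-(3) alternates sine and cosine by index parity. A numpy sketch of this kind of sinusoidal table (a standard construction shown for illustration; the exact pairing in the paper's layer may differ):

```python
import numpy as np

def sinusoidal_encoding(seq_len: int, d: int) -> np.ndarray:
    """Sinusoidal position encodings: sin on even embedding indices,
    cos on odd ones (cf. Eqs. (2)-(3))."""
    pos = np.arange(seq_len)[:, None]   # token positions, shape (seq_len, 1)
    idx = np.arange(d)[None, :]         # embedding indices, shape (1, d)
    angle = pos / np.power(10000.0, (2 * (idx // 2)) / d)
    return np.where(idx % 2 == 0, np.sin(angle), np.cos(angle))

enc = sinusoidal_encoding(seq_len=6, d=8)
x = np.ones((6, 8))
x_pos = x * enc  # element-wise multiplication with the input, as described above
print(enc[0])    # position 0: sin(0)=0 on even indices, cos(0)=1 on odd
```

At position 0 every even index is sin(0) = 0 and every odd index is cos(0) = 1, which makes the table easy to sanity-check.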
This graph clearly identifies entry extractionoflocalcontextandintricatefeaturepatterns,while and exit points, furnishing a visual guide to the sequence of theself-attentionmechanismisresponsibleforcapturinglong- program execution. Similarly, the DFG captures the interplay range dependencies. ofdataanddependenciesamongoperations,spotlightingvari- The Conformer block comprises three primary modules: able instantiation, modification, and usage. In the DFG, nodes aconvolutionalmodule,aself-attentionmodule,andtwofeed- stand for variables or operations, while edges signify data forwardneuralnetworks,asillustratedinFig.2.Eachmodule dependencies. Collectively, AST, CFG, and DFG contribute performs a vital function in processing various aspects of to a comprehensive understanding of a program structure, the input sequence, collectively contributing to the Conformer logic, and data flow, thereby enhancing the capabilities of efficacy. We have also implemented specific alterations to DefectHunter in defect identification and analysis. the traditional Conformer block. Rather than solely utilizing sinusoidal positional encodings for multi-head attention, we B. Code Sequence Embedding (CSE) combine these encodings with the input matrix before feeding the result into a fully connected layer. This adjustment opti- At the heart of Code Sequence Embedding (CSE) lies mizestheencodingprocessattheinputstageoftheConformer the use of pre-trained models to generate word embeddings. model. Each token is transformed into a feature vector, referred to as 1) Convolutional Module: The Convolutional Module acontextualtokenrepresentation,throughapre-trainedmodel. withintheConformerarchitectureemploysCNNtoeffectively Distinct from traditional word-embedding methods, which capture local dependencies in sequential data. 
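The convolution + batch-normalization + ReLU pattern this module relies on can be sketched in a few lines of numpy (an illustration under my own simplifications — 1-D input, no learned batch-norm scale/shift — not the paper's Keras layers):

```python
import numpy as np

def conv_bn_relu(x, w, b, eps=1e-5):
    """1-D 'valid' convolution followed by batch normalization and ReLU."""
    k = len(w)
    # sliding-window cross-correlation of x with kernel w, plus bias
    conv = np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)]) + b
    # batch norm without learned gamma/beta: zero mean, unit variance
    norm = (conv - conv.mean()) / np.sqrt(conv.var() + eps)
    return np.maximum(norm, 0.0)  # ReLU clips the negative half

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
out = conv_bn_relu(x, w=np.array([0.5, 0.5]), b=0.1)
print(out)  # non-negative activations; roughly half are zeroed by ReLU
```

Normalizing before the ReLU means roughly half of the activations land below zero and are clipped, which is the local-feature gating effect the module exploits.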
techniques to mitigate the risk of overfitting in situations with limited or biased training data. Consequently, pre-training affords more relevant features. x_i denotes a given piece of code, and the acquired representation M_i through Eq. 1 is as follows:

M_i = model(x_i)    (1)

where model represents the mathematical expression of the employed pre-trained model.
C. Conformer
The Conformer employs sinusoidal positional embeddings to encode the input sequences, thereby supplying the
[Fig. 1: Design of the DefectHunter Model — a code snippet is converted into AST, DFG, CFG, and CSE inputs; each passes through N Conformer blocks and a flatten layer, and the results are concatenated into an MLP with a softmax output.]
This module is comprised of an array of convolutional layers, which operate concurrently and hierarchically to distill pertinent features from the input sequence. Mathematically, the output of a given convolutional layer can be formulated as:

Conv(x) = ReLU(BatchNorm(W ∗ x + b))    (4)

where Conv(·) denotes the convolutional layer, ReLU(·) represents the rectified linear unit activation function, BatchNorm(·) denotes batch normalization, W represents the convolutional kernel, x is the input sequence, and b represents the bias term.
2) Self-Attention Module: The self-attention module employs multi-head self-attention to capture global interdependencies within the input sequence. This is achieved by simultaneously attending to different portions of the sequence. Specifically, attention scores are computed using the dot product of the query (Q) and key (K) vectors, normalized by the square root of the dimension of the key/query space (d_k). These scores are subsequently passed through a softmax function to derive weights representing the importance of each sequence element relative to others. These weights are utilized to modulate the value (V) vectors, culminating in the output of the module. Mathematically, the multi-head self-attention operation is expressed as:

MHSA(Q, K, V) = Concat(head_1, head_2, ..., head_h) W_O    (5)

where the h heads represent different sets of learned projection weight matrices W_Q, W_K, and W_V for the queries, keys, and values respectively. Each head operates on transformed versions of the input sequences, as follows:

head_i = Attention(Q W_Qi, K W_Ki, V W_Vi)    (6)

Attention(Q_i, K_i, V_i) = Softmax( Q_i K_i^T / √d_k ) V_i    (7)

[Fig. 2: Conformer Block — two feed-forward modules surrounding a multi-head attention module with position embedding, a convolution module, and a layernorm.]
3) Attention Calculation Modification: A critical issue arises in Eq. 7. The operation Q K^T aims to establish correlations among token (embedding) vectors positioned differently, essentially constructing a square correlation matrix. This matrix is composed of dot-product values, scaled by 1/√d, wherein each column and row corresponds to a specific token position. Subsequently, each row of this square matrix undergoes a softmax operation, generating probabilities that serve as a mechanism for combining the value vectors within the matrix V. The resultant probability-weighted V matrix is then added to the initial input vector. This cumulative sum is propagated through the neural network for further layers of processing.
In the context of multi-head attention, this entire procedure is repeated multiple times in parallel for each layer. In essence, the embedding vector is partitioned, and each attention head employs the complete vector's information to annotate a distinct and non-overlapping section of the resulting vector.
However, the use of softmax introduces a drawback, compelling each attention head to annotate even when it lacks relevant information.
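This annotate-or-not problem motivates the adjustment of Eq. 8, described as adding 1 to the softmax denominator (after Evan Miller [11]). A minimal numpy sketch of that reading — my illustration, independent of the paper's Keras implementation — contrasting it with standard softmax attention:

```python
import numpy as np

def softmax_one(x, axis=-1):
    """Softmax with an extra 1 in the denominator ("quiet softmax"):
    rows may sum to less than 1, so a head can effectively abstain."""
    m = np.maximum(0.0, x.max(axis=axis, keepdims=True))  # shift for stability, keeping the implicit 0 logit
    e = np.exp(x - m)
    return e / (np.exp(-m) + e.sum(axis=axis, keepdims=True))

def attention_head(Q, K, V, quiet=True):
    """Scaled dot-product attention for a single head (cf. Eqs. 7-8)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    if quiet:
        w = softmax_one(scores)                       # rows sum to < 1
    else:
        e = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w = e / e.sum(axis=-1, keepdims=True)         # rows sum to exactly 1
    return w @ V, w

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
_, w_std = attention_head(Q, K, V, quiet=False)
_, w_quiet = attention_head(Q, K, V, quiet=True)
print(w_std.sum(axis=-1))    # each row sums to 1
print(w_quiet.sum(axis=-1))  # each row sums to less than 1
```

When every score in a row is strongly negative, the extra 1 in the denominator drives all of that row's weights toward zero instead of forcing them to sum to 1 — exactly the "optional annotation" behavior the text argues for.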
While effective for discrete selection tasks, softmax is less suited for optional annotation, especially when the output is summative. This challenge is exacerbated in multi-head attention, where specialized heads are less likely to contribute compared to their general-purpose counterparts. This results in unnecessary noise, undermining the performance of the model.
To rectify this issue, inspired by Evan Miller [11], we modify the denominator in the softmax equation by adding 1. This ensures a positive derivative and a bounded output, thereby stabilizing the model.

Attention(Q_i, K_i, V_i) = Softmax( Q_i K_i^T / (1 + √d_k) ) V_i    (8)

[Fig. 3: Tokenizing Result — the identifier drc_set_unusable is split apart by the default tokenizer and kept intact after the fix.]
4) Feed-Forward Neural Network: The feed-forward neural network in Conformer introduces non-linear transformations to capture complex relationships between features. It consists of two linear layers, with a ReLU activation function in between, as defined by the following equation:

FFN(x) = ReLU(W_2 (ReLU(W_1 x + b_1)) + b_2)    (9)

where FFN(·) represents the feed-forward neural network; W_1, W_2, b_1, and b_2 are weight and bias terms, respectively.
into UniXcoder to generate Code Source Embeddings (CSEs) as matrices. It is important to note that UniXcoder supports up to 768 input tokens, necessitating the configuration of the max_length parameter in the tokenizer.
IV. IMPLEMENTATION
This section provides a comprehensive description of the DefectHunter implementation.
C. Network Implementation
We implemented these networks through custom layers of Keras [16].
1) Multi-Head Self-Attention: We introduce a multi-head self-attention layer that accepts the embedding dimension, the number of heads, and an optional dropout rate as its parameters. This layer comprises multiple components: query_dense, key_dense, and value_dense perform linear transformations on the input, while the attention mechanism computes the attention scores. The separate_heads
A. Building Graph
The separate_heads We utilize the tree-sitter library3 to generate AST from functionreshapesandtransposestheinputtogeneratemultiple source code snippets. Our methodology consists of parsing heads, and the call function orchestrates these components the source code using the C language parser supplied by to yield the final output. The attention mechanism follows the tree-sitter, which results in AST objects. These ASTs are formuladetailedinEq.8,wherethequery,key,andvalue subsequently converted into Graphviz Dot format. From these tensors are multiplied to calculate the score. This score is √ Dot representations, edges are extracted and transformed into scaled by 1+ d ,and the softmax function is subsequently k adjacencymatrices.ForCFGandDFG,thepreliminaryphase applied to produce the attention weights. Finally, the attention involves parsing the code using tree-sitter. AST nodes are output is obtained by multiplying the weights with the value traversed according to specific rules to generate either a CFG, tensor. which contains nodes corresponding to program execution 2) SinusoidalPositionEncoding: Tointegratesinusoidal statements, or a DFG, which includes nodes that represent position encoding into DefectHunter, we implement a sepa- variable declarations and modifications. rate layer named Sinusoidal Position Embedding. This layer processes the input sequence by applying sinusoidal position B. Building CSE embeddings, which are computed based on generated position We leverage the transformer library [13] to import IDsandindices.Theinputsequenceisthenmultipliedbythese
UniXcoder [14], a unified cross-modal pre-training model embeddings. for programming languages. In our experiments, we encoun- tered challenges in tokenizing the source code, which led V. EVALUATION to imprecise word embeddings. As illustrated in Fig.3, us- A. Experimental Setup ing a function name from the QEMU dataset—specifically, Inthissection,wefirstgiveapresentationoftheprepara- drc_set_unusable—we observed that UniXcoder tok- tory measures taken before the experiments were carried out, enizes it inaccurately. To mitigate this issue, we utilized includingthesetupoftheenvironmentandtheselectionofthe the Natural Language Toolkit (NLTK) [15] to assist in the model training parameters. Then, we provide an elaborative tokenization process. Furthermore, we generated a specialized description of the datasets employed, notably focusing on the word list that UniXcoder uses during tokenization, ensuring twodatasetswecurated.Oneistailoredforpre-trainingalarge thepreservationofspecificwords.Thesourcecodeisthenfed language model (LLM), and the other is specifically tailored 3https://tree-sitter.github.io/tree-sitter/ for training the DefectHunter model. 5We proceed to present comparative experiments against has been previously employed in state-of-the-art research [5], establishedbenchmarks,therebydemonstratingtheefficacyof [6], [8]. These datasets fall into two primary categories. DefectHunter in defect detection. Additionally, we conduct a The first category consists of standard datasets derived from decompositionanalysistoinvestigatetherelationshipbetween the Software Assurance Reference Dataset (SARD) [17], modelperformanceandkeyinfluencingfactors.Inconcluding specifically CWE-362, CWE-476, CWE-754, and CWE-758. 
our evaluation, we provide an in-depth case study exploring These datasets cover a spectrum of software vulnerabilities, the proficiency of DefectHunter in identifying software vul- such as race conditions, null pointer dereferences, improper nerabilities and its potential real-world applications. handling of exceptional conditions, and reliance on unde- 1) Performance Metrics: We examined the performance fined behavior. To thoroughly evaluate the performance of of the model in terms of F1-Score and Accuracy, and here is our model performance, we introduce a second category that a brief description of these performance metrics. includesmorecomplexdatasets:FFmpegandQEMU.FFmpeg Accuracy: This metric measures the proportion of cor- serves as a multimedia processing framework, while QEMU rectly classified samples to the total number of samples. operates as an emulator; both play crucial roles in software Mathematically, it’s calculated as: developmentandfeaturecomplexcodebases.UnliketheCWE datasets, vulnerabilities in FFmpeg and QEMU are inherently Number of Correct Predictions complex, thereby imposing stringent requirements on our Accuracy= Total Number of Predictions model’s vulnerability detection capabilities. Comprehensive F1-Score: F1-Score is the harmonic mean of precision detailsofthedatasets,includingthenumberofvulnerabilities, and recall, offering a balance between the two metrics. It is thedistributionacrosstraining,validation,andtestingsubsets, particularly useful when the classes are imbalanced. Mathe- and the overall dataset size, are presented in Table I. matically, it is the harmonic mean of precision and recall: 1) DatasetForLLM: Wehavecollectedadataset,termed CWE-basic, sourced from the Common Weakness Enumer- Precision×Recall ation Specification to augment the capabilities of LLM in F1=2× Precision+Recall identifying a broader array of vulnerabilities. 
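Labeled snippets destined for LLM fine-tuning are wrapped as Alpaca-format records (Fig. 4). An illustrative helper for this packaging step — the function name is mine; the field values follow Fig. 4:

```python
import json

# Instruction text follows the prompt shown in Fig. 4 / the Pongo prompt.
INSTRUCTION = (
    "find potential security issues in the following code. "
    "If it has vulnerability, output: Vulnerabilities Detected: type of "
    "vulnerability. otherwise output<no vulnerability detected>:"
)

def to_alpaca_record(code: str, vulnerable: bool) -> dict:
    """Wrap one labeled code snippet as an Alpaca-format training record."""
    output = ("This program snippet has a vulnerability"
              if vulnerable
              else "There are no obvious vulnerabilities in this program fragment")
    return {"instruction": INSTRUCTION, "input": code, "output": output}

record = to_alpaca_record("int sum = 0; for (int i = 1; i <= 10; i++) sum += i;",
                          vulnerable=False)
print(json.dumps(record, indent=2))
```

Running this over the FFmpeg, QEMU, and CWE-basic examples yields the instruction/input/output triples the fine-tuning pipeline expects.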
This dataset Precision: Precision quantifies the proportion of positive encompasses 77 distinct types of Common Weakness Enu- identificationsthatwereactuallycorrect.Itfocusesontherate meration (CWE) vulnerabilities. Specifically, we employed of false positives. Mathematically: the CWE instances cataloged by the National Vulnerability Database and constructed our dataset based on the examples True Positives provided therein. To automate this process, we developed a Precision= True Positives+False Positives web crawler4. Each example in the dataset was segmented into four components: a code snippet, a vulnerability indica- Recall:Recallquantifiestheproportionofactualpositive tor, the associated programming language, and a contextual cases that were correctly identified. It focuses on the rate of description. Subsequently, we isolated code snippets featuring false negatives. Mathematically: vulnerabilities and formatted them for LLM training as de- picted in Fig.4. Specifically, the LLM requires a specialized True Positives Recall= inputformat,denotedastheAlpacaformat.Toadheretothis True Positives+False Negatives requirement,the FFmpeg, QEMU,and CWEdatasets mustbe 2) Environment Configuration: The computational envi- modified to align with the format illustrated in Figure 4. ronment for this study is configured with a high-performance workstation,outfittedwithdualAMDEPYC7543processors, C. Baseline Methods each containing 32 cores. The system is supplemented by 320GB of RAM and an array of four NVIDIA A40 GPUs, WeevaluateDefectHunterbycontrastingitsperformance each equipped with 48GB of VRAM. On the software front, with ten disparate baseline methods, encompassing traditional the implementation utilizes TensorFlow v2.7.0 in conjunction neural network architectures such as CNN and GNN, as well with Keras v2.7.0. The employed neural network architecture as Transformers and LLMs. 
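The four metrics above can be computed directly from confusion-matrix counts (a self-contained sketch, not tied to the paper's evaluation code):

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)  # harmonic mean of precision and recall
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

m = classification_metrics(tp=40, fp=10, fn=20, tn=30)
print(m)  # accuracy 0.7, precision 0.8, recall ~0.667, f1 ~0.727
```

The guards against empty denominators matter on skewed CWE splits, where a degenerate classifier can produce zero predicted positives.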
incorporatesaConformerencodercomposedof12Conformer 1) CNN[18]: WeutilizeWord2Vecforcodeembedding,
blocks. Each block consists of eight attention heads and a subsequently applying a CNN for feature extraction. The fully connected network with 1,024 dimensions. Optimization model culminates with a singular dense classification layer is carried out using the Adam optimizer, configured with a to achieve categorization. learning rate of 1e−5 and a batch size of 64. Experiments 2) CodeBERT [7]: CodeBERT is a bimodal pre-trained on the FFmpeg and QEMU datasets are conducted over 50 modeloptimizedforbothprogrammingandnaturallanguages. epochs, whereas the training on the CWE dataset is restricted Built upon a Transformer architecture, it is trained with a to30epochs.Themodel’sarchitecturecomprisesasubstantial hybrid objective function incorporating a replaced token de- 0.19 billion parameters. tection pre-training task. CodeBERT can generate generalized representations applicable to various natural language and B. Dataset programming language tasks. For vulnerability identification, To validate the effectiveness of our proposed model, we utilize an extensive set of six datasets, each of which 4https://github.com/Yuzu815/nist-vul-crawler 6TABLE I: Dataset Information Project Training Set Validation Set Test Set Total Vul Non-Vul FFmpeg 3958 462 499 4919 2438 2481 QEMU 10903 1378 1318 13600 5687 7913 CWE-362 451 56 57 564 189 375 CWE-476 1298 162 163 1623 396 1227 CWE-754 4034 504 505 5043 1359 3684 CWE-758 1089 136 137 1362 367 995 CWE-basic 222 0 0 222 222 0 { "instruction":"find potential security issues in the following code. If it has vulnerability, output: Vulnerabilities Detected: type of vulnerability. otherwise output<no vulnerability detected>:, " "input":"code", "output":"There are no obvious vulnerabilities in this program fragment"or"This program snippet has a vulnerability" } Fig. 4: Alpaca Dataset Format its output is integrated into a multi-layer fully connected 8) Codex [21]: Codex is an advanced GPT-based lan- network. 
guage model fine-tuned on publicly accessible GitHub code. 3) SELFATT [12]: We tokenize source code using a Although proficient in Python programming, Codex has lim- tokenizer, yielding a sequence of code tokens. Employing a itations in understanding complex docstrings and in mapping multi-headself-attentionmechanism,weprocessthesequence, operations to variables effectively. culminating in a fully connected layer for classification. 9) Pongo-13B: Pongo-13Bleveragesthellama2-13b[22] 4) Devign [6]: Devign is a model based on GNN that base model, fine-tuned through 600 iterations using the aims to identify software vulnerabilities within source code. QLORA [23] technique. Employing four A40 48GB GPUs, The model employs graph-level classification techniques fa- it is trained on intricate code segments from the FFmpeg cilitated by a diverse set of code semantic representations. andQEMUdatasets,supplementedbytheCWE-basicdataset. A unique aspect of Devign is the inclusion of a novel Pongo-13B is open-sourced on Hugging Face5. Convolutional module, which adeptly extracts salient features Guiding the model responses is a meticulously from pre-learned node representations to augment the graph- structured prompt: “Find potential security level classification. This methodology is predicated on the issues in the following code. If it has use of comprehensive and multifaceted code semantic rep- a vulnerability, output: Vulnerabilities resentations. Utilizing a GNN-based framework enables the Detected: type of vulnerability. otherwise discernment of intricate relationships and patterns in code output<no vulnerability detected>:” structures, often serving as indicators of vulnerabilities. Under this prompt, the model outputs exhibit discernible 5) VulDeepecker[19]: VulDeePeckerisadeeplearning- patterns: when vulnerabilities are detected in the provided based framework for automated software vulnerability detec- code, the model generates the response “This program tion. 
It utilizes code gadgets—clusters of semantically related snippet has a vulnerability.” followed by the lines of code—converted into vectors as its representation specific type of vulnerability detected. In cases where the methodology. code is deemed free of vulnerabilities, the model pro- 6) FUNDED [5]: FUNDED is a novel learning frame- duces the response “This program snippet has no work for building effective vulnerability detection models. vulnerability detected.” Leveraging graph neural networks, FUNDED employs a 10) Pongo-70B: Built upon the llama2-70b model, unique graph-based learning approach to capture program Pongo-70B undergoes 800 training steps, maintaining other control,data,andcalldependencies.Unlikepreviousmethods, parametersconsistentwithPongo-13B.Itisalsoopen-sourced which often consider programs sequentially or as untyped on Hugging Face6. graphs, FUNDED utilizes a graph representation of source code. This representation connects statements through rela- tional edges, encapsulating syntax, semantics, and flows for D. Experimental Results improved code representation in vulnerability detection. In this subsection, we provide a detailed analysis of the 7) DeepVulSeeker [20]: DeepVulSeeker is a framework experimental results obtained from evaluating our proposed forvulnerabilityidentificationthatintegratescodegraphstruc- model, DefectHunter, on different datasets. The results are tures and semantic features. It overcomes the limitations presented in Tables III, II. We compare our model against of existing methods by utilizing Graph Representation Self- Attention and pre-training mechanisms to achieve high accu- 5https://huggingface.co/wj2003/Pongo-13B
racy.

6 https://huggingface.co/wj2003/Pongo-70B

TABLE II: Experimental Results on CWE

Method              CWE-476         CWE-758         CWE-362         CWE-754
                    ACC     F1      ACC     F1      ACC     F1      ACC     F1
CNN [18]            0.5819  0.3812  0.6731  0.2031  0.6162  0.5083  0.7371  0.343
CodeBERT [7]        0.9083  0.8387  0.8356  0.7177  0.7691  0.8695  0.5246  0.4706
SELFATT [12]        0.8401  0.6213  0.9121  0.8656  0.6627  0.0307  0.9219  0.8769
Devign [6]          0.8250  0.5333  0.8822  0.8164  0.8571  0.8148  0.9291  0.8794
VulDeePecker [19]   0.8070  0.8932  0.7887  0.8818  0.9500  0.9048  0.9574  0.9351
FUNDED [5]          0.8889  0.9087  0.9583  0.9615  0.9446  0.9521  0.9286  0.9331
DeepVulSeeker [20]  0.9080  0.8052  0.9927  0.9859  0.9123  0.8387  0.9999  0.9999
Pongo-13B           0.7485  0.6239  0.7591  0.3529  0.5965  0.4889  0.6950  0.2735
Pongo-70B           0.8405  0.6829  0.7445  0.05405 0.7368  0.5714  0.8535  0.6606
DefectHunter        0.9141  0.8293  0.9980  0.9922  0.9474  0.9518  0.9999  0.9999

TABLE III: Experimental Results on FFmpeg and QEMU

Method              FFmpeg          QEMU
                    ACC     F1      ACC     F1
CNN [18]            0.5493  0.5341  0.5892  0.1330
VulDeePecker [19]   0.5637  0.5702  0.5980  0.6141
SELFATT [12]        0.5710  0.5275  0.6049  0.4904
CodeBERT [7]        0.5264  0.6138  0.5354  0.7126
Devign [6]          0.5904  0.6015  0.6039  0.3244
FUNDED [5]          0.5420  0.6800  0.5970  0.7480
DeepVulSeeker [20]  0.6354  0.6703  0.6409  0.5293
Pongo-13B           0.5991  0.6428  0.4238  0.5945
Pongo-70B           0.6132  0.5832  0.4397  0.6003
Codex [21]          F1: 0.5919 (single reported score)
DefectHunter        0.6653  0.7023  0.6459  0.7095

We compare DefectHunter against 10 baseline methods, comprising both traditional neural networks (CNN, GNN) and modern transformer-based models (SELFATT, CodeBERT, and DeepVulSeeker), as well as large language models (LLMs) such as Codex, Pongo-13B, and Pongo-70B.

• The inclusion of structural information significantly enhances the vulnerability identification performance of our model. Comparing the results of CNN, a traditional network, with Devign, we observe substantial improvements in both accuracy and F1 scores across the datasets. This underscores the effectiveness of learning local code characteristics through structural information. Moreover, FUNDED outperforms CodeBERT, indicating that structural information contributes positively to vulnerability identification.

• The advantages of attention mechanisms and pre-training are manifest. SELFATT demonstrates superior performance in terms of accuracy on the FFmpeg and QEMU datasets compared to VulDeePecker. Furthermore, CodeBERT, as a pre-trained model, consistently surpasses SELFATT across a multitude of metrics, thus underscoring the benefits of leveraging pre-trained representations.

• A salient observation is the significant contribution of the Conformer block to transformer-based models. Upon comparing DefectHunter with other transformer-based models such as CodeBERT, SELFATT, and DeepVulSeeker, it becomes evident that DefectHunter consistently outperforms them in terms of accuracy. This superiority can be attributed to the specialized architecture of the Conformer block, which integrates deformable convolutions and self-attention mechanisms, thereby enhancing the capture of intricate structural information in code.

• Despite the prodigious advancements of LLMs in various natural language processing endeavors, our results indicate that for vulnerability identification, models with smaller parameter sizes, such as DeepVulSeeker and DefectHunter, offer distinct advantages. These smaller models are more resource-efficient, requiring less hardware and facilitating easier deployment.

E. Ablation Study

To comprehensively understand the impact of individual modules within our proposed model, we performed an ablation study. The study focused on three critical components: the Conformer module, the Attention-modified layer, and the LLM. We tested the performance of our model on two tasks, FFmpeg and Qemu, using ACC and F1-score as performance metrics. The results are summarized in Table IV.

TABLE IV: Ablation Study

Method                               FFmpeg          Qemu
                                     ACC     F1      ACC     F1
DefectHunter                         0.6653  0.7023  0.6459  0.7095
DefectHunter w/o AST                 0.6232  0.6713  0.6338  0.3970
DefectHunter w/o DFG                 0.6253  0.6492  0.6331  0.4550
DefectHunter w/o CFG                 0.6333  0.6376  0.6293  0.5606
DefectHunter w/o Conformer           0.6052  0.6099  0.6118  0.3663
DefectHunter w/o Attention-modified  0.6112  0.6381  0.5944  0.5078
DefectHunter w/o LLM                 0.5230  0.5131  0.5777  0.5831

1) Conformer Module: The Conformer module is crucial for enhancing both self-attention and convolution layers for better feature extraction and representation. When we removed this module, the performance for FFmpeg dropped notably, from an ACC of 0.6653 to 0.6052 and an F1-score of 0.7023 to 0.6099. The drop was even more substantial for the Qemu task, where ACC fell from 0.6459 to 0.6118 and the F1-score plummeted from 0.7095 to 0.3663. This steep decline, especially in the F1-score for Qemu, underscores the importance of the Conformer module.

2) Attention-modified Layer: The incorporation of our Attention-modified layer enhances the model's capability to prioritize salient features in the input data. When this layer was ablated, there was a decline in accuracy (ACC) from 0.6653 to 0.6112 and a corresponding reduction in F1-score from 0.7023 to 0.6381 in the case of FFmpeg. For Qemu, a similar decline was observed: the ACC fell from 0.6459 to 0.5944, and the F1-score decreased from 0.7095 to 0.5078. Although the deterioration in performance was evident, it was less pronounced compared to the removal of the Conformer module, suggesting that the Attention-modified layer, while valuable, may not be as indispensable as the Conformer module.

3) LLM: The LLM aims to provide a structured understanding of the input data, interpreting it in a way that adds meaningful context for the task at hand. Eliminating the LLM resulted in a severe drop in performance. Specifically, for FFmpeg, the ACC plummeted from 0.6653 to 0.5230, and the F1-score sank from 0.7023 to 0.5131. Similarly, in the Qemu task, ACC fell from 0.6459 to 0.5777 and the F1-score from 0.7095 to 0.5831. These considerable decreases highlight the critical role that the LLM plays in both tasks.

4) Structural Information: To delve into the contribution of different structural components, we performed ablation tests on our model. The results, summarized in Table IV, clearly demonstrate the pivotal role of structural information in enhancing performance. The removal of any of the graphs, namely the AST, CFG, or DFG, led to consistent declines in both accuracy and F1-score. This underscores the importance of preserving the structural integrity of the model.

In conclusion, our ablation study vividly shows that each module in the DefectHunter model significantly contributes to its overall performance. The removal of any one component led to a considerable decline in both ACC and F1-scores across tasks, emphasizing the importance of the synergistic relationship among them.

F. Case Study

We conducted a case study to explore how DefectHunter understands vulnerability.

1) Case 1: Case 1, as depicted in Figure 5a, exhibits vulnerabilities related to memory operations. Specifically, improper utilization of the memcpy and memcmp functions can lead to buffer overflow and memory comparison issues. Line 16 demonstrates the memcpy function copying a memory block of size bytes from the source buffer res0 to the destination buffer res1. However, the destination buffer is allocated for a size of 32 * 32 * 2 bytes. This mismatch in buffer sizes can lead to a buffer overflow, causing the copying operation to write data beyond the allocated memory space. This overflow has the potential to corrupt adjacent memory or even cause an application crash.

Furthermore, Line 20 introduces a vulnerability associated with the memcmp function. Each of these buffers is allocated for a size of 32 * 32 * 2 bytes, yet only size bytes are compared. This discrepancy implies that the comparison may overlook differences beyond size bytes, potentially failing to detect variances in the additional bytes. The corrected code in Figure 5b ensures proper memory operation by rectifying the allocation and copying procedures, thereby constraining them within the bounds of allocated memory buffers. Consequently, the risk of buffer overflows and memory comparison inconsistencies is considerably reduced.

The analysis of Fig. 5 emphasizes the vital significance of accurate memory management in software development. Mishandling of memory functions such as memcpy and memcmp can result in severe security vulnerabilities, including buffer overflows and incorrect memory comparisons. The vulnerabilities depicted in Figure 5a have been effectively addressed in Figure 5b through appropriate fixes, and DefectHunter no longer marks the fixed code as a vulnerability. It is imperative for developers to maintain vigilance in applying appropriate memory-handling techniques to mitigate potential exploits and prevent security breaches.

2) Case 2: In Case 2 (see Figure 6), the function av_image_fill_pointers is called to populate the pointers array, returning the necessary buffer size (ret) for the data. Subsequently, the code allocates memory for the buffer buf using av_malloc, potentially resulting in an incorrect size if align is added to it. The correct buffer size should be determined by multiplying ret (representing the required size for each line) by h (the number of lines). However, as shown in Figure 6a, the buffer size is calculated as ret + align, which can be incorrect and may result in buffer overflow issues, particularly if align exceeds the size needed for each line.

The modification calculates the correct buffer size by multiplying ret (representing the size of each line) by h (representing the number of lines); see Figure 6b. This ensures that the allocated buffer buf is sufficiently large to accommodate all the data without encountering any buffer overflow problems. By incorporating this modification, the code becomes safer and less prone to potential security vulnerabilities associated with buffer overflows. Furthermore, after implementing this fix, DefectHunter's assessment indicates that the repaired code is no longer flagged as vulnerable. DefectHunter accurately acknowledges that the corrected calculation of the buffer size reduces the likelihood of buffer overflow, thereby enhancing the overall security of the function.
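The two memory-size bugs above reduce to the same arithmetic slip. A minimal, self-contained sketch (hypothetical buffers and sizes, not the FFmpeg code itself) shows how a half-sized memcmp misses a difference in the upper half of the buffer, and why the Case-2 allocation must multiply the per-line size by the line count:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

enum { N = 32 * 32 };   /* pixel count; each pixel is an int16_t, i.e. 2 bytes */

/* Case 1: comparing only N bytes inspects half of the N*2-byte buffers,
 * so a difference planted in the upper half goes unnoticed.
 * Returns 1 iff the short (buggy) compare misses the difference while
 * the full (patched) compare catches it. */
static int short_compare_misses_difference(void) {
    static uint8_t dst0[2 * N], dst1[2 * N];
    memset(dst0, 0xAA, sizeof dst0);
    memset(dst1, 0xAA, sizeof dst1);
    dst1[2 * N - 1] = 0x55;                      /* upper-half difference */
    int buggy = memcmp(dst0, dst1, N) != 0;      /* vulnerable: N bytes   */
    int fixed = memcmp(dst0, dst1, 2 * N) != 0;  /* patched: N * 2 bytes  */
    return buggy == 0 && fixed == 1;
}

/* Case 2: the buffer must hold h lines of ret bytes each, so the correct
 * size is ret * h; ret + align is too small whenever h is large. */
static size_t buggy_buf_size(size_t ret, size_t align) { return ret + align; }
static size_t fixed_buf_size(size_t ret, size_t h)     { return ret * h; }
```

In the sketch the buggy compare reports the buffers equal while the full-length compare detects the planted difference, which is exactly the failure mode described for Figure 5a.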
Fig. 5: Case 1. (a) Vulnerable: in check_add_res(), memcpy(dst1, dst0, size) and memcmp(dst0, dst1, size) touch only half of the 32*32*2-byte dst buffers. (b) Patched: both calls use size * 2, matching the allocation.

Fig. 6: Case 2. (a) Vulnerable: av_image_alloc() allocates buf = av_malloc(ret + align). (b) Patched: int buffer_size = ret * h; buf = av_malloc(buffer_size);

Fig. 7: Case 3. (a) Vulnerable: channelmap_query_formats() dereferences ctx->priv with no validation. (b) Patched: if (!ctx || !ctx->priv) { av_log(ctx, AV_LOG_ERROR, "Invalid context or private data pointer.\n"); return AVERROR(EINVAL); } precedes any use of the pointers.

Fig. 8: Case 4. (a) Vulnerable: avcodec_decode_video2() calls av_image_check_size(avctx->coded_width, avctx->coded_height, 0, avctx) without validating its pointer arguments. (b) Patched: avctx, picture, got_picture_ptr, and avpkt are checked for NULL before the size check, returning ret on failure.

3) Case 3: The main issue in Fig. 7a is the absence of proper validation checks for the pointers ctx and ctx->priv, which can potentially result in null pointer dereference or invalid memory access. This vulnerability can result in program crashes or other undesirable behavior. In Fig. 7b, we ensure that before utilizing ctx and ctx->priv, thorough validation checks are performed. If either pointer is found to be invalid, the function returns an error along with a corresponding message. The following operations are executed only if the pointers are valid, effectively mitigating the risk of null pointer dereference or invalid memory access. After applying the fix, the code demonstrates improved robustness and safety. The introduced validation checks protect against possible runtime errors and crashes. Notably, the evaluation of DefectHunter shows a change in its perception: the corrected sections are free of any vulnerabilities.
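The guard pattern from Case 3 can be distilled as follows. The context type and error code here are hypothetical stand-ins for FFmpeg's AVFilterContext and AVERROR(EINVAL), not the real definitions:

```c
#include <stddef.h>

/* Hypothetical stand-ins for the FFmpeg types and macros in Fig. 7:
 * Ctx plays AVFilterContext, ERR_INVAL plays AVERROR(EINVAL). */
typedef struct { void *priv; } Ctx;
#define ERR_INVAL (-22)

/* Patched shape of channelmap_query_formats(): reject a null context or
 * null private-data pointer before any dereference, returning an error
 * instead of crashing at runtime. */
static int query_formats(const Ctx *ctx) {
    if (!ctx || !ctx->priv)
        return ERR_INVAL;   /* the fix: validate before use */
    /* ... ctx->priv is safe to use from here on ... */
    return 0;
}
```

The same check-before-use shape covers Case 4, where the pointers are validated before av_image_check_size is ever reached.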
4) Case 4: The vulnerability occurs in Fig. 8a at line 6, where av_image_check_size is invoked without validating the pointers avctx->coded_width and avctx->coded_height. This omission leaves the code susceptible to null pointer dereferencing or accessing invalid memory locations. Such vulnerabilities could potentially lead to crashes, memory corruption, or unauthorized data exposure, depending on the context in which the code is executed. To rectify the vulnerabilities, an effective remediation strategy was implemented (see Fig. 8b). The main focus was on incorporating proper validity checks for each of the involved pointers before performing any operations. In the repaired code, prior to any processing, avctx, picture, got_picture_ptr, and avpkt are subjected to validation checks. Post-remediation, DefectHunter verifies that the code is free from identified vulnerabilities. It validates the proper implementation of validity checks on pointers, ensuring that null pointer dereferencing and invalid memory access vulnerabilities are mitigated. DefectHunter thus significantly enhances code security, identifying similar vulnerabilities during the development and testing phases.

VI. RELATED WORK

We categorize the related work into four distinct classifications: traditional approaches, machine learning-based approaches, deep learning-based approaches, and large language model-based approaches. Our research falls within the realm of deep learning-based methods.

A. Traditional Approaches

Traditional approaches encompass methods grounded in algorithmic matching and manual verification, eschewing the utilization of artificial intelligence. In the nascent stages of traditional vulnerability detection, experts manually crafted high-quality rule bases [1], [2], [24], [25]. To mitigate the financial burden of constructing these rule bases, researchers introduced semi-automated techniques such as taint tracking [26]–[30], symbolic execution [31], [32], and fuzzing [33], [34], along with methods for code-similarity-based vulnerability matching [35]. These approaches achieve their objectives but often suffer from a lack of full automation and high labor costs.

B. Machine Learning-Based Approaches

Machine learning-based approaches offer automated categorization as a core feature, thereby enhancing vulnerability detection capabilities and reducing manual labor. Prominent techniques in this category include Logistic Regression, Multi-Layer Perceptron, Support Vector Machines, and Random Forest. Al-Yaseen et al. [36] proposed a multi-level hybrid intrusion detection model employing Support Vector Machines and extreme learning machines. Additionally, they optimized the training datasets using a modified k-means algorithm. Ghaffarian et al. [4] conducted a comparative study of machine learning algorithms for vulnerability detection, highlighting the superior performance of the Random Forest algorithm. Lomio et al. [37] conducted an empirical study comparing the effectiveness of different machine learning approaches for vulnerability identification, finding the available metrics insufficient and suggesting that ensemble-based classifiers may perform better. Zolanvari et al. [38] assessed the suitability of machine learning-based vulnerability detection in Internet of Things (IoT) settings. While effective, these methods sometimes falter when faced with intricate challenges.

C. Deep Learning-Based Approaches

Deep learning-based methods employ intricate neural networks to augment the capability of vulnerability detection models in addressing complex issues. Li et al. introduced VulDeePecker [39], which utilizes flat language sequences of source code to train neural networks. However, this approach sacrifices the semantic nuances of the code. Consequently, alternative research has explored the use of graph or tree representations, such as the Abstract Syntax Tree (AST) or Control Flow Graph (CFG), for training purposes. Allamanis et al. [40] suggested techniques for transforming source code into graphs and for extending gated graph neural networks. Wang et al. [41] presented FUNDED, a graph-based framework for identifying vulnerabilities. Steenhoek et al. [42] conducted an empirical study examining the correlation among predictions made by different SOTA models and the relationship between dataset size and these models' performance on widely used datasets. Hin et al. proposed LineVD [43], based on graph neural networks, which explores the application of graph neural networks for vulnerability identification and improves prediction performance on non-vulnerable function code by reconciling conflicting results between function-level and statement-level information. Our work builds on these existing studies to enhance the performance of vulnerability detection.

D. Large Language Model-Based Approaches

This category represents a specialized subset of deep learning-based approaches. Researchers commonly utilize pre-trained large language models, followed by domain-specific fine-tuning. Examples include Llama [44], CodeX [45], ChatGPT [46], and GPT-4 [9]. We developed the Pongo model based on the Llama architecture for vulnerability detection. Pearce et al. [47] and Cheshkov et al. [48] conducted studies evaluating the effectiveness of large language models in this domain. Although these models do not offer substantial advantages over general deep learning models, their extensive parameter requirements contribute to higher training costs.

VII. CONCLUSIONS AND FUTURE RESEARCH

In this paper, we present DefectHunter, a novel model for vulnerability identification. DefectHunter extracts features from code and identifies vulnerabilities by converting the code into four distinct structural representations, which are subsequently processed through Conformer blocks. Experimental results indicate that DefectHunter sets a new technological benchmark for detecting vulnerabilities in real-world open-source projects using machine learning techniques. In addition, we conduct both ablation studies and case studies to delve further into the intricacies of the model. In the future, we envision significant potential for integrating existing large language models with our designed network and working with LangChain to establish a comprehensive knowledge base for vulnerability identification.

ACKNOWLEDGMENT

This study was supported by the National Natural Science Foundation of China (62002067) and the Guangzhou Youth Talent of Science (QT20220101174).

REFERENCES

[1] "Flawfinder." [Online]. Available: https://www.dwheeler.com/flawfinder/
[2] "FindBugs." [Online]. Available: https://findbugs.sourceforge.net/
[3] H. Perl et al., "VCCFinder: Finding potential vulnerabilities in open-source projects to assist code audits," in Proc. 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS '15), 2015, pp. 426–437. [Online]. Available: https://doi.org/10.1145/2810103.2813604
[4] S. M. Ghaffarian et al., "Software vulnerability analysis and discovery using machine-learning and data-mining techniques: A survey," ACM Comput. Surv., vol. 50, no. 4, Aug. 2017. [Online]. Available: https://doi.org/10.1145/3092566
[5] H. Wang et al., "Combining graph-based learning with automated data collection for code vulnerability detection," IEEE Transactions on Information Forensics and Security, vol. 16, pp. 1943–1958, 2021.
[6] Y. Zhou et al., "Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks," Advances in Neural Information Processing Systems, vol. 32, 2019.
[7] Z. Feng et al., "CodeBERT: A pre-trained model for programming and natural languages," CoRR, vol. abs/2002.08155, 2020. [Online]. Available: https://arxiv.org/abs/2002.08155
[8] Y. Wang et al., "CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation," in Proc. 2021 Conference on Empirical Methods in Natural Language Processing, 2021, pp. 8696–8708.
[9] "GPT-4." [Online]. Available: https://openai.com/gpt-4
[10] A. Gulati et al., "Conformer: Convolution-augmented transformer for speech recognition," 2020.
[11] E. Miller, "Attention is off by one," https://www.evanmiller.org/attention-is-off-by-one.html, Jul. 2023.
[12] A. Vaswani et al., "Attention is all you need," CoRR, vol. abs/1706.03762, 2017. [Online]. Available: http://arxiv.org/abs/1706.03762
[13] T. Wolf et al., "Transformers: State-of-the-art natural language processing," in Proc. 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Oct. 2020, pp. 38–45.
[14] D. Guo et al., "UniXcoder: Unified cross-modal pre-training for code representation," 2022.
[15] S. Bird et al., Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit. O'Reilly Media, Inc., 2009.
[16] F. Chollet et al., "Keras," https://keras.io, 2015.
[17] "NIST Software Assurance Reference Dataset," https://samate.nist.gov/SARD/ (accessed 09/04/2022).
[18] Y. Kim, "Convolutional neural networks for sentence classification," in Proc. 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Oct. 2014, pp. 1746–1751. [Online]. Available: https://aclanthology.org/D14-1181
[19] Z. Li et al., "VulDeePecker: A deep learning-based system for vulnerability detection," in Proc. 25th Annual Network and Distributed System Security Symposium (NDSS 2018), San Diego, CA, USA, Feb. 2018.
[20] J. Wang et al., "DeepVulSeeker: A novel vulnerability identification framework via code graph structure and pre-training mechanism," Future Generation Computer Systems, vol. 148, pp. 15–26, 2023. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0167739X23001978
[21] M. Chen et al., "Evaluating large language models trained on code," 2021.
[22] H. Touvron et al., "Llama 2: Open foundation and fine-tuned chat models," 2023.
[23] T. Dettmers et al., "QLoRA: Efficient finetuning of quantized LLMs," 2023.
[24] "Checkmarx." [Online]. Available: https://www.checkmarx.com/
[25] S. Cui et al., "VRust: Automated vulnerability detection for Solana smart contracts," in Proc. 2022 ACM SIGSAC Conference on Computer and Communications Security (CCS '22), 2022, pp. 639–652. [Online]. Available: https://doi.org/10.1145/3548606.3560552
[26] M. Johns et al., "End-to-end taint tracking for detection and mitigation of injection vulnerabilities in web applications," US Patent 10,129,285, 2018. [Online]. Available: https://patents.google.com/patent/US10129285B2/en
[27] P. Wang et al., "DFTracker: Detecting double-fetch bugs by multi-taint parallel tracking," Frontiers of Computer Science, vol. 13, pp. 247–263, 2019.
[28] H. Zhang et al., "Statically discovering high-order taint style vulnerabilities in OS kernels," in Proc. 2021 ACM SIGSAC Conference on Computer and Communications Security (CCS '21), 2021, pp. 811–824. [Online]. Available: https://doi.org/10.1145/3460120.3484798
[29] W. Kang et al., "Tracer: Signature-based static analysis for detecting recurring vulnerabilities," in Proc. 2022 ACM SIGSAC Conference on Computer and Communications Security (CCS '22), 2022, pp. 1695–1708. [Online]. Available: https://doi.org/10.1145/3548606.3560664
[30] C. Luo et al., "TChecker: Precise static inter-procedural analysis for detecting taint-style vulnerabilities in PHP applications," in Proc. 2022 ACM SIGSAC Conference on Computer and Communications Security (CCS '22), 2022, pp. 2175–2188. [Online]. Available: https://doi.org/10.1145/3548606.3559391
[31] R. Baldoni et al., "A survey of symbolic execution techniques," ACM Computing Surveys (CSUR), 2018. [Online]. Available: https://dl.acm.org/doi/abs/10.1145/3182657
[32] D. Wang et al., "WANA: Symbolic execution of WASM bytecode for cross-platform smart contract vulnerability detection," arXiv preprint arXiv:2007.15510, 2020.
[33] S. T. Dinh et al., "Favocado: Fuzzing the binding code of JavaScript engines using semantically correct test cases," in Network and Distributed System Security Symposium, 2021.
[34] J. He et al., "Learning to fuzz from symbolic execution with application to smart contracts," in Proc. 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019, pp. 531–548.
[35] H. Sun et al., "VDSimilar: Vulnerability detection based on code similarity of vulnerabilities and patches," Computers & Security, 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0167404821002418
[36] W. L. Al-Yaseen et al., "Multi-level hybrid support vector machine and extreme learning machine based on modified k-means for intrusion detection system," Expert Systems with Applications, vol. 67, pp. 296–303, 2017.
[37] F. Lomio et al., "Just-in-time software vulnerability detection: Are we there yet?" Journal of Systems and Software, vol. 188, p. 111283, 2022.
[38] M. Zolanvari et al., "Machine learning-based network vulnerability analysis of industrial Internet of Things," IEEE Internet of Things Journal, vol. 6, no. 4, pp. 6822–6834, Aug. 2019.
[39] D. Zou et al., "VulDeePecker: A deep learning-based system for multiclass vulnerability detection," IEEE Transactions on Dependable and Secure Computing, vol. 18, no. 5, pp. 2224–2236, 2021.
[40] M. Allamanis et al., "Learning to represent programs with graphs," arXiv preprint arXiv:1711.00740, 2017.
[41] H. Wang et al., "Combining graph-based learning with automated data collection for code vulnerability detection," IEEE Transactions on Information Forensics and Security, vol. 16, pp. 1943–1958, 2021.
[42] B. Steenhoek et al., "An empirical study of deep learning models for vulnerability detection," in Proc. 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), 2023, pp. 2237–2248.
[43] D. Hin et al., "LineVD: Statement-level vulnerability detection using graph neural networks," in Proc. 19th International Conference on Mining Software Repositories (MSR '22), 2022, pp. 596–607. [Online]. Available: https://doi.org/10.1145/3524842.3527949
[44] "Llama." [Online]. Available: https://ai.meta.com/llama/
[45] "Codex." [Online]. Available: https://openai.com/blog/openai-codex/
[46] "ChatGPT." [Online]. Available: https://chat.openai.com
[47] H. Pearce et al., "Examining zero-shot vulnerability repair with large language models," in Proc. 2023 IEEE Symposium on Security and Privacy (SP), 2023, pp. 2339–2356.
[48] A. Cheshkov et al., "Evaluation of ChatGPT model for vulnerability detection," arXiv preprint arXiv:2304.07232, 2023.
arXiv:2310.01152v2 [cs.CR] 16 Oct 2023

Large Language Model-Powered Smart Contract Vulnerability Detection: New Perspectives

Sihao Hu, Tiansheng Huang, Fatih İlhan, Selim Furkan Tekin, Ling Liu
School of Computer Science, Georgia Institute of Technology, Atlanta, GA 30332, United States
{sihaohu, thuang, filhan, stekin6, ling.liu}@gatech.edu

Abstract—This paper provides a systematic analysis of the opportunities, challenges, and potential solutions of harnessing Large Language Models (LLMs) such as GPT-4 to dig out vulnerabilities within smart contracts based on our ongoing research. For the task of smart contract vulnerability detection, achieving practical usability hinges on identifying as many true vulnerabilities as possible while minimizing the number of false positives. Nonetheless, our empirical study reveals contradictory yet interesting findings: generating more answers with higher randomness largely boosts the likelihood of producing a correct answer but inevitably leads to a higher number of false positives. To mitigate this tension, we propose an adversarial framework dubbed GPTLENS that breaks the conventional one-stage detection into two synergistic stages, generation and discrimination, for progressive detection and refinement, wherein the LLM plays dual roles, i.e., AUDITOR and CRITIC, respectively. The goal of AUDITOR is to yield a broad spectrum of vulnerabilities with the hope of encompassing the correct answer, whereas the goal of CRITIC, which evaluates the validity of identified vulnerabilities, is to minimize the number of false positives. Experimental results and illustrative examples demonstrate that AUDITOR and CRITIC work together harmoniously to yield pronounced improvements over the conventional one-stage detection. GPTLENS is intuitive, strategic, and entirely LLM-driven without relying on specialist expertise in smart contracts, showcasing its methodical generality and potential to detect a broad spectrum of vulnerabilities. Our code is available at: https://github.com/git-disl/GPTLens.

Index Terms—Large language model, GPT, smart contract, vulnerability detection

I. INTRODUCTION

Smart contracts, commonly associated with cryptocurrency transactions on blockchains, were racked with financial losses of up to billions of dollars due to vulnerability exploitation [59]. Given the immutable nature of smart contracts once deployed, auditing plays an essential role in their development. Recently, generative Large Language Models (LLMs) [5], [26], [58] are rapidly emerging and reshaping the domain of software engineering [15], facilitating tasks of code generation [9], code understanding [46], and code repair [28]. Leveraging the capabilities of LLMs to empower smart contract auditing presents a promising application opportunity. In this paper, we envision the development of LLM-powered smart contract vulnerability detection techniques from a new perspective, in tandem with a systematic analysis of the opportunities and challenges involved in this nascent research topic.

Compared to the representative analysis tools [3], [4], [11], [19], [24], [44] developed in the past years, LLM-powered detection features some unparalleled advantages:

(1) Generality: Existing tools like Slither [11] require expert knowledge to design fixed-pattern detectors based on control-flow or data-flow, restricting them to specific types of vulnerabilities [48]. In contrast, LLMs can emulate human linguistic understanding and reasoning and describe any type of vulnerability using natural language, allowing them to potentially detect a wider range of vulnerabilities, including those that are unknown or uncategorized a priori.

(2) Interpretability: Generative LLMs can be utilized not only to detect vulnerabilities but also to offer intermediate reasoning about the detected vulnerabilities by following the chain-of-thought [47]. For programming and software engineering tasks, LLMs can provide insights into code understanding [46] and even suggest code repair solutions [28]. Such capabilities, if exploited intelligently, hold the potential to grant a heightened level of transparency and trustworthiness to the vulnerability detection process.

Nevertheless, certain limitations hinder LLMs from being exploited to their full potential:

(1) LLMs can produce a large number of false positives [10], resulting in low precision and necessitating exhausting manual verification efforts. These false positive cases are categorized as factual errors or potential vulnerabilities, suggesting that they should be differentiated from the true vulnerabilities in terms of metrics beyond just correctness.

(2) LLMs, if used in a naive manner, tend to fail to uncover all vulnerabilities within the smart contract, leading to false negatives. These undetected vulnerabilities can be categorized into two primary groups: first, hard cases that exceed the "cognitive ability" of current LLMs; second, vulnerabilities that are detectable but were missed because of the randomness of the generation. For the latter, our empirical study shows that instead of generating deterministic answers in one shot, generating multiple answers with higher randomness (diversity) can largely boost the likelihood of the true answer being generated. Nevertheless, this strategy presents a Catch-22 dilemma [14], as it inevitably introduces more false positives, i.e., the goal of detecting more true vulnerabilities is misaligned with the goal of reducing false positives.

To mitigate this tension, we propose GPTLENS, a framework that separates the conventional one-stage detection into two adversarial yet synergistic stages: generation followed by discrimination. The primary goal of the generation stage is to enhance the likelihood of true vulnerabilities being identified (generated). With this goal in mind, we ask an LLM to play the role of multiple auditor agents, and each auditor generates answers (the vulnerability and its reasoning) with high randomness, even though this could lead to a plethora of incorrect answers. In contrast, the goal of the discrimination stage is to discriminate between true and false answers generated in the generation stage. To realize this, we prompt the LLM to play the role of a critic agent, which evaluates each identified vulnerability on a set of criteria, such as correctness, severity, and profitability, and assigns corresponding scores. Subsequently, GPTLENS ranks the answers by these scores to select the top-k results.

The primary advantage of GPTLENS is that it resolves the Catch-22 dilemma present in one-stage detection, i.e., the conflicting goals between increasing the probability of generating the correct answer and reducing the number of false positives. Furthermore, GPTLENS is a purely LLM-driven framework that does not resort to any expert knowledge during the end-to-end vulnerability detection process.

We conduct preliminary experiments on 13 real-world smart contracts, all of which were reported to contain vulnerabilities in the Common Vulnerabilities and Exposures (CVE) database [45]. Experiments indicate that compared to the conventional one-stage detection, which identifies true vulnerabilities in 38.5% of contracts with the top-1 output, GPTLENS succeeds in 76.9% of contracts. When comparing at the trial level, the accuracy for top-1 results rises from 33.3% to 59.0%. This enhancement is exciting, as GPTLENS is simple and does not rely on any intricate design, suggesting its potential for broader application scenarios.

In summary, this paper makes three original contributions:
• We provide a systematic analysis of the advantages (opportunities) and challenges of LLM-powered smart contract vulnerability detection techniques.
• We introduce an innovative framework, GPTLENS, constituted by two adversarial yet synergistic stages wherein the LLM takes on the roles of the auditor and critic agents, respectively.

Binary prompt: You are a smart contract auditor. Review the following smart contract code in detail. Is the contract vulnerable to {vul type}? Reply with YES or NO only. {contract code}

Binary prompting specifies a vulnerability type [48], such as integer overflow/underflow, re-entrancy, or access control risk, and the LLM is expected to produce a binary YES or NO answer. These studies [10], [40] also recommend including the definition or additional information about the vulnerability type to enhance performance. Given n categories of vulnerabilities, the LLM service must be queried n times for each smart contract.

Multi-class prompt: You are a smart contract auditor. Here are {n} vulnerabilities: {vul type1, vul type2, ..., vul typen}. Review the following smart contract code in detail. Use 0 or 1 to indicate the presence of specific types of vulnerabilities, such as {vul type1: 0, vul type2: 1, ..., vul typen: 0}. {contract code}

An extension of binary prompting is multi-class prompting [8], which requires LLMs to categorize identified vulnerabilities into multiple classes. Both binary and multi-class prompting fall under the category of close-ended prompting, as they necessitate that the vulnerability categories be predefined. Nevertheless, there always exist vulnerabilities that are either unknown or not categorized: Zhang et al. [57] found that 80% of exploitable bugs remain undetected by existing analysis tools.

Open-ended prompt: You are a smart contract auditor. Review the following smart contract code in detail and identify vulnerabilities within it. {contract code}

In this paper, we propose a new prompting paradigm dubbed open-ended prompting, which prompts LLMs to identify any potential vulnerabilities they think might be present and describe them in natural language without being constrained by predefined vulnerability names, theoretically enabling LLMs to recognize a broader range of vulnerability types.
• GPTLENS is simple, effective and purely LLM-driven, eliminating the need for specialist expertise and showing B. Advantages of LLM-powered Detection the potential for generalization across a range of vulner- Interpretability:Beyondmerelyidentifyingvulnerabilities, ability types. we can ask LLMs to produce explanations for code, generate II. LLM-POWEREDVULNERABILITYDETECTION intermediate reasoning for vulnerability detection, generate examples of how to exploit identified vulnerabilities, and A. Standard Detection Paradigms suggest code repair solutions. Such interpretability offers a There are three standard prompting paradigms for vulnera- newdegreeoftransparencyandtrustworthiness,aswecangain bility detection, i.e., binary prompting, multi-class prompting insight into the step-by-step thought process [47] of LLMs. A and open-ended prompting. case study in Listing 3 presents the explanations provided by Existing works primarily follow the binary prompting GPT-4 for identified vulnerabilities. paradigm [10], [40]. In this paradigm, LLMs are prompted Generality: Traditional smart contract auditing tools [4], with the smart contract code and a specific vulnerability [11], [19], [24], [44] have difficulty detecting unknown oruncategorized vulnerabilities since detectors are predesigned detection demonstrates that GPT-4 can only identify 32 out of by human experts for fixed patterns of specific vulnerability 73 vulnerabilities on 52 DeFi attacks, but produces 740 false types. positivecases,leadingtoanextremelylowprecisionof4.15%
ExistingAI-powereddetectionmethods[17],[32],[60]also (Precision = TP ). A similar conclusion can be drawn TP+FP feature limited generality because they work in a supervised from the results of Claude [2], which achieves a precision of classificationmanner:vulnerabilitiesareclassifiedintoafixed 4.3%.Anothermeasurementstudy[8]showsthatGPT-3.5and setofpredefinedcategoriesbasedonknownthreats,whichare GPT-4achieveprecisionsof19.7%and22.6%respectively,in used as the ground-truth to train a detection model. detecting the 9 most common categories of vulnerabilities. For LLM-powered detection, although close-ended prompt- In practice, we observe that false positives can primarily be ing also requires vulnerabilities to be predefined, open-ended broken down into two cases: prompting breaks this constraint. To retain the characteristic • Factual error: LLMs are insensitive to certain types of of generality, we adopt open-ended prompting throughout the syntactic details, such as modifier statements, condition paper. In our experiments, when prompting GPT-4 [26] with statements, error handling statements (require, assert, an open-ended prompt, it identifies “condition logic error” for revert),andeventstatements,especiallywhenthenumber CVE 2018-11411 and “incorrect constructor name” for CVE ofinputtokensishuge.Forexample,inListing1,GPT-4 2019-15079, which are semantically precise and fall outside flags the multiTransfer function for re-entrancy risk be- of existing popular categorizations [48]. causeitbelieves“thefunctiondoesnotfollowtheCheck- Efficiency:LLMservicesprovideefficientonlineinference, Effects-Interaction pattern, e.g., the state should not be makingLLM-poweredmethodsoutputresultsmuchfasterthan updated before calling external contracts.” Nonetheless, many traditional methods [8]. 
However, pre-training an LLM this false alarm stems from mistaking Transfer as an ex- offline is prohibitively expensive in terms of both computa- ternalfunctioncallwhenitisactuallyaneventstatement tional resources and time [43]. for data logging without invoking external contracts. C. Limitations of Current LLM-powered Detection function setOwner(address _owner) returns ( 1 bool success) { Although LLM-powered detection offers promising advan- owner = _owner; tages and despite the growing hype and claims regarding 2 return true; 3 what LLMs can do, our empirical study has exposed some } 4 limitationsinherentincurrentLLMs,whichinhibitthemfrom ... 5 reachingtheirfullpotentialinpractice.Below,wediscusstwo 6 function uploadBalances(address[] addresses, uint256[] balances) onlyOwner { primary limitations. require(!balancesLocked); 7 function multiTransfer(address[] _addresses, require(addresses.length == balances. 1 8 uint[] _amounts) public returns (bool length); success) { uint256 sum; 9 require(_addresses.length <= 100 && for (uint256 i = 0; i < uint256(addresses. 2 10 _addresses.length == _amounts.length); length); i++) { uint totalAmount; sum = safeAdd(sum, safeSub(balances[i], 3 11 for (uint a = 0; a < _amounts.length; a++) balanceOf[addresses[i]])); 4 totalAmount += _amounts[a]; balanceOf[addresses[i]] = balances[i]; 5 12 require(totalAmount > 0 && balances[msg. } 6 13 sender] >= totalAmount); balanceOf[owner] = safeSub(balanceOf[owner 14 balances[msg.sender] -= totalAmount; ], sum); 7 for (uint b = 0; b < _addresses.length; b } 8 15 ++) { ... 16 if (_amounts[b] > 0) { 9 balances[_addresses[b]] += Listing2. CodesnippetfromthesmartcontractreportedinCVE2018-10666 10 _amounts[b]; 11 Transfer(msg.sender, _addresses[b • Potentialvulnerability:Inanothercase,theidentifiedrisk ], _amounts[b]); doesexistbutremainsunexploited,possiblybecauseitis } 12 neithersevereenoughnorfinanciallybeneficialforattack- } 13 ers. 
In Listing 2, GPT-4 highlights two vulnerabilities: return true; 14 } the “lack of access control” in the setOwner function, 15 ... whichallowsanyonetocallit,andthe“arbitrarybalance 16 manipulation” in the uploadBalances function, enabling Listing1. CodesnippetfromthesmartcontractreportedinCVE2018-13836 the owner to set balances for any addresses arbitrarily, 1) Large number of false positives: The paramount chal- whichcouldinflatethetokensupply.Despitebothofthem lenge is that LLMs can produce a large number of false posi- are correct vulnerabilities, only the former was exploited tives(FP),leadingtoexhaustingmanualverificationefforts.A by the attacker and labeled as a CVE while the latter is recent measurement study [10] on project-level vulnerability considered as a false positive, since the formor is moresevereandprofitable.Thisobservationindicatesthatvul- nerability detection should consider not only correctness butalsoseverityandprofitability.Vulnerabilitiesdetected in a smart contract should be ranked taking into account all these aspects. While a recent effort [40] seeks to mitigate the impact of false positives by utilizing sophisticated rules for filtering, designing such rules demands expert knowledge and remains effective only for predefined vulnerability types. 2) Large number of false negatives: LLMs fail to detect a large portion of true vulnerabilities, resulting in a low recall (Recall = TP ). As demonstrated in [10], GPT-4 and
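These precision and recall figures can be reproduced directly from the reported counts (a quick arithmetic check; the two helper functions are ours, not code from [10]):

```python
def precision(tp: int, fp: int) -> float:
    """Precision = TP / (TP + FP)."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Recall = TP / (TP + FN)."""
    return tp / (tp + fn)

# GPT-4 on the 52 DeFi attacks studied in [10]: 32 of 73 ground-truth
# vulnerabilities identified, alongside 740 false positive reports.
print(f"precision = {precision(32, 740):.2%}")   # -> precision = 4.15%
print(f"recall    = {recall(32, 73 - 32):.1%}")  # -> recall    = 43.8%
```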
In our experiments, we observe that false negatives can also be divided into two categories:

• Hard cases that are beyond the cognitive capabilities of current LLMs. It is reasonable to assume that certain vulnerabilities surpass the detection abilities of existing LLMs, including intricate logic issues that might elude even human auditors. To detect these hard cases, we expect more powerful LLMs in the future or more complicated designs, which will be discussed in Section V.

• Vulnerabilities that are detectable but undetected due to the randomness of generation. As is known, GPT-like LLMs generate text by repeatedly estimating probability distributions for next positions across the vocabulary and sampling token by token [1]. During generation, a hyper-parameter t (temperature) controls the sharpness of the distribution [6]. Low randomness leads the LLM to generate more credible results, which outperforms high randomness when the number of generated samples is small. However, even though high randomness leads to less credible results, it is more likely to generate a correct answer when the number of generations is huge. This observation can not only be corroborated by our experiments (Section IV), but also by the Codex paper [9]: Figure 1 shows the pass probability of the best result picked out of k samples generated by Codex [9] (a sibling of GPT-3) on a code generation task (HumanEval). When the number of samples reaches 100, a higher temperature (0.8) outperforms a low temperature (0.2) by a huge margin (13 absolute percentage points).

Fig. 1. Pass@k against the number of samples (k) w.r.t. various temperatures reported in the Codex paper [9]. Pass@k is the probability of the best result out of k generated samples, where the best sample is picked by an oracle (ground-truth knowledge). The higher the temperature t, the higher the randomness of the samples. When t = 0 the LLM generates deterministic results. Higher temperatures are better when the number of samples is large.

Nevertheless, pass@k in Figure 1 is calculated utilizing an oracle (ground-truth knowledge), which is not available in real-world applications, and generating more diverse answers inevitably leads to more false positives. Hence, how to identify more correct vulnerabilities without introducing more false positives is a challenge for LLM-powered detection.

III. TWO-STAGE ADVERSARIAL DETECTION FRAMEWORK

Goal: Our goal is to present a simple, effective, and entirely LLM-driven methodology to shed light on the design of LLM-powered approaches. For every identified vulnerability, it should indicate the associated function and provide the reasoning.

Motivations: The design of GPTLENS is inspired by our previous analyses, summarized as follows:
(1) Open-ended prompting has good generalization across a wide range of vulnerabilities, including those unknown or uncategorized ones.
(2) Reducing false positives is crucial for practical applications. False positives should be assessed not only for correctness but also for severity and profitability.
(3) Generating a larger set of diverse samples can raise the likelihood of generating the correct answer, but it inevitably leads to more false positives.

The objective of identifying more correct vulnerabilities is in conflict with the goal of reducing false positives in the current one-shot detection paradigm. To mitigate this tension, we break one-shot detection into two adversarial stages, i.e., the generation stage and the discrimination/ranking stage. The idea of dividing one stage into multiple stages is also employed in industrial recommendation systems [7] to optimize respective goals at different stages.

Figure 2 shows the overall framework of GPTLENS, where an LLM plays the roles of two adversarial agents, i.e., the auditor and the critic, activated by different prompts in the respective stages.

In the generation stage, multiple auditors independently audit the smart contract code, generating identified vulnerabilities along with their associated functions and reasoning. The goal of this task is to yield a broad spectrum of answers, with the hope of encompassing the correct one.

In the discrimination stage, the identified vulnerabilities and their associated reasoning are scrutinized, evaluated and ranked by the critic agent, taking into account factors such as correctness, severity and profitability. The goal of this task is to simulate the role of an oracle, i.e., to precisely discern the correct answer and rank it above all other false positives. It is worth noting that the discrimination is not solely based on the identified vulnerability, but heavily leans on the reasoning provided by auditors.

Fig. 2. GPTLENS: an adversarial framework that breaks the conventional one-stage detection into the generation and discrimination stages.

Proof of feasibility: Before diving into the details of GPTLENS, we raise a pertinent question: how is the critic agent able to discriminate between answers generated by the auditor agents if all the agents are played by the same LLM? The answer lies in the difficulty of the two tasks: the generation task is more challenging than the discrimination task, because the former demands the model to dig out all possible vulnerabilities within an entire smart contract project, while the latter only requires an assessment of recognized vulnerabilities at the function level. Moreover, the success of the adversarial framework has been previously corroborated by GANs [12], wherein a very simple discriminator can be employed to train a sophisticated generator [33].
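The two-stage loop described above can be sketched end to end in a few lines. This is a minimal illustration, not the authors' implementation: the prompt texts are abridged paraphrases of Figures 4 and 5, `query_llm` is a placeholder for any chat-completion API, and `stub_llm` returns canned JSON so the control flow can be exercised offline.

```python
import json

AUDITOR_PROMPT = ("You are a smart contract auditor. Identify and explain up to "
                  "{m} severe, exploitable vulnerabilities.\n{code}\n"
                  "Answer as a JSON list.")
CRITIC_PROMPT = ("As a meticulous and harsh critic, score each finding for "
                 "correctness, severity and profitability.\n{findings}\n"
                 "Answer as a JSON list.")

def run_gptlens(query_llm, contract_code, n=2, m=3, top_k=1):
    # Generation stage: n auditors sample independently at high temperature
    # to diversify the candidate set of vulnerabilities.
    findings = []
    for _ in range(n):
        reply = query_llm(AUDITOR_PROMPT.format(m=m, code=contract_code),
                          temperature=0.7)
        findings.extend(json.loads(reply))
    # Discrimination stage: a single critic scores all findings at temperature 0
    # for consistent, near-deterministic scoring.
    scored = json.loads(query_llm(
        CRITIC_PROMPT.format(findings=json.dumps(findings)), temperature=0.0))
    # Rank by the sum of the three scores and keep the top-k answers.
    scored.sort(key=lambda v: v["correctness"] + v["severity"] + v["profitability"],
                reverse=True)
    return scored[:top_k]

def stub_llm(prompt, temperature):
    """Stand-in for a real LLM call, returning canned JSON for this sketch."""
    if "critic" in prompt:
        return json.dumps([
            {"function_name": "setOwner", "vulnerability": "access control",
             "correctness": 9, "severity": 8, "profitability": 8},
            {"function_name": "multiTransfer", "vulnerability": "re-entrancy",
             "correctness": 1, "severity": 0, "profitability": 0},
        ])
    return json.dumps([{"function_name": "setOwner",
                        "vulnerability": "access control", "reasoning": "..."}])

print(run_gptlens(stub_llm, "contract Token { ... }")[0]["function_name"])
# -> setOwner
```

With a real backend, `query_llm` would wrap an API call; the rest of the loop is unchanged, which reflects the paper's point that the framework is entirely prompt-driven.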
To provide a clearer picture, in Figure 3 we present the reasoning generated by an auditor along with the criticism from the critic for the code snippet in Listing 2. The critic concludes that the vulnerability identified by the auditor is a factual error.

Reasoning by the auditor: There is a potential re-entrancy attack within the multiTransfer function because it does not follow the Checks-Effects-Interactions pattern. The state is not updated before calling external contracts, which could potentially lead to re-entrancy attacks.

Criticism from the critic: In the given multiTransfer function, there doesn't appear to be any external calls before state changes. The function updates the balances appropriately before emitting a Transfer event. It's worth noting that events like Transfer don't actually invoke external contract functions. They simply log data. Therefore, the vulnerability description "Potential for Re-Entrancy Attack" seems incorrect based on the provided code.

Fig. 3. Reasoning and criticism for the code snippet in Listing 2

A. Generation Stage

The goal of the generation stage is to identify a candidate set of vulnerabilities that includes the correct answer. Therefore, we set up n auditors and initialize them with the auditor prompt presented in Figure 4. Each auditor is responsible for identifying up to m vulnerabilities within the given smart contract code. The auditors' output strictly follows a JSON format for ease of parsing. The temperature t is set based on the hyper-parameters n and k, since the optimal value for t varies depending on the number of samples. If n is large, t should also be set higher, to prevent multiple auditors from generating similar answers.

Auditor prompt: You are a smart contract auditor; identify and explain severe vulnerabilities in the provided smart contract, ensuring they are real-world exploitable and beneficial to attackers. Include reasoning and corresponding function code as well. Output up to {m} most severe vulnerabilities. If no vulnerabilities are detected, output "null". {contract code} Remember your output should adhere to the following format: {json format}.

Fig. 4. Auditor prompt

B. Discrimination/ranking Stage

The role of the critic agent is to emulate an oracle, i.e., to discern the best answer from a multitude of false positives. To ascertain what is the best, we consider three distinct factors: correctness, severity and profitability, because although some false positives may be correct, they might possess a diminished level of severity or may not be profitable for attackers. Concretely, the critic agent is activated using the critic prompt shown in Figure 5, which directs the critic to evaluate the vulnerability, assign scores based on its reasoning, and provide explanations for these scores. Subsequently, we rank all vulnerabilities descendingly based on these scores and choose the top-k vulnerabilities from the list as the output. Ideally, if the input list of vulnerabilities contains the ground-truth answer, the critic will place it at the forefront. In our experiments, we employ only one critic agent, to ensure that the criticism and scoring remain consistent across the various vulnerabilities. Moreover, we set a low temperature value to reduce randomness.

Critic prompt: As a meticulous and harsh critic, your duty is to scrutinize the function and evaluate the identified vulnerabilities and reasonings with scores in terms of correctness, severity and profitability. Your criticism should include an explanation for your scoring. {output of auditors} Remember your output should adhere to the following format: {json format}.

Fig. 5. Critic prompt

IV. EXPERIMENT

In this section, we validate the previous analyses and the efficacy of GPTLENS via experimental results.

A. Experimental Settings

Dataset: We collected the source code of 13 smart contracts from Etherscan (https://etherscan.io), with each containing a reported vulnerability. We sourced the labels and descriptions for these vulnerabilities from the CVE database [45].

Competitors: The experiment involves six competitors under different settings. For methods, A, R, C, and O represent Auditor, Random, Critic and Oracle. For parameters, n denotes the number of auditors and m denotes the maximum number of vulnerabilities identified by each auditor. The notation (n=2, m=3) means there are 2 auditors, and each auditor can generate up to 3 vulnerabilities. GPT-4 is adopted as the backend LLM. The descriptions of the six competitors are as follows:
• A(n=1,m=1): One auditor identifies up to one vulnerability as the output (aka one-stage detection).
• A+R(n=1,m=3): One auditor identifies up to three vulnerabilities and randomly picks one as the output.
• A+C(n=1,m=3): One auditor identifies up to three vulnerabilities, and the critic scores them. The vulnerability with the highest score is selected as the output.
• A+O(n=1,m=3): One auditor identifies up to three vulnerabilities and an oracle is adopted to pick the best answer as the output.
• A+C(n=2,m=3): Two auditors identify up to three vulnerabilities each and the critic scores them. The vulnerability with the highest score is selected as the output.
• A+O(n=2,m=3): Two auditors identify up to three vulnerabilities each and an oracle is adopted to pick the best answer as the output.

For all the auditors, the temperature t is set to 0.7, while for the critic, t is set to 0 to achieve confident and consistent scoring. It should be noted that the oracle leverages the ground-truth label information; therefore A+O only demonstrates an ideal performance. Due to the constraint on the number of tokens per query, larger n and m are not tested in our experiments.

TABLE I
HIT TIMES ON 13 SMART CONTRACTS LOGGED IN THE CVE DATABASE.

Method             A      A+R    A+C    A+O    A+C    A+O
Parameter          n1m1   n1m3   n1m3   n1m3   n2m3   n2m3
2018-10299         3      1      2      3      3      3
2018-10666         3      1      3      3      3      3
2018-11335         1      1      1      3      1      3
2018-11411         0      2      2      3      2      3
2018-12025         0      0      2      3      3      3
2018-13836         2      0      1      1      0      2
2018-15552         0      0      1      2      2      3
2018-17882         0      0      2      2      3      3
2018-19830         0      1      2      2      3      3
2019-15078         0      0      0      0      0      0
2019-15079         0      0      0      0      0      0
2019-15080         3      1      2      3      3      3
2018-18425         0      0      0      0      0      0
Hit# (CVE)         5      6      10     10     9      10
Hit ratio (CVE)    38.5%  46.2%  76.9%  76.9%  69.2%  76.9%
Hit# (trial)       13     7      18     25     23     29
Hit ratio (trial)  33.3%  18.0%  46.2%  64.1%  59.0%  74.4%

B. Performance Comparison

Each detection is conducted over three trials. For each trial, up to one vulnerability is selected as the output. A trial is deemed successful only if the function, vulnerability, and reasoning are all in alignment with the CVE report. The number of successful trials is presented in Table I, where Hit# (CVE) is the number of smart contracts for which the method correctly detected vulnerabilities at least once, and Hit# (trial) represents the number of successful trials conducted across the 13 smart contracts.

From Table I we can make several observations:
(1) At the trial level, A+R(n=1,m=3) performs worse than A(n=1,m=1), suggesting that generating more answers introduces more false positives. Nonetheless, at the contract level, A+R(n=1,m=3) identifies some CVEs that are not detected by A(n=1,m=1), like 2018-11411 and 2018-19830, which implies that generating more answers increases the likelihood of generating the correct answer.
(2) A more evident observation to support the above argument is that A+O(n=1,m=3) works significantly better than A(n=1,m=1).
(3) A+C(n=1,m=3) outperforms A+R(n=1,m=3), indicating that the critic agent is crucial to discern true vulnerabilities from the false positives introduced by generating more answers. A+C(n=1,m=3) also works better than A(n=1,m=1): the Hit ratio (CVE) increases from 38.5% to 76.9%, and the Hit ratio (trial) increases from 33.3% to 46.2%.
(4) Increasing the number of auditors can further improve the performance: comparing A+C(n=1,m=3) with A+C(n=2,m=3), the Hit ratio (trial) increases from 46.2% to 59.0%.

C. Case Study

We provide a case study to demonstrate how GPTLENS performs by taking the smart contract code presented in Listing 4 (see Appendix) as the input, with n and m set to 1 and 3 correspondingly. The outputs of both the auditor and the critic are presented in Listing 3.

In the generation stage, the auditor identifies three vulnerabilities and their associated reasoning: (1) "Race condition" in the approve function (line 101-105); (2) "Race condition" in the transferFrom function (line 93-100); (3) "Unexpected behavior" in the UBSexToken function (line 115-119). Although all explanations seem plausible, the first and second vulnerabilities are actually false positives.
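As a sanity check on the Hit ratios in Table I, each ratio follows directly from the raw counts: 13 contracts are tried three times each, giving 39 trials in total. The helper below is our own arithmetic check, not the paper's code:

```python
CONTRACTS = 13
TRIALS = CONTRACTS * 3  # three detection trials per contract -> 39

def hit_ratio(hits: int, total: int) -> float:
    """Hit ratio in percent, rounded to one decimal place as in Table I."""
    return round(100 * hits / total, 1)

# One-stage baseline A(n=1,m=1) vs. the two-stage A+C(n=2,m=3) configuration.
print(hit_ratio(5, CONTRACTS), hit_ratio(13, TRIALS))  # -> 38.5 33.3
print(hit_ratio(9, CONTRACTS), hit_ratio(23, TRIALS))  # -> 69.2 59.0
```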
{
  "function_name": "approve",
  "vulnerability": "Race Condition",
  "auditor": "This function does not reset the allowance before setting the new one. This can be exploited by the spender by front running the approver, allowing them to increase their allowance.",
  "critic": "The statement is correct in pointing out that this function does not inherently reset the allowance. However, the function does include a check to ensure that if the allowance is non-zero, the new value must be zero (and vice versa). Therefore, the risk is not as severe as stated.",
  "correctness": 3,
  "severity": 0,
  "profitability": 0
},
{
  "function_name": "transferFrom",
  "vulnerability": "Race Condition",
  "auditor": "The _allowance is fetched before the transfer is made, making it susceptible to a race condition. An attacker could call approve() in a transaction with a higher gas price, allowing them to increase the allowance before the transfer is executed.",
  "critic": "The statement is incorrect. The atomicity and sequential execution of Ethereum transactions ensure that there's no race condition between fetching _allowance and executing the token transfer.",
  "correctness": 2,
  "severity": 0,
  "profitability": 0
},
{
  "function_name": "UBSexToken",
  "vulnerability": "Unexpected Behaviour",
  "auditor": "The function name does not match the contract name. This indicates that this function is intended to be the constructor, but it is not. This means that anyone can call the function at any time and reset the totalSupply and owner of the token.",
  "critic": "The reasoning is correct. The function name does not match the contract name, indicating that it is intended to be the constructor but it is not. This means that anyone can call the function at any time and reset the totalSupply and owner of the token, which is a severe problem.",
  "correctness": 9,
  "severity": 9,
  "profitability": 9
}

Listing 3. A case study on CVE 2018-19830.

For the first vulnerability, the LLM overlooks the condition statement in line 102. For the second vulnerability, the LLM disregards the fact that Ethereum transactions are executed sequentially and atomically. As for the third vulnerability, its name differs from "Constructor typo," which is labeled as the ground truth; nevertheless, the reasoning and identified function are accurate.

In the discrimination stage, the critic uses the vulnerabilities, function code, and the auditor's reasoning as input, and outputs criticism and scores with respect to correctness, severity, and profitability. As shown in Listing 3, the critic determines that the reasoning for the first and second vulnerabilities is incorrect, assigning them low scores. Conversely, the critic recognizes that the reasoning and identification of the third vulnerability are accurate, assigning it the highest scores. Consequently, GPTLENS successfully identifies the genuine vulnerability within the smart contract.

As we can see, the entire detection process neither involves manually-defined heuristics and rules for fixed-pattern recognition nor requires a predefined set of vulnerability types, showing that GPTLENS holds the potential to generalize well across a wide range of vulnerabilities.

V. FUTURE DIRECTIONS

To develop a general, practical LLM-powered smart contract vulnerability detection method, several directions can be explored in the future:

Diversity in generation: Increasing the diversity of the generation task helps to enhance the finding of more true positives. It is beneficial to explore innovative prompt engineering strategies and diversity metrics to encourage more generation and auditing diversity while maintaining the same number of generated samples.

Consistency in discrimination: Due to token count constraints, a large number of input vulnerabilities for the discrimination task need to be divided into multiple batches. This division can lead to scoring inconsistencies across different batches, even with a low temperature setting. In-context learning, using few-shot examples, could be explored to teach the LLM to score novel observations and unseen events more consistently.

Reasoning process optimization: A popular direction to enhance the efficacy of LLMs is to design intricate reasoning processes that mimic the thought process of humans, such as chain-of-thought [47], tree-of-thoughts [54] and cumulative reasoning [56], which can be adapted for the vulnerability detection task.

Integrating Generative AI agents: Generative AI agents [27], [51] employ LLMs as the core component and perform specific roles in various tasks like software development [30]. In this paper, we design only two synergistic roles for agents. Exploring additional roles for AI agents to achieve sophisticated functionalities, as well as designing how these agents collaboratively interact to solve complex detection tasks, are promising directions.

Enabling external knowledge plug-ins: LLMs feature capabilities to use tools or call external APIs to expand their contextual knowledge [34]. It would be intriguing to explore this functionality by allowing the LLM to autonomously determine when and what knowledge is beneficial for generating the correct answer for the vulnerability detection task.

LLM-assisted tools: Instead of serving as an end-to-end solution, an LLM can be utilized as a tool to assist developers and auditors throughout the entire software engineering lifecycle, including code generation [9], [20], code understanding [36], [46], vulnerability detection [42], [49] and code repair [28], [52], to name a few.

VI. RELATED WORK

Various research efforts are dedicated to detecting vulnerabilities in smart contracts.

Traditional methods: Static analysis tools like Securify [44], Vandal [4], Zeus [19], and Slither [11] examine the source code without execution, aiming to detect potential vulnerabilities based on code patterns and structures. In comparison, dynamic analysis tools [13], [18], [50], [55] employ fuzz testing techniques that generate test inputs to identify anomalies during the actual execution of smart contracts, providing insights into runtime vulnerabilities. Symbolic execution tools like Manticore [24] and Mythril [25] investigate vulnerabilities across both bytecode and source code levels by examining all possible execution paths. Additionally, formal verification techniques, such as VerX [29] and VeriSmart [37], validate smart contracts against user-defined specifications, ensuring adherence to desired properties.

DL-powered methods: Deep learning (DL)-based methods like sequence-based models [31], [41], CNN-based methods [39], and graph neural network-based methods [22], [23], [60] are proposed to extract high-level representations to enhance the efficacy of vulnerability detection. Some hybrid methods [21], [35], [53] combine deep-learning techniques with traditional methods. For example, ESCORT [35] and xFuzz [53] distill the outputs of traditional methods like Slither and Mythril to achieve good generality and inference efficiency. Some works [17], [38] are equipped with more advanced NLP techniques like BERT. A BERT-based approach [16] also demonstrates promising efficacy in Ethereum fraud detection tasks.

Generative LLM-powered methods: Very recently, some studies [8], [10] measure the performance of LLMs on real-world datasets, suggesting that LLMs face precision-related challenges due to a high occurrence of false positives. GPTScan [40] is introduced in an attempt to mitigate false positives by utilizing rule-based pre-processing and post-confirmation, which requires expert knowledge and extensive engineering efforts. In comparison, GPTLENS is more lightweight and entirely LLM-driven, making it general for a broader range of vulnerabilities.

VII. CONCLUSION

This study provides a systematic analysis of harnessing generative LLMs for smart contract auditing, especially on the challenge of balancing the generation of correct answers against the backdrop of false positives. To address this Catch-22 dilemma, we present an innovative two-stage framework, GPTLENS, by designing the LLM to play two adversarial agent roles: auditor and critic. The auditor focuses on uncovering diverse vulnerabilities complemented by intermediate reasoning, while the critic assesses the validity of these vulnerabilities and the associated reasoning. Empirical results demonstrate that GPTLENS delivers pronounced improvements over the conventional one-stage detection and is entirely LLM-driven, which negates the dependency on specialist expertise in smart contracts and exhibits generalization to a broad spectrum of vulnerabilities.

VIII. ACKNOWLEDGMENT

This research is partially sponsored by the NSF CISE grants 2038029, 2302720, 2312758, an IBM faculty award, and a grant from the CISCO Edge AI program.

REFERENCES

[1] AlgoWriting. A simple guide to setting the GPT-3 temperature, 2020. https://algowriting.medium.com/gpt-3-temperature-setting-101-41200ff0d0be.
[2] Anthropic. Introducing Claude. Anthropic Blog, 2022. https://www.anthropic.com/index/introducing-claude.
[3] L. Brent, N. Grech, S. Lagouvardos, B. Scholz, and Y. Smaragdakis. Ethainter: A smart contract security analyzer for composite vulnerabilities. In Proceedings of the 41st ACM SIGPLAN Conference on Programming Language Design and Implementation, pages 454–469, 2020.
[4] L. Brent, A. Jurisevic, M. Kong, E. Liu, F. Gauthier, V. Gramoli, R. Holz, and B. Scholz. Vandal: A scalable security analysis framework for smart contracts. arXiv preprint arXiv:1809.03981, 2018.
[5] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
[6] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
[7] Y. Cao, S. Hu, Y. Gong, Z. Li, Y. Yang, Q. Liu, and S. Ji. GIFT: Graph-guided feature transfer for cold-start video click-through rate prediction. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pages 2964–2973, 2022.
[8] C. Chen, J. Su, J. Chen, Y. Wang, T. Bi, Y. Wang, X. Lin, T. Chen, and Z. Zheng. When ChatGPT meets smart contract vulnerability detection: How far are we? arXiv preprint arXiv:2309.05520, 2023.
[9] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[10] I. David, L. Zhou, K. Qin, D. Song, L. Cavallaro, and A. Gervais. Do you still need a manual smart contract audit? arXiv preprint arXiv:2306.12338, 2023.
[11] J. Feist, G. Grieco, and A. Groce. Slither: A static analysis framework for smart contracts.
[31] P. Qian, Z. Liu, Q. He, R. Zimmermann, and X. Wang. Towards automated reentrancy detection for smart contracts based on sequential models. IEEE Access, 8:19685–19695, 2020.
[32] P. Qian, Z. Liu, Y. Yin, and Q. He. Cross-modality mutual learning for enhancing smart contract vulnerability detection on bytecode. In Proceedings of the ACM Web Conference 2023, pages 2220–2229, 2023.
[33] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[34] T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and T. Scialom. Toolformer: Language models can
In2019IEEE/ACM2ndInternationalWorkshopon teachthemselvestousetools. arXivpreprintarXiv:2302.04761,2023. Emerging Trends in Software Engineering for Blockchain (WETSEB), [35] C. Sendner, H. Chen, H. Fereidooni, L. Petzi, J. Ko¨nig, J. Stang, pages8–15.IEEE,2019. A. Dmitrienko, A.-R. Sadeghi, and F. Koushanfar. Smarter contracts: [12] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, Detectingvulnerabilitiesinsmartcontractswithdeeptransferlearning. S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. InNDSS,2023. Advancesinneuralinformationprocessingsystems,27,2014. [36] D. Shen, X. Chen, C. Wang, K. Sen, and D. Song. Benchmark- [13] G.Grieco,W.Song,A.Cygan,J.Feist,andA.Groce.Echidna:effective, ing language models for code syntax understanding. arXiv preprint
usable, and fast fuzzing for smart contracts. In Proceedings of the arXiv:2210.14473,2022. 29thACMSIGSOFTInternationalSymposiumonSoftwareTestingand [37] S.So,M.Lee,J.Park,H.Lee,andH.Oh. Verismart:Ahighlyprecise Analysis,pages557–560,2020. safetyverifierforethereumsmartcontracts. In2020IEEESymposium [14] J.Heller. Catch-22:anovel,volume4. SimonandSchuster,1999. onSecurityandPrivacy(SP),pages1678–1694.IEEE,2020. [15] X. Hou, Y. Zhao, Y. Liu, Z. Yang, K. Wang, L. Li, X. Luo, D. Lo, [38] X.Sun,L.Tu,J.Zhang,J.Cai,B.Li,andY.Wang.Assbert:Activeand J.Grundy,andH.Wang. Largelanguagemodelsforsoftwareengineer- semi-supervisedbertforsmartcontractvulnerabilitydetection. Journal ing: A systematic literature review. arXiv preprint arXiv:2308.10620, ofInformationSecurityandApplications,73:103423,2023. 2023. [39] Y. Sun and L. Gu. Attention-based machine learning model for smart [16] S. Hu, Z. Zhang, B. Luo, S. Lu, B. He, and L. Liu. Bert4eth: A pre- contractvulnerabilitydetection.InJournalofphysics:conferenceseries, trainedtransformerforethereumfrauddetection. InProceedingsofthe volume1820,page012004.IOPPublishing,2021. ACMWebConference2023,pages2189–2197,2023. [40] Y. Sun, D. Wu, Y. Xue, H. Liu, H. Wang, Z. Xu, X. Xie, and [17] S.Jeon,G.Lee,H.Kim,andS.S.Woo.Smartcondetect:Highlyaccurate Y.Liu.Whengptmeetsprogramanalysis:Towardsintelligentdetection smart contract code vulnerability detection mechanism using bert. In of smart contract logic vulnerabilities in gptscan. arXiv preprint KDDWorkshoponProgrammingLanguageProcessing,2021. arXiv:2308.03314,2023. [18] B.Jiang,Y.Liu,andW.K.Chan. Contractfuzzer:Fuzzingsmartcon- [41] W. J.-W. Tann, X. J. Han, S. S. Gupta, and Y.-S. Ong. Towards safer tractsforvulnerabilitydetection.InProceedingsofthe33rdACM/IEEE smart contracts: A sequence learning approach to detecting security International Conference on Automated Software Engineering, pages threats. arXivpreprintarXiv:1811.06632,2018. 259–269,2018. [42] C. Thapa, S. I. Jang, M. E. Ahmed, S. 
Camtepe, J. Pieprzyk, and [19] S.Kalra,S.Goel,M.Dhawan,andS.Sharma. Zeus:analyzingsafety S.Nepal.Transformer-basedlanguagemodelsforsoftwarevulnerability ofsmartcontracts. InNdss,pages1–12,2018. detection. In Proceedings of the 38th Annual Computer Security [20] Y. Li, D. Choi, J. Chung, N. Kushman, J. Schrittwieser, R. Leblond, ApplicationsConference,pages481–496,2022. T.Eccles,J.Keeling,F.Gimeno,A.DalLago,etal. Competition-level [43] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, codegenerationwithalphacode. Science,378(6624):1092–1097,2022. T. Lacroix, B. Rozie`re, N. Goyal, E. Hambro, F. Azhar, et al. [21] Z. Liao, Z. Zheng, X. Chen, and Y. Nan. Smartdagger: a bytecode- Llama:Openandefficientfoundationlanguagemodels. arXivpreprint basedstaticanalysisapproachfordetectingcross-contractvulnerability. arXiv:2302.13971,2023. InProceedingsofthe31stACMSIGSOFTInternationalSymposiumon [44] P. Tsankov, A. Dan, D. Drachsler-Cohen, A. Gervais, F. Buenzli, and SoftwareTestingandAnalysis,pages752–764,2022. M.Vechev. Securify:Practicalsecurityanalysisofsmartcontracts. In [22] Z. Liu, P. Qian, X. Wang, L. Zhu, Q. He, and S. Ji. Smart contract Proceedings of the 2018 ACM SIGSAC conference on computer and vulnerabilitydetection:frompureneuralnetworktointerpretablegraph communicationssecurity,pages67–82,2018. feature and expert pattern fusion. arXiv preprint arXiv:2106.09282, [45] N.vulnerabilitydatabase.Commonvulnerabilitiesandexposures(cves). 2021. https://cve.mitre.org/index.html. [23] Z. Liu, P. Qian, X. Wang, Y. Zhuang, L. Qiu, and X. Wang. Com- [46] Y. Wang, H. Le, A. D. Gotmare, N. D. Bui, J. Li, and S. C. Hoi. bininggraphneuralnetworkswithexpertknowledgeforsmartcontract Codet5+:Opencodelargelanguagemodelsforcodeunderstandingand vulnerability detection. IEEE Transactions on Knowledge and Data generation. arXivpreprintarXiv:2305.07922,2023. Engineering,2021. 
[47] J.Wei,X.Wang,D.Schuurmans,M.Bosma,F.Xia,E.Chi,Q.V.Le, [24] M.Mossberg,F.Manzano,E.Hennenfent,A.Groce,G.Grieco,J.Feist, D. Zhou, et al. Chain-of-thought prompting elicits reasoning in large T. Brunson, and A. Dinaburg. Manticore: A user-friendly symbolic languagemodels. AdvancesinNeuralInformationProcessingSystems, execution framework for binaries and smart contracts. In 2019 34th 35:24824–24837,2022. IEEE/ACMInternationalConferenceonAutomatedSoftwareEngineer- [48] D. Wong and M. Hemmel. Decentralized application security project ing(ASE),pages1186–1189.IEEE,2019. top10of2018,2018.
[25] Mythril. https://github.com/Consensys/mythril. [49] Y.Wu,N.Jiang,H.V.Pham,T.Lutellier,J.Davis,L.Tan,P.Babkin, [26] OpenAI. Gpt-4technicalreport,2023. https://arxiv.org/abs/2303.08774. and S. Shah. How effective are neural networks for fixing security [27] J.S.Park,J.C.O’Brien,C.J.Cai,M.R.Morris,P.Liang,andM.S. vulnerabilities. arXivpreprintarXiv:2305.18607,2023. Bernstein. Generativeagents:Interactivesimulacraofhumanbehavior. [50] V. Wu¨stholz and M. Christakis. Harvey: A greybox fuzzer for smart arXivpreprintarXiv:2304.03442,2023. contracts. InProceedingsofthe28thACMJointMeetingonEuropean [28] R.Paul,M.M.Hossain,M.Hasan,andA.Iqbal. Automatedprogram Software Engineering Conference and Symposium on the Foundations repair based on code review: How do pre-trained transformer models ofSoftwareEngineering,pages1398–1409,2020. perform? arXivpreprintarXiv:2304.07840,2023. [51] Z.Xi,W.Chen,X.Guo,W.He,Y.Ding,B.Hong,M.Zhang,J.Wang, [29] A. Permenev, D. Dimitrov, P. Tsankov, D. Drachsler-Cohen, and S. Jin, E. Zhou, et al. The rise and potential of large language model M.Vechev. Verx:Safetyverificationofsmartcontracts. In2020IEEE basedagents:Asurvey. arXivpreprintarXiv:2309.07864,2023. symposiumonsecurityandprivacy(SP),pages1661–1677.IEEE,2020. [52] C. S. Xia, Y. Wei, and L. Zhang. Automated program repair in [30] C. Qian, X. Cong, C. Yang, W. Chen, Y. Su, J. Xu, Z. Liu, and the era of large pre-trained language models. In Proceedings of the M.Sun.Communicativeagentsforsoftwaredevelopment.arXivpreprint 45th International Conference on Software Engineering (ICSE 2023). arXiv:2307.07924,2023. AssociationforComputingMachinery,2023.[53] Y.Xue,J.Ye,W.Zhang,J.Sun,L.Ma,H.Wang,andJ.Zhao. xfuzz:67 } Machinelearningguidedcross-contractfuzzing. IEEETransactionson68 functionfreezeAccount(addresstarget,boolfreeze)onlyOwnerpublic{ 69 frozenAccount[target]=freeze; DependableandSecureComputing,2022. 70 FrozenFunds(target,freeze); [54] S. Yao, D. Yu, J. Zhao, I. Shafran, T. L. 
Griffiths, Y. Cao, and71 } 72 functionaccountFrozenStatus(addresstarget)viewreturns(boolfrozen){ K.Narasimhan.Treeofthoughts:Deliberateproblemsolvingwithlarge73 returnfrozenAccount[target]; languagemodels. arXivpreprintarXiv:2305.10601,2023. 74 } 75 functiontransferOwnership(addressnewOwner)onlyOwnerpublic{ [55] W. Zhang, S. Banescu, L. Pasos, S. Stewart, and V. Ganesh. Mpro:76 if(newOwner!=address(0)){ Combining static and symbolic analysis for scalable testing of smart77 addressoldOwner=owner; 78 owner=newOwner; contract. In 2019 IEEE 30th International Symposium on Software79 OwnershipTransferred(oldOwner,owner); ReliabilityEngineering(ISSRE),pages456–462.IEEE,2019. 80 } 81 } [56] Y. Zhang, J. Yang, Y. Yuan, and A. C.-C. Yao. Cumulative reasoning82 functionswitchLiquidity(bool_transferable)onlyOwnerreturns(boolsuccess){ withlargelanguagemodels. arXivpreprintarXiv:2308.04371,2023. 83 transferable=_transferable; 84 returntrue; [57] Z.Zhang,B.Zhang,W.Xu,andZ.Lin. Demystifyingexploitablebugs85 } insmartcontracts. ICSE,2023. 86 functionliquidityStatus()viewreturns(bool_transferable){ 87 returntransferable; [58] W.X.Zhao,K.Zhou,J.Li,T.Tang,X.Wang,Y.Hou,Y.Min,B.Zhang,88 } J. Zhang, Z. Dong, et al. A survey of large language models. arXiv89 } 90 preprintarXiv:2303.18223,2023. 91 contractStandardTokenisBasicToken{ [59] L. Zhou, X. Xiong, J. Ernstberger, S. Chaliasos, Z. Wang, Y. Wang,92 mapping(address=>mapping(address=>uint))allowed; 93 functiontransferFrom(address_from,address_to,uint_value)onlyPayloadSize K. Qin, R. Wattenhofer, D. Song, and A. Gervais. Sok: Decentralized (3*32)unFrozenAccountonlyTransferable{ finance(defi)attacks.In2023IEEESymposiumonSecurityandPrivacy94 var_allowance=allowed[_from][msg.sender]; 95 require(!frozenAccount[_from]&&!frozenAccount[_to]); (SP),pages2444–2461.IEEE,2023. 
96 balances[_to]=balances[_to].add(_value); [60] Y.Zhuang,Z.Liu,P.Qian,Q.Liu,X.Wang,andQ.He.Smartcontract97 balances[_from]=balances[_from].sub(_value); 98 allowed[_from][msg.sender]=_allowance.sub(_value); vulnerability detection using graph neural networks. In Proceedings99 Transfer(_from,_to,_value); of the Twenty-Ninth International Conference on International Joint100 } 101 functionapprove(address_spender,uint_value)unFrozenAccount{ ConferencesonArtificialIntelligence,pages3283–3290,2021. 102 if((_value!=0)&&(allowed[msg.sender][_spender]!=0))throw; 103 allowed[msg.sender][_spender]=_value; 104 Approval(msg.sender,_spender,_value); APPENDIXA 105 } 106 functionallowance(address_owner,address_spender)viewreturns(uint
SOURCECODEOFCASESTUDY remaining){ 107 returnallowed[_owner][_spender]; 108 } 1 pragmasolidityˆ0.4.24; 109 } 2 110 3 librarySafeMath{ 111 contractBAFCTokenisStandardToken{ 4 //...(contentsoftheSafeMathlibrary) 112 stringpublicname="BusinessAllianceFinancialCircle"; 5 } 113 stringpublicsymbol="BAFC"; 6 114 uintpublicdecimals=18; 7 contractERC20Basic{ 115 functionUBSexToken(){ 8 uintpublictotalSupply; 116 owner=msg.sender; 9 functionbalanceOf(addresswho)constantreturns(uint); 117 totalSupply=1.9*10**26; 10 functiontransfer(addressto,uintvalue); 118 balances[owner]=totalSupply; 11 eventTransfer(addressindexedfrom,addressindexedto,uintvalue); 119 } 12 functionallowance(addressowner,addressspender)constantreturns(uint); 120 function()publicpayable{ 13 functiontransferFrom(addressfrom,addressto,uintvalue); 121 revert(); 14 functionapprove(addressspender,uintvalue); 122 } 15 eventApproval(addressindexedowner,addressindexedspender,uintvalue); 123 } 16 } 17 Listing4. SmartcontractcodereportedinCVE2018-19830 18 contractBasicTokenisERC20Basic{ 19 usingSafeMathforuint; 20 addresspublicowner; 21 boolpublictransferable=true; 22 mapping(address=>uint)balances; 23 mapping(address=>bool)publicfrozenAccount; 24 modifieronlyPayloadSize(uintsize){ 25 if(msg.data.length<size+4){ 26 throw; 27 } 28 _; 29 } 30 modifierunFrozenAccount{ 31 require(!frozenAccount[msg.sender]); 32 _; 33 } 34 modifieronlyOwner{ 35 if(owner==msg.sender){ 36 _; 37 }else{ 38 InvalidCaller(msg.sender); 39 throw; 40 } 41 } 42 modifieronlyTransferable{ 43 if(transferable){ 44 _; 45 }else{ 46 LiquidityAlarm("Theliquidityisswitchedoff"); 47 throw; 48 } 49 } 50 eventFrozenFunds(addresstarget,boolfrozen); 51 eventInvalidCaller(addresscaller); 52 eventBurn(addresscaller,uintvalue); 53 eventOwnershipTransferred(addressindexedfrom,addressindexedto); 54 eventInvalidAccount(addressindexedaddr,bytesmsg); 55 eventLiquidityAlarm(bytesmsg); 56 functiontransfer(address_to,uint_value)onlyPayloadSize(2*32) 
unFrozenAccountonlyTransferable{ 57 if(frozenAccount[_to]){ 58 InvalidAccount(_to,"Thereceiveraccountisfrozen"); 59 }else{ 60 balances[msg.sender]=balances[msg.sender].sub(_value); 61 balances[_to]=balances[_to].add(_value); 62 Transfer(msg.sender,_to,_value); 63 } 64 } 65 functionbalanceOf(address_owner)viewreturns(uintbalance){ 66 returnbalances[_owner];
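To make the auditor–critic design summarized in the conclusion concrete, here is a minimal, self-contained Python sketch of a two-stage pipeline in the spirit of GPTLENS. The `llm` callable, both prompts, and the weighted aggregation of correctness, severity, and profitability are illustrative assumptions for this sketch, not the paper's exact prompts or scoring formula.

```python
import json

def run_gptlens(contract_src, llm, top_k=1):
    """Two-stage detection: an auditor proposes candidate vulnerabilities,
    a critic scores them, and candidates are ranked by the critic's scores."""
    # Stage 1 (auditor): elicit diverse candidates with intermediate reasoning.
    auditor_prompt = (
        "You are a smart contract auditor. List candidate vulnerabilities as "
        "JSON [{function, vulnerability, reason}] for:\n" + contract_src
    )
    candidates = json.loads(llm(auditor_prompt))

    # Stage 2 (critic): assess each candidate; correctness dominates the final
    # score so that plausible-but-wrong findings (false positives) are demoted.
    scored = []
    for cand in candidates:
        critic_prompt = (
            "Assess this finding; return JSON "
            "{correctness, severity, profitability}, each 1-9:\n" + json.dumps(cand)
        )
        s = json.loads(llm(critic_prompt))
        # Illustrative weighting, not the paper's exact formula.
        final = 0.5 * s["correctness"] + 0.25 * s["severity"] + 0.25 * s["profitability"]
        scored.append({**cand, **s, "final_score": final})

    scored.sort(key=lambda c: c["final_score"], reverse=True)
    return scored[:top_k]
```

Both roles can be played by the same underlying model with different prompts; ranking by the critic's aggregated score is what demotes the auditor's plausible-but-wrong findings.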
Towards Causal Deep Learning for Vulnerability Detection

Md Mahbubur Rahman (Iowa State University, Ames, IA, USA; mdrahman@iastate.edu)
Ira Ceka (Columbia University, New York, NY, USA; ira.ceka@columbia.edu)
Chengzhi Mao (Columbia University, New York, NY, USA; mcz@cs.columbia.edu)
Saikat Chakraborty (Microsoft Research, Redmond, WA, USA; saikatc@microsoft.com)
Baishakhi Ray (Columbia University, New York, NY, USA; rayb@cs.columbia.edu)
Wei Le (Iowa State University, Ames, IA, USA; weile@iastate.edu)

arXiv:2310.07958v5 [cs.SE] 15 Jan 2024

ABSTRACT

Deep learning vulnerability detection has shown promising results in recent years. However, an important challenge that still blocks it from being very useful in practice is that the model is not robust under perturbation and cannot generalize well over out-of-distribution (OOD) data, e.g., when applying a trained model to unseen projects in the real world. We hypothesize that this is because the model learned non-robust features, e.g., variable names, that have spurious correlations with labels. When the perturbed and OOD datasets no longer have the same spurious features, the model prediction fails. To address this challenge, in this paper, we introduce causality into deep learning vulnerability detection. Our approach, CausalVul, consists of two phases. First, we designed novel perturbations to discover spurious features that the model may use to make predictions. Second, we applied causal learning algorithms, specifically do-calculus, on top of existing deep learning models to systematically remove the use of spurious features and thus promote causality-based prediction. Our results show that CausalVul consistently improved the model accuracy, robustness, and OOD performance for all the state-of-the-art models and datasets we experimented with. To the best of our knowledge, this is the first work that introduces do-calculus-based causal learning to software engineering models and shows it is indeed useful for improving model accuracy, robustness, and generalization. Our replication package is located at https://figshare.com/s/0ffda320dcb96c249ef2.

CCS CONCEPTS

• Software and its engineering → Empirical software validation; • Security and privacy → Software security engineering.

KEYWORDS

vulnerability detection, causality, spurious features

ACM Reference Format:
Md Mahbubur Rahman, Ira Ceka, Chengzhi Mao, Saikat Chakraborty, Baishakhi Ray, and Wei Le. 2024. Towards Causal Deep Learning for Vulnerability Detection. In 2024 IEEE/ACM 46th International Conference on Software Engineering (ICSE '24), April 14–20, 2024, Lisbon, Portugal. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3597503.3639170

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
ICSE '24, April 14–20, 2024, Lisbon, Portugal
© 2024 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-0217-4/24/04. https://doi.org/10.1145/3597503.3639170

1 INTRODUCTION

A source code vulnerability refers to a potential flaw in the code that may be exploited by an external attacker to compromise the security of the system. Vulnerabilities have caused significant data and financial loss in the past [1, 2]. Despite the numerous automatic vulnerability detection tools that have been developed, vulnerabilities are still prevalent. The National Vulnerability Database received and analyzed a staggering 16,000 vulnerabilities in the year 2023 alone [1]. The Cybersecurity and Infrastructure Security Agency (CISA) of the United States government reported that, since 2022, there have been over 850 documented instances of known vulnerabilities being exploited in products from more than 150 companies, including major tech firms such as Google, Microsoft, Adobe, and Cisco [2].

[1] https://nvd.nist.gov/general/nvd-dashboard
[2] https://www.cisa.gov/known-exploited-vulnerabilities-catalog

Due to recent advancements in deep learning, researchers are working on utilizing deep learning to enhance vulnerability detection capabilities and have achieved promising results. Earlier models like Devign [27] and ReVeal [7] relied on architectures such as GNNs, while more recent state-of-the-art (SOTA) models have moved towards transformer-based architectures. CodeBERT [13] combines a masked-language-model (MLM) objective with a replaced-token-detection objective on both code and comments for pretraining. GraphCodeBERT [16] leverages semantic-level code information such as data flow to enhance its pre-training objectives. The more recent model UniXcoder [15] leverages cross-modal contents like ASTs and comments to enrich the code representation.

However, an important challenge of deep learning tools is that the models learn and use spurious correlations between code features and labels, instead of using root causes, to predict vulnerability. We call the features used in these spurious correlations spurious features. As an example, the code in Figure 1a contains a memory leak. The SOTA model CodeBERT detected this vulnerability correctly with very high confidence (probability = 0.95). However, after we refactored the code and renamed the variables (Figure 1b), the model predicted this function as non-vulnerable. In Figure 2, we show that the change of variable names caused the code representation to move from the vulnerable cluster to the non-vulnerable cluster.

Apparently, the model did not use the cause of the vulnerability, that "the allocated memory has to be released on every path," for prediction. In this example, av_malloc and av_freep are causal features: it is the incorrect use of the av_freep API that leads to the vulnerability. However, the model associated s, nbits, and inverse with the vulnerable label and associated out1, dst0, and out0 with the non-vulnerable label. Such correlations are spurious; those variable names are spurious features. We believe that spurious correlations are an important reason that prevents the models from being robust and from being applicable to unseen projects.

To address this challenge, in this paper, we introduce causality (a branch of statistics) into deep learning vulnerability detection. We developed CausalVul and applied a causal learning algorithm that implements do-calculus and the backdoor criterion from causality, aiming to disable models from using spurious features and to promote the use of causal features for prediction. Our approach consists of two phases: the first discovers the spurious features a model uses, and the second removes them via causal learning.

In summary, this paper makes the following contributions:

• We discovered and experimentally demonstrated that variable names and API names are used as spurious features in current deep learning vulnerability detection models;
• We formulated deep learning vulnerability detection using causality and applied causal deep learning to remove spurious features in the models; and
• We experimentally demonstrated that causal deep learning can improve model accuracy, robustness, and generalization.

2 AN OVERVIEW OF OUR APPROACH AND ITS NOVELTY

In Figure 3, we present an overview of our approach, CausalVul. CausalVul consists of two stages: (i) discovering spurious features and (ii) using causal learning to remove the spurious features.

Discovering Spurious Features.
First, we work on discovering the spurious features used in deep learning vulnerability detection models. This task is challenging due to the inscrutable nature of deep learning models. Spurious features are attributes of the input dataset, e.g., variable names, that exhibit correlation with the target labels (i.e., vulnerable/non-vulnerable) but are not the root causes of being (non-)vulnerable, and thus may not generalize to a desired test distribution. To discover the spurious features, we first hypothesize a spurious feature based on our domain knowledge; for instance, we assumed variable names can be spurious features for vulnerability detection tasks, since models should not make a decision based on names. Next, we design perturbations based on each hypothesized feature, drawing inspiration from adversarial robustness testing [24]. Specifically, we transform the programs via a refactoring tool that changes the feature but does not change the semantics of the code: the perturbation changes lexical tokens of the code but preserves its semantics, like compiler transformations. In particular, we hypothesized that the models may use variable names as a spurious feature, as in Figure 1, and that they may also use API names as another spurious feature. Correspondingly, we designed PerturbVar and PerturbAPI, two methods to change the programs and then observe whether the models' predictions for these programs change. After we observe that the model changes its prediction for many such examples, we conclude that the model likely relied on the spurious feature. Through our empirical studies, we validated that the models indeed use certain variable names and API names as spurious features, and that the models use different sets of such names for vulnerable and non-vulnerable examples (Section 3). Discovering spurious features is a very challenging task: as an instance, randomly altering variable names does not reveal their spurious nature; we need to carefully identify which variable(s) need to be changed, and to which name(s), to maximize the impact. To the best of our knowledge, there has been no systematic study on finding spurious features in vulnerability detection models.

Removing Spurious Features. Once we have discovered spurious features, in this step we try to reduce their impact on the model decision. One easy way to achieve that could be to augment the training data by randomizing the spurious features and then retrain the model; however, this would take much time for big, pretrained code representations/deep models. So instead, to disable the models from using such spurious features, in the second phase we applied causal learning, which consists of special training and inference routines on top of existing deep learning models. The causal learning puts a given model under intervention, computing how the model would behave if it did not have the spurious features; this can be done via the backdoor criterion from just the training data together with our knowledge of the spurious features. We take an existing representation and apply causal learning to disable the model from using spurious features at inference time, which ensures the model relies mostly on the causal features for prediction. We accomplish this goal as follows. First, as input, we take the existing representation and the known spurious feature to be eliminated. We then train a model such that the representation learned in this step is especially biased towards the targeted spurious feature; that is, we explicitly encode a known spurious feature F into the model. Next, during inference for an example x, the model makes a decision based on the input representation while ignoring the biased representation learned in the previous step: we use x's representation jointly with a set of different spurious features, so that the model cannot use F to make the final decision for x. In particular, we marginalize over a set of examples that contain different spurious features and prevent the inference from utilizing the targeted spurious feature while making the final decision. This approach relies on the principles of "do-calculus" and the "backdoor criterion," which we explain in detail in Sections 4 and 5, respectively.

We evaluated CausalVul using Devign [27] and Big-Vul [12], two real-world vulnerability detection datasets, and investigated three SOTA models: CodeBERT, GraphCodeBERT, and UniXcoder. Experimental evaluation shows that the causal model in CausalVul learns to ignore the spurious features, improving the overall performance on vulnerability detection by 6% on Big-Vul and 7% on Devign compared to the SOTA. CausalVul also demonstrates significant improvement in generalization testing, improving the performance of the model trained on Devign by up to 100% and of the model trained on Big-Vul by up to 200%. Experimental results also show that CausalVul is more robust than the current SOTA models: it improves the performance by up to 62% on Devign and 100% on Big-Vul on the perturbed data we constructed for robustness testing.

To the best of our knowledge, this is the first work to bridge causal learning with deep learning models for vulnerability detection. It helps researchers understand to what degree causality can help deep learning models remove spurious features, and to what degree we can improve model accuracy, robustness, and generalization after removing spurious features. Such causal learning approaches have been applied in computer vision [9]. A key difference is that the vision domain is continuous, in that pixel values are continuous real numbers, while program features are discrete. Thus, in the domain of program data, we can explicitly discover spurious features and then apply causal learning in a targeted manner.

(a) Vulnerable code, correctly predicted:

    FFTContext *av_fft_init(int nbits, int inverse) {
        FFTContext *s = av_malloc(sizeof(*s));
        if (s && ff_fft_init(s, nbits, inverse))
            av_freep(&s);
        return s;
    }
    // Prediction probability: 0.9493

(b) Perturbed vulnerable code, mispredicted:

    FFTContext *av_fft_init(int dst0, int out0) {
        FFTContext *out1 = av_malloc(sizeof(*out1));
        if (out1 && ff_fft_init(out1, dst0, out0))
            av_freep(&out1);
        return out1;
    }
    // Prediction probability: 0.2270

Figure 1: A vulnerable example predicted as vulnerable with probability 0.9493, but predicted as non-vulnerable with probability 0.2270 when names are perturbed with some of the spurious names from the opposite class.

Figure 2: Visualization using Principal Component Analysis (PCA) of Figure 1's code representations generated by CodeBERT, before and after perturbing the names.

Figure 3: CausalVul: an overview.

3 DISCOVERING SPURIOUS FEATURES

In this section, we illustrate our technique for discovering spurious features through semantics-preserving perturbations. Spurious features are features that spuriously correlate with the output. These spurious correlations often exist only in training but not in testing, especially testing under perturbed data or on unseen projects. That is because a feature that is spurious, as opposed to causal, changes across domains and is no longer useful for prediction, as shown in Figure 1. Thus, a standard way to determine whether a feature is spurious is to perturb the values of the feature and observe how the output changes.
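The perturb-and-observe check described above can be sketched as follows. The regex-based rename and the single-token stub model used here are illustrative stand-ins; the paper perturbs code with the NatGen refactoring tool and measures the F1 drop of trained transformer models.

```python
import re

def rename_variables(code, mapping):
    # Whole-word textual renaming; a real refactoring tool works on the
    # parse tree so that strings, fields, and API names are left untouched.
    for old, new in mapping.items():
        code = re.sub(rf"\b{re.escape(old)}\b", new, code)
    return code

def prediction_flip_rate(model, samples, mapping):
    """Fraction of samples whose predicted label changes under a
    semantics-preserving rename; a high rate suggests the model relies
    on the renamed identifiers as spurious features."""
    flips = sum(model(c) != model(rename_variables(c, mapping)) for c in samples)
    return flips / len(samples)
```

Over a whole test set, a high flip rate (or, equivalently, a degraded F1 score) under a semantics-preserving rename indicates that the model leans on the renamed identifiers rather than on causal features.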
making the final decision. This approach relies on the principles of "do-calculus" and the "backdoor criterion," which we will explain in detail in Sections 4 and 5, respectively.

To the best of our knowledge, this is the first work to bridge causal learning with deep learning models for vulnerability detection. It helps researchers understand to what degree causality can

Figure 3: CausalVul: an overview

Following this idea, we first hypothesized two spurious features, variable names and API names, that the current deep learning models may use for vulnerability detection. We chose these two spurious features as a proof of concept to check whether the model prediction would change if we perturb the lexical and syntactic characteristics, respectively, while keeping the overall semantics of the program the same. We applied an existing code refactoring tool, NatGen [3], to perturb the code in the test dataset, and observed whether the model performance changed between the original test set and the perturbed test set (see the problem formulation in Section 3.1). Interestingly, we found that randomly changing the variable names and function names does not bring down the model performance. Thus we designed novel perturbation methods that can demonstrate the two spurious features. See Sections 3.2 to 3.4.

3.1 Problem Formulation

Given a perturbation p, a code sample s, and a trained model M, we discover a spurious feature if the following conditions are met: (1) the application of the perturbation, p(s), does not alter the semantics of the function; (2) the model's prediction changes upon transformation; and (3) the candidates for a perturbation p are drawn from the training distribution.

We will use the F1 score as a more thorough evaluation metric for our imbalanced datasets and for condition (2). A degraded F1 score indicates that the application of the perturbation p(s) has resulted in mis-classifications (flipped labels).

ICSE '24, April 14–20, 2024, Lisbon, Portugal: Md Mahbubur Rahman, Ira Ceka, Chengzhi Mao, Saikat Chakraborty, Baishakhi Ray, and Wei Le

3.2 PerturbVar: Variable Name as a Spurious Feature

To demonstrate that some variable names are indeed used by the models as spurious features, we design an extremely perturbed test set. We analyze the training dataset and sort the variable names based on their frequency of occurrence in vulnerable functions and non-vulnerable functions, respectively. When replacing existing variable names in the test set, we randomly select a name from the top-K most-frequent variable names of the opposite-class label: for a non-vulnerable sample, we replace the existing variable names with a name from the vulnerable training set, but which does not occur in the non-vulnerable training set, and vice-versa. We apply this perturbation to every sample in the test set.

Table 1: PerturbVar: Impact of Variable Name Perturbation (F1; columns are CodeBERT Devign/Big-Vul, UniXcoder Devign/Big-Vul, GraphCodeBERT Devign/Big-Vul)

Top-K (Freq.)  CodeBERT     UniXcoder    GraphCodeBERT
Baseline       0.61  0.36   0.63  0.38   0.62  0.37
Random         0.61  0.32   0.63  0.36   0.62  0.35
Top100         0.61  0.25   0.52  0.24   0.60  0.27
Top50          0.55  0.25   0.55  0.22   0.59  0.26
Top25          0.54  0.26   0.56  0.25   0.59  0.27
Top20          0.55  0.25   0.56  0.24   0.59  0.28
Top15          0.54  0.23   0.56  0.24   0.59  0.27
Top10          0.54  0.26   0.53  0.23   0.57  0.28
Top5           0.52  0.21   0.52  0.18   0.58  0.26

We inject dead-code at n random positions within the code sample. The dead-code block is composed of m distinct function/API calls (m and n are configurable; we used m = 5, n = 5 to produce the results in Table 2). We guard the block of API calls with an unsatisfied condition, ensuring the loop is never executed, as shown in Figure 4:

while ( _i_4 > _i_4 ) {
    tcg_out_r(s, args[1]); help_cmd(argv[0]);
    cris_alu(dc, CC_OP_BOUND, cpu_R[dc->op2], cpu_R[dc->op2], l0, 4);
    RET_STOP(ctx); tcg_out8(s, args[3]);
}

Figure 4: Dead-code composed of our spurious feature, API calls

Table 2: PerturbAPI: Impact of API Name Perturbation (F1; columns as in Table 1)

Dead-code Type  CodeBERT     UniXcoder    GraphCodeBERT
Baseline        0.61  0.36   0.63  0.38   0.62  0.37
Random          0.61  0.35   0.62  0.36   0.62  0.35
API             0.52  0.10   0.34  0.11   0.47  0.09

Observation. We compare our results to (1) the baseline performance and (2) the random dead-code transformation performance, shown in Table 2. Our results show that dead-code composed of API calls severely hurts model performance compared to the vanilla baseline and the random dead-code transformation. Model performance degrades proportionally with the inclusion of increased API calls and increased injection locations. Performance in the vanilla model degrades by as much as 28.73% on Devign and 27.99% on Big-Vul.

3.4 PerturbJoint: Combine Them Together

We hypothesize that the composition of the two spurious features will further degrade model performance. We set up the study where, for every sample constructed in the PerturbAPI dataset from Section 3.3, we replace existing variable names with a random selection from the top-K most-frequent variable names of the opposite class (using the same approach from Section 3.2).

Observation. As shown in Table 1, we are able to degrade the F1 score by as much as 11.5% on the Devign dataset, and by as much as 20% on the Big-Vul dataset. We have observed the performance degradation across all the datasets and multiple architectures: CodeBERT, GraphCodeBERT, and UniXcoder. However, in the randomized setting, the performance almost does not change relative to the baseline. Introducing common vulnerable names into non-vulnerable code samples causes the model to misclassify the sample as vulnerable. Conversely, introducing common non-vulnerable names into
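The PerturbVar renaming described above can be sketched as follows. This is a minimal Python illustration, not the authors' implementation; the regex-based identifier extraction, the tiny keyword list, and the example pool of "vulnerable-class" names are all assumptions for illustration:

```python
import random
import re

# Hypothetical top-K most-frequent variable names from the vulnerable
# training examples (assumed values for illustration).
TOP_VULNERABLE_NAMES = ["buf", "len", "ptr", "size", "tmp"]

# Tiny stand-in for a real C keyword filter.
C_KEYWORDS = {"int", "char", "void", "if", "while", "for", "return"}

def perturb_variables(code: str, pool, seed=0):
    """Rename every non-keyword identifier to a distinct name drawn from
    the opposite-class pool, as in the PerturbVar test-set construction."""
    rng = random.Random(seed)
    idents = sorted(set(re.findall(r"\b[A-Za-z_]\w*\b", code)) - C_KEYWORDS)
    # sample without replacement so distinct variables stay distinct,
    # keeping the function's semantics unchanged
    new_names = rng.sample(pool, k=min(len(idents), len(pool)))
    mapping = dict(zip(idents, new_names))
    return re.sub(r"\b[A-Za-z_]\w*\b",
                  lambda m: mapping.get(m.group(0), m.group(0)), code)

# A non-vulnerable sample receives names frequent in the vulnerable class.
perturbed = perturb_variables("int total = a + b; return total;",
                              TOP_VULNERABLE_NAMES)
```

A production version would use a proper parser rather than a regex, and would restrict the pool to names absent from the sample's own class, as the section describes.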
vulnerable code samples causes the model to misclassify the sample as non-vulnerable. The more common the variable names used (i.e., the lower the Top-K), the more the performance degrades.

3.3 PerturbAPI: API Name as a Spurious Feature

Modern programs frequently use API calls. We conjecture that the models may establish spurious correlations between API names and vulnerabilities. Similar to the approach used in PerturbVar, we ranked the frequency of the API calls in the training data, for vulnerable examples and non-vulnerable examples respectively. We then insert API calls (that are ranked in the top-100 occurring calls in non-vulnerable examples, but which are not frequently occurring in the vulnerable examples) into vulnerable examples, and vice-versa. To preserve the semantics of code, we insert these API calls as "dead-code", i.e., this code will never be executed.

Observation. In Table 3, our results show that the composition of API dead-code and variable renaming further degrades the model. The model degradation is more severe when applying the composition of the settings. In the combined setting, performance degrades by as much as 41.37% for Devign and 33.89% for Big-Vul.

Summary. In this section, we investigated multiple datasets and multiple SOTA models, revealing that variable names and API names are spurious features frequently utilized by these models. Interestingly, the models associate different variable names and API names as spurious features for different labels. Consequently, it became evident that only meticulously designed perturbations, not random ones, showcased the usage of these spurious features, consequently leading to a decline in the models' performance on the perturbed datasets.

Towards Causal Deep Learning for Vulnerability Detection, ICSE '24, April 14–20, 2024, Lisbon, Portugal

Table 3: PerturbJoint: Impact of Joint Perturbation (F1; columns are CodeBERT Devign/Big-Vul, UniXcoder Devign/Big-Vul, GraphCodeBERT Devign/Big-Vul)

Top-K (Freq.)  CodeBERT     UniXcoder    GraphCodeBERT
Baseline       0.61  0.36   0.63  0.38   0.62  0.37
Top100         0.38  0.07   0.23  0.07   0.59  0.06
Top50          0.37  0.07   0.25  0.06   0.60  0.07
Top25          0.36  0.07   0.28  0.08   0.60  0.07
Top20          0.38  0.08   0.27  0.08   0.48  0.07
Top15          0.36  0.07   0.27  0.06   0.52  0.07
Top10          0.37  0.08   0.24  0.07   0.45  0.08
Top5           0.33  0.06   0.22  0.05   0.47  0.07

variable and not directly observed in the training data distribution. For example, U can denote the expertise or coding style of the programmer or the type of software application. Because of U, there can be different spurious features in the code, like the two we showed in Section 3, denoted as M1 and M2.

M1 represents variable naming styles, as developers may like to use certain formats of variable names in their code. Similarly, M2 represents API names, as certain developers or applications may be more likely to use a particular set of API calls. These basic (text) features of M1 and M2 influence the code. Thus, we add the edges X to M1 and M2, respectively. Note that the causal graph can be expanded to integrate more of such spurious features. We leave this to future work.

In Figure 5(a), we also have the edge U to Y, indicating that U can impact Y. For example, a junior developer may be more likely to introduce vulnerabilities; similarly, certain APIs may be more likely to introduce vulnerabilities, e.g., SQL injection.

4 CAUSAL LEARNING TO REMOVE SPURIOUS FEATURES

In this section, we present how to apply causal learning to remove spurious features in the vulnerability models.
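A minimal sketch of this dead-code injection in Python. The always-false guard, the example API list, and line-based insertion positions are illustrative assumptions (the study used m = 5 calls at n = 5 positions):

```python
import random

def inject_dead_code(func_lines, api_calls, n=2, seed=0):
    """Insert n never-executed blocks of API calls into a function body.
    The guard `_i_0 > _i_0` is always false, so program semantics are
    preserved while the spurious API names appear in the text."""
    rng = random.Random(seed)
    block = ("while ( _i_0 > _i_0 ) { "
             + " ".join(f"{api}();" for api in api_calls) + " }")
    out = list(func_lines)
    # pick n positions inside the body (never before the signature line);
    # inserting in reverse keeps earlier positions valid
    for pos in sorted(rng.choices(range(1, len(out)), k=n), reverse=True):
        out.insert(pos, block)
    return out

# Hypothetical APIs frequent in the opposite class of this sample:
apis = ["tcg_out_r", "help_cmd", "cris_alu", "RET_STOP", "tcg_out8"]
perturbed = inject_dead_code(["int f(int a) {", "  return a + 1;", "}"], apis)
```

Because the guard can never be true, a compiler would treat the block as unreachable, which is exactly the semantics-preservation property the perturbation relies on.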
4.1 Causal Graph for Vulnerability Detection

To apply causal learning, our first step is to construct a causal graph to model how vulnerability data is generated from a statistical point of view, shown in Figure 5(a). A causal graph visually represents the causal relationships between different variables or events. In the graph, nodes represent random variables, and directed edges between nodes indicate causal relationships. Thus, A → B means variable A directly influences variable B. The absence of an edge between two nodes implies no direct causal relationship between them.

In Figure 5(a), X represents a function in the program. Whether there is a vulnerability or not in the code is directly dependent on the function. So we add an edge from X to Y, where Y is the label of vulnerability detection: 1 indicates vulnerable, and 0 indicates not vulnerable.

4.2 Applying Causality

When we train a deep learning model, using the code X as input, to predict the vulnerability label Y, we learn the correlation P(Y|X) in the training dataset. However, when we deploy this trained model in a new environment, e.g., handling perturbed datasets or unseen datasets, the domain will change. Due to the shift of E from 0 to 1, the correlation P(Y|X) will be different, denoted as P(Y|X, E = 0) ≠ P(Y|X, E = 1). From the causal graph point of view, X and Y are both descendants of E and therefore will be affected by its change.

To improve the models' performance in generalizing to new environments beyond the training, our goal is to learn the signal that is invariant across new environments and domains. Applying causal learning, we intervene on the causal graph so that the correlation between X and Y is the same no matter how the environment E changes. In causality, such an intervention is performed via do-calculus, denoted as P(Y|do(X = x)).
The key idea is that we imagine having an oracle that can write perfect, unbiased code x; we then use such code to replace the original code X for vulnerability detection. Doing so, we create a new causal graph by removing all the incoming edges to the node X and putting the new code here, X = x. See Figure 5(b). In this new causal graph, E can only affect Y, not X, and thus the correlation of X and Y will no longer be dependent on E. In other words, the correlation of x and Y in the new causal graph then becomes the invariant signal across different domains, e.g., from training to testing. When we learn the correlation in this new causal graph, the learned model can generalize to new environments since this relationship also holds at testing. In the next section, we explain how we compute P(Y|do(X = x)) from the observational data.

Figure 5: Causal Graph Before and After Do-Calculus

In the causal graph, we use E, namely environment, to model the domain where the dataset is generated. For example, at training time, namely E = 0, this environment indicates, e.g., that the code is written by certain developers and for certain software applications. At testing or deployment time of the model, namely E = 1, the code may link to different developers and applications. Such environment-specific factors are modeled using U. It is a latent

4.3 Estimating Causality through Observational Data

Figure 5(a) models the joint distribution of vulnerability data from a data-generation point of view. Figure 5(b) presents the causal graph after performing do-calculus on X, allowing us to generalize across the domains. The challenge is that performing the intervention and obtaining the perfect, unbiased code is almost impossible. To estimate the causal effect without actually performing the intervention, we apply the backdoor criterion [21] to derive the causal effect P(Y|do(X)) from the given observational data of X and Y.

The goal of Algorithm 1 is to learn P(Y|R,M) from the training data. We use the embedding (r) of the original input x computed by an existing deep learning model. At line 4, the training iterates
through n epochs. For each labeled data point (x, y), we first obtain r of x at line 6. At line 7, we select x' that shares the targeted spurious feature t with x but differs in root cause r. This means (1) x' should have the same label as x; as shown in Section 3, spurious features are label-specific; (2) x' should share t with x, so that the model will more likely encode t in the M component of P(Y|R,M); and (3) x' should not be x, so that, training with (r, M_x', y) at line 9, the model will rely on the spurious feature in M_x' instead of r to make the prediction. Our final goal is to encode in the model the root cause of x in the R component, and the targeted spurious feature t, shared across x and x', in the M component.

The backdoor criterion tells us how to block all the non-causal paths from X to Y in the causal graph so that we can compute the causal effect. For example, in Figure 5(a), node U can cause X and U can cause Y; as a result, there can exist correlations between X and Y in the observational data. However, such correlations do not necessarily indicate a causal relation from X to Y. For example, junior developers may like to write their code with the variable name myvar, and junior developers may be more likely to introduce vulnerabilities into their code. In this case, we might often see that code with myvar is detected with a vulnerability in the dataset. However, myvar is not the cause of the vulnerability; the correlation of myvar and vulnerability is spurious. To remove such spurious correlations, the backdoor criterion states that we need to condition X on M1 and M2; as a result, we can remove the incoming edges to X and thus the impact from U to X. Mathematically, we will have the following formula for Figure 5(b).

In Table 4, we designed different ways of selecting x' at line 7: Settings Var1-Var2 for removing the spurious feature of variable names, Settings API1-API3 for API names, and Setting Var+API for both variable and API names.
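Selection settings Var1/API1 (pick the same-label x' sharing the most spurious names with x) reduce to a small maximization. A simplified Python sketch in which samples are represented only by their name sets, an assumption for illustration (the real implementation works on tokenized code):

```python
def select_x_prime(x_names, x_label, dataset, spurious_names):
    """Var1/API1-style selection (Table 4): among training samples with the
    same label as x (excluding x itself), pick the one sharing the largest
    number of spurious variable/API names with x."""
    x_spurious = x_names & spurious_names
    candidates = [(names, label) for names, label in dataset
                  if label == x_label and names != x_names]
    # maximize the overlap of spurious names with x
    return max(candidates, key=lambda c: len(c[0] & x_spurious))[0]

# Toy data: (identifier set, vulnerability label) pairs.
spurious = {"buf", "memcpy", "len"}
dataset = [({"buf", "len", "i"}, 1),
           ({"memcpy", "dst"}, 1),
           ({"buf", "memcpy", "len"}, 0)]
x = ({"buf", "len", "n"}, 1)
x_prime = select_x_prime(x[0], x[1], dataset, spurious)
```

The same-label constraint mirrors condition (1) above: since spurious features are label-specific, x' must come from x's own class for M_x' to carry the targeted feature t.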
The first idea is to select x' from the training dataset such that it shares the maximum number of spurious variable or API names with x. See the rows of Var1 and API1. The second idea is to construct an example x' such that x and x' share k variable/API names. See the rows of Var2 and API2. Similarly, Setting API3 constructs x' to have the top-k spurious API names. We have also tried this approach for variable names, but it does not report good results. In Setting Var+API, for removing both spurious features, we take x' from Setting Var1 and then perform the transformation using Setting API3. This achieved the best results in our experiments compared to other combinations.

Algorithm 1: Causal Learning Model Training
1  Input: Training dataset D over (X, Y); targeted spurious feature t
2  Phase 1: Compute P̂(R|X) by the SOTA code representation model
3  Phase 2:
4  for i ← 1 to n epochs do
5    foreach (x, y) in D do
6      Extract r for x using P̂(R|X).

P(Y|do(X = x)) = Σ_{M1,M2} P(Y|X = x, M1, M2) P(M1) P(M2)    (1)

To compute Eq. (1), we adopted an algorithm from [9]. First, we train a model that predicts Y from R and M (we denote M1, M2 as M for simplicity), denoted as P(Y|R,M), where R is the representation of X computed by an existing deep learning model, and M is the representation of X that encodes the spurious features. For example, if we encode each token of X into M, we will have variable names and API names encoded in M. Since R is a representation learned by models, the algorithm assumes R encodes both causal and spurious features. The goal of taking R and M jointly for training is to especially encode the targeted spurious feature(s) in the M component of the model P(Y|R,M).

At inference, we use the model P(Y|R,M) to compute P(Y|do(X = x)) via Eq. (1). The backdoor criterion instructs that we first randomly sample examples from X that contain different spurious features. We then marginalize over the spurious features through weighted averaging, that is, Σ_{M1,M2} P(Y|X = x, M1, M2) P(M1, M2). This can be computed using the model trained above, P(Y|R,M). Intuitively, the spurious features in the data have been cancelled out
due to the weighted averaging (please refer to [22] to understand why this is the case). The model will make predictions based on the remaining signals, which are the causal features left in R.

7      Select x' by one of the selection procedures in Table 4
8      Encode x' to M_x'
9      Train P(Y|R,M) using (r, M_x', y) via minimizing the classification loss
10   end
11 end
12 Output: Model P̂(R|X) and P(Y|R,M)

5 THE ALGORITHMS OF CAUSAL VULNERABILITY DETECTION

In this section, we present the algorithms that compute the terms of Eq. (1). In Algorithm 1, we train a model of P(Y|R,M). In Algorithm 2, we show how to apply P(Y|R,M) and use the backdoor criterion to undo the spurious features during inference.

5.1 Training P(Y|R,M)

Algorithm 1 takes as input the training dataset D as well as the spurious feature(s) we aim to remove, namely the targeted spurious feature(s) t. For example, t can be variable names and/or API names, as presented in Section 3.

Table 4: Selection Procedures for x' in Algorithm 1 (x' has the same label as x)

Variable name, Var1: select x' which shares the maximum number of spurious variable names with x.
Variable name, Var2: randomly select x'; replace at most k variable names of x' with the variable names from x.
API name, API1: select x' which shares the maximum number of spurious API names with x.
API name, API2: randomly select x'; randomly select k APIs from x and insert them in x' as dead code.
API name, API3: randomly select x'; pick k APIs from the top 10% most frequent spurious APIs and insert them in x' as dead code.
Variable and API names, Var+API: select x' based on setting Var1; then insert dead code according to setting API3.

5.2 Undo Spurious Features during Inference

In Algorithm 2, we explain our inference procedure. To predict the label for a function x, we first extract r at line 2. Existing work directly predicts the output y using r. Since there are both spurious features and core features in r, both of them are going to be used to predict the vulnerability. Here, we will use our causal algorithm to
remove the spurious features in our inference. The key intuition is to cancel out the contribution of the spurious features by averaging over the predictions from all kinds of spurious features.

6.2 Datasets and Models

We considered two vulnerability detection datasets: Devign [27] and Big-Vul [12]. Devign is a balanced dataset consisting of 27,318 examples (53.22% vulnerable) collected from two different large

To do so, at line 4, we randomly sample different x' K times from the training
dataset D. At line 8, we then compute Eq. (1). We assumed a uniform distribution for P(M_x'). Finally, at line 9, we make the prediction; argmax_y means selecting the label that has the higher probability among the vulnerable/non-vulnerable classes.

Algorithm 2: Causal Learning Model Inference
1  Input: Query x; training dataset D over (X, Y); models P̂(R|X) and P̂(Y|R,M)
2  Extract r for x using P̂(R|X).
3  for i ← 1 to K do
4    Randomly select x'_i from the training set D.
5    Extract spurious features M_x'_i from x'_i.
6    Compute P̂(y_i | r, M_x'_i)
7  end
8  Calculate the causal effect P(y|do(X = x)) = Σ_i P̂(y_i | r, M_x'_i) P(M_x'_i)
9  Output: Class ŷ = argmax_y P(y|do(X = x)).

C programming-based projects: Qemu and FFmpeg. As Zhou et al. [27] did not provide any train, test, and validation split, we used the split published by the CodeXGLUE authors [19]. The Big-Vul dataset is an imbalanced dataset consisting of 188,636 examples (5.78% vulnerable) collected by crawling the Common Vulnerabilities and Exposures (CVE) database. For this dataset, we used the partitions published by the LineVul authors [14].

We evaluated the three SOTA models, CodeBERT, GraphCodeBERT, and UniXcoder, as the model P(R|X) in Algorithm 1. The representation R is extracted from the output embedding of the last hidden layer of these models. To construct the network P̂(Y|R,M), we first pass x' through the first four encoder blocks of these transformer models. The obtained output embedding is considered as M. We used the fourth encoder block (empirically, it is the best layer we found) from the twelve blocks to compute M because the early layers learn the low-level features, and spurious features tend to be at the low-level features [20]. M is then concatenated with R. Finally, a 2-layer fully-connected network is used to predict Y.

6 EXPERIMENTAL SETUP

6.1 Implementation

7 EVALUATION

We studied the following research questions:
RQ1: Can CausalVul improve the accuracy of the model?
RQ2: Can CausalVul improve the robustness and generalization of the model?
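Algorithm 2's marginalization (lines 3-8) is, in effect, an average of the model's prediction over K sampled spurious-feature encodings under a uniform P(M_x'). A toy numeric sketch; the stand-in scoring function and the scalar "encodings" are assumptions, not the trained network:

```python
import random

def causal_predict(r, training_set, p_y_given_r_m, K=40, seed=0):
    """Backdoor-style inference: average P(y=1 | r, M_x') over K randomly
    sampled x', assuming a uniform P(M_x') = 1/K (Algorithm 2)."""
    rng = random.Random(seed)
    probs = [p_y_given_r_m(r, rng.choice(training_set)) for _ in range(K)]
    p_vuln = sum(probs) / K               # estimate of P(y=1 | do(X = x))
    return 1 if p_vuln >= 0.5 else 0      # argmax over {0, 1}

# Toy stand-in model: prediction driven mostly by r, nudged by M.
toy_model = lambda r, m: min(1.0, max(0.0, r + 0.1 * m))
train_M = [-1, 0, 1]                      # toy spurious encodings
label = causal_predict(0.8, train_M, toy_model)
```

Because the M_x' contributions average out, the decision is dominated by r, which is the intuition behind cancelling the spurious features. K = 40 matches the value the paper reports as working best.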
WeusePytorch2.0.13withCudaversion12.1andtransformers4 RQ3:(ablationstudies)Howdodifferentdesignchoicesaffectthe librarytoimplementourmethod.Allthemodelsarefine-tuned performanceofCausalVul? onsingleNVIDIARTXA6000GPU,Intel(R)Xeon(R)W-2255CPU with64GBram.Thepretrainedweightsandtokenizersofthetrans- 7.1 RQ1:ModelAccuracy formermodelswereobtainedfromthelinkprovidedbytheoriginal ExperimentalDesign.ToanswerthisRQ,weimplementedour authors5.WeusedAdamOptimizertofine-tuneourmodels.The causalapproachontopofthreestate-of-the-arttransformer-based modelsweretraineduntil10epochswhilethebatchsizeissetto vulnerabilitydetectionmodels-CodeBERT,GraphCodeBERT,and 32.Thelearningrateissetto2e-5.Wetrainedthemodelswithour UniXcoder.Weusedthedefault(w/ocausal)versionsofthesemod- trainingdataandthebestfine-tunedweightisselectedbasedon elsasbaselines.WeaddressthesedefaultversionsasVanillamodels. thef1scoreagainstthevalidationset.Weusedthisweightduring WeexperimentedwithallthecausalsettingsshowninTable4, theevaluationofourtestset.WesetK=40(Algorithm2line3),as namelyVar1andVar2,API1,API2andAPI3,andVar+API.Weeval- itreportedthebestresultsamongthevalueswetried. uatedboththevanillamodelsandthecausalmodelsonthesame unperturbedoriginaltestsetandusethemetricsofF1.Wetrained 3https://github.com/pytorch/pytorch 4https://github.com/huggingface/transformers allthemodelsthreetimeswithdifferentrandomseedsandconsider 5https://github.com/microsoft/CodeBERT theaverageF1scoreasourfinalscore(thisisdoneforalltheRQs).ICSE’24,April14–20,2024,Lisbon,Portugal MdMahbuburRahman,IraCeka,ChengzhiMao,SaikatChakraborty,BaishakhiRay,andWeiLe Table5:TheF1scoreofCausalVulandthevanillamodelon namesperformtheworstinvanillaCodeBERTandGraphCode- thetestsetofDevignandBig-Vuldataset. 
BERTmodels,respectively.Hence,weselecttheTop5forcompari- sonwithCodeBERTandtheTop10tocomparewithGraphCode- Settings CodeBERT GraphCodeBERT UniXcoder BERT.Similarly,API1–API3runsontheworstPerturbAPI dataset, andVar+APIrunsontheworstPerturbJointdatasetforthecorre- Devign Big-Vul Devign Big-Vul Devign Big-Vul spondingmodels. Vanilla 0.61 0.38 0.63 0.38 0.64 0.39 Toinvestigatethegeneralizationperformanceofthemodels,we CausalVul evaluatedthemodeltrainedontheDevigndatasetusingtheBig- Vultestdataset(excludingoverlappedprojectFFMPEG).Similarly, Var1 0.65 0.40 0.63 0.38 0.66 0.40 wetrainedonBig-VulandtestedontheDevigndataset.Forboth Var2 0.64 0.40 0.63 0.40 0.65 0.41 experiments,weexperimentedallthesettingsinTable4andused API1 0.63 0.40 0.62 0.41 0.64 0.40 theF1asmetrics. API2 0.65 0.40 0.64 0.40 0.66 0.40 ResultsforRobustness:Table6showstherobustnessperfor- API3 0.66 0.40 0.65 0.38 0.68 0.40 manceinthreeblocks:theupperblockpresentstheresultsfrom runningwiththeworstPerturbVardata,andsimilarly,themiddle Var+API 0.65 0.41 0.66 0.39 0.66 0.40 andlowerblockspresenttheresultsfromrunningwiththeworst
PerturbAPI and with the worst PerturbJoint data, respectively. Between Var1 and Var2, Var1 performs better on Devign data and shows 6, 3, and 2 percentage points improvement in F1 with the CodeBERT, GraphCodeBERT, and UniXcoder models, respectively. For the Big-Vul data, Var1 works better on the CodeBERT model with 1 percentage point improvement, while Var2 is better for the other two models with 4 and 3 percentage points improvement, respectively. Overall, both the Var1 and Var2 approaches work better than the vanilla model in terms of robustness. Among API1, API2, and API3, API3 works better on Devign data and demonstrates 10, 5, and 22 percentage points improvement with the three models, respectively. For the Big-Vul data, API2 works better with the CodeBERT model and improves by 3 percentage points over CodeBERT vanilla. In GraphCodeBERT, both API1 and API2 show similar performance and show 2 percentage points improvement. The Var+API setting shows a similar improvement trend over the vanilla performance.

Result: Table 5 shows the result. CausalVul outperforms the vanilla models for all the settings, all the datasets, and all the models. For Devign data, CausalVul Var1, API3, and Var+API show 4, 5, and 4 percentage points improvement, respectively, in terms of the F1 score against the CodeBERT vanilla model. In the UniXcoder models, the improvements for these three approaches are 2, 4, and 2 percentage points, respectively. With the GraphCodeBERT model, our approaches API3 and Var+API show 2 and 3 percentage points improvement against the vanilla model. Overall, our approaches show 2-5 percentage points F1 score improvement against the vanilla model. For the Big-Vul dataset, our causal approaches with the CodeBERT model show a 2-3 percentage points improvement, and so do the GraphCodeBERT and UniXcoder models. To the best of our knowledge, these are the best-reported vulnerability detection results on these widely studied datasets6. One may suspect that while ignoring spurious features, a model may
reducesomeofthein-distributionaccuracies,asthespuriousfea- InFigure6,weshowthepredictedprobabilitydensityforthe turesalsocontributeinbenignsettings.Incontrast,tooursurprise, Devignvulnerabledata.Ineachsubplot,X-axisisthepredicted wefindthatCausalVulislearningmoresignificantcausalsignals, probabilityofbeingvulnerable,andY-axisisthecountoftheex- whichcompensateforthelossofspuriousfeaturesandalsoimprove ampleswhosepredictionsarethatprobability.Theorangelines in-distributionaccuracy. plotthecausalmodelandthebluelinesplotthevanilamodel.The figuredemonstratesthattheoverallpredictionprobabilityforvul- Result:RQ1.CausalVuloutperformsotherpre-trainedmodels, nerabledataincreases,whichmeansthemodelismoreconfidentin suggesting causal learning focuses on the root causes of the predictingvulnerabilities.Experimentresultshowthattheoverall vulnerabilitiesbylearningtoignorespuriousfeatures.Overall, differenceoftheprobabilitydensitybetweenvanillaandthecausal ourcausalsettingsshowupto6%improvementinF1inDevign approachisstatisticallysignificantwith𝑝−𝑣𝑎𝑙𝑢𝑒 <<<0.05.with Datasetand7%improvementinF1inBig-Vuldataset. varyingeffectsize,asdocumentedatTable7. Wealsoinvestigatehowmanyexamplesfromrobustnessdataare 7.2 RQ2:ModelRobustnessandGeneralization predictedincorrectlyinVanillamodelslikeFigure1andpredicted correctly in CausalVul. Table 8 shows that CausalVul correctly ExperimentalDesign.Forrobustnessevaluation,wecomparethe predictasignificantamountofdatawhicharepredictedincorrectly performanceofthecausalmodelswiththevanillamodelonthe inVanillamodels. threeperturbeddatasetspresentedinSection3.Var1andVar2run Results for Generalization: We present the generalization onthePerturbVardataset,whichhastheworstperformanceonthe resultsinTable9.TheDevigncolumndefinestheF1scoreofthe correspondingvanillamodelasperTable1.Forexample,PerturbVar Big-Vul test set when evaluated on the model trained with the datasetperturbedwithTop5andTop10mostfrequentvariable Devign train set. 
Similarly, the Big-Vul column presents the F1 scoreoftheDevigntestsetevaluatedonthemodelthatistrained withtheBig-Vultrainset.Ourresultsshowthatwhenthemodel istrainedontheDevigntrainsetandtheBig-Vultestsetisused 6oursettingkeeptheinputsintheiroriginalformwithoutanyperturbationor astheout-of-distribuition(OOD),ourcausalapproachshows1- normalization. 2percentagepointsimprovementforCodeBERTandUniXcoderTowardsCausalDeepLearningforVulnerabilityDetection ICSE’24,April14–20,2024,Lisbon,Portugal Table6:TheRobustnessperformanceofCausalVulandthe Table9:ThegeneralizationperformanceofCausalVuland vanillamodels. thevanillamodels. Settings CodeBERT GraphCodeBERT UniXcoder Settings CodeBERT GraphCodeBERT UniXcoder Devign Big-Vul Devign Big-Vul Devign Big-Vul Devign Big-Vul Devign Big-Vul Devign Big-Vul Vanilla 0.52 0.22 0.58 0.26 0.52 0.19 Vanilla 0.11 0.08 0.11 0.10 0.10 0.07 Var1 0.58 0.23 0.61 0.28 0.54 0.21 Var1 0.12 0.17 0.11 0.11 0.12 0.13 Var2 0.55 0.22 0.58 0.30 0.53 0.22 . Var2 0.12 0.18 0.11 0.14 0.12 0.15 Vanilla 0.52 0.10 0.47 0.09 0.35 0.09 API1 0.12 0.17 0.11 0.12 0.12 0.11 API1 0.52 0.13 0.35 0.11 0.45 0.14 API2 0.12 0.16 0.11 0.14 0.12 0.18 API2 0.56 0.13 0.39 0.11 0.49 0.13 API3 0.13 0.15 0.11 0.10 0.12 0.21 API3 0.62 0.10 0.52 0.09 0.57 0.13
Var+API 0.12 0.17 0.11 0.12 0.12 0.18 Vanilla 0.52 0.06 0.45 0.07 0.22 0.05 Var+API 0.55 0.07 0.54 0.08 0.31 0.10 theevaluationdata,learningtoignoringthemhelpsCausalVulto significantlyimprovetheperformance. Result:RQ2.CausalVulshowsupto62%and100%improvement inDevignandBig-Vulrobustnessdatarespectively.CausalVul alsoimprovesthegeneralizationperformanceupto100%and 200%forbothdatasetsrespectively. 7.3 RQ3:AblationStudies ExperimentDesign.InthisRQ,weinvestigatethedesignchoices forCausalVul.Inthefirstsetting,weuseK=1andset𝑥′ = 𝑥 in Algorithm2.Here,weinvestigateifwedon’tusemarginalization, Figure6:PredictedprobabilitydensityoftheDevigndata howourapproachperforms.Weevaluatedthemodelsonthero- fromVanillaandCausalapproach. bustnesstestingdata.Duetospace,weshowtheresultsforthe settingofVar+API. Table7:Cohen’sdeffectsizeofthedifferenceoftheproba- Inthesecondsetting,weinvestigatedhowourapproachper- bilitydensitybetweenVanillaandCausalapproachforvul- formswhenusingdifferentearlylayerstorepresent𝑀.Hence,we nerabledata. extractMfromthefirst,second,third,andfourthlayersanduse thatMinourcausalapproachrespectively. Setting CodeBERT GraphCodeBERT UniXcoder Results.InFigure7,weusedtheprobabilitydensityplotssimilar Devign Big-Vul Devign Big-Vul Devign Big-Vul toFigure6.TheorangelinesplotK=40,andthebluelinesplot K=1.Fromtheresults,wecanclearlyseethatforbothvulnerable var trivial midium trivial small trivial small andnon-vulnerablelabels,K=40learnedbetterandreportedmore api large midium small small large small confidentpredictionstowardsgroundtruthlabels. combine large small large small small midium Table8:Numberofpertubedexampleswhosepredictions areincorrectinVanillamodel(seeFigure1)butcorrectin CausalVul. 
Dataset CodeBERT GraphCodeBERT UniXcoder Devign 298 255 368 Big-Vul 205 93 130 models.ButwhentheDevigntestsetisusedastheOODdataon themodeltrainedwithBig-Vuldata,CodeBERTmodelshows7-10 percentagepointsimprovement,GraphCodeBERTmodelshowsat most4percentagepointsimprovementandtheUniXcodermodel Figure7:PredictionProbabilityDensityofCausalVulforK=1 shows6-14percentagepointsimprovement. andK=40. InthisRQ,weshowthatcausallearninghasthepotentialof significantlyimprovingtherobustaccuracyandgeneralization.In Table10demonstratestheresultofusingdifferentearlylayersto suchasetting,sincethespuriousfeaturesmaynotbepresentin extract𝑀.WechoosetheVar+APIsettingsforallmodelstopresentICSE’24,April14–20,2024,Lisbon,Portugal MdMahbuburRahman,IraCeka,ChengzhiMao,SaikatChakraborty,BaishakhiRay,andWeiLe Table10:TheperformanceoftheCausalApproachwhen Encoder-basedmodelssuchasCodeBERT[13,16]oftenemploy differentearlylayerisused. themasked-language-model(MLM)pre-tainingobjective;someare coupledwithacontrastivelearningapproach[5,10],whileothers Devign Big-Vul aimtomakepre-trainingmoreexecutionaware[11],bi-modal[4] Model EarlyLayer (Var+API) (Var+API) ornaturalized[3].Vulnerabilitydetectionhasbeenoneoftheim- portantdownstreamtasksforthesemodels.Inthispaper,weused CodeBERT 1 0.6494 0.4034 threerecentSOTAtransformer-basedmodels:GraphCodeBERT, CodeBERT 2 0.6528 0.4018 CodeBERT,andUniXcoderandshowthatcausalitycanfurther CodeBERT 3 0.6501 0.4026 improvetheirperformance. 
CodeBERT 4 0.6546 0.4055 Therehavebeenalsostudiesforvulnerabilitydetectionmod- GraphCodeBERT 1 0.6156 0.3916 elsregardingtheirrobustnessandgeneralization.Inrecentwork, GraphCodeBERT 2 0.6170 0.3903 Steenhoeketal.[23]evaluatedseveralSOTAvulnerabilitymod- GraphCodeBERT 3 0.6363 0.3908 elstoassesstheircapabilitiestounseendata.Furthermore,they GraphCodeBERT 4 0.6570 0.3927 touchonspuriousfeaturesandfoundsometokenssuchas"error" UniXcoder 1 0.5728 0.4049 and"printf"arefrequentlyusedtomakepredictions,whichcan UniXcoder 2 0.5720 0.4055 leadtomispredictions.Inanothersystematicinvestigationofdeep UniXcoder 3 0.5550 0.4056 learning-basedvulnerabilitydetection,Chakrabortyetal.[7]stated UniXcoder 4 0.6609 0.4058 thatvulnerabilitydetectionmodelsdidnotpickuponrelevantcode features,butinsteadrelyonirrelevantaspectsfromthetraining theresult.Forallthemodelsanddatasets,layerfourreportedthe distribution(suchasspecificvariables)tomakepredictions.Our bestperformance. workdesignednovelperturbationstoconfirmthehypothesized Result:RQ3.Ourresultsshowthatmarginalization(backdoor spuriousfeatures,andwefounddifferentnamesareusedfordif- criterion)helpsthemodeltofocusonthecausalfeaturesinstead ferentlabelsasspuriousfeatures.Noneoftheexistingworkhave ofspuriousfeatures.Ontheotherhand,wefoundLayerfouris conductedsuchstudies. bettertousetocompute𝑀from𝑥′thantheearlythreelayers. CausallearninginSE:Toourknowledge,applyingcausalityis relativelynewinSE.Citoetal.[9]isthemostrelevantrecentwork thatinvestigatesperturbationsonsourcecodewhichcauseamodel 8 THREATSTOVALIDITY to“changeitsmind”.Thisapproachusesamaskedlanguagemodel Todiscoverspuriousfeatures,ourexperimentdesignfollowsthe (MLM)togenerateasearchspaceof"natural"perturbationsthat
literature [3] and ensures our perturbation follows consistency, naturalness, and semantic-preservation.

Deep learning models may report improved results due to a better random seed. All of our experiments have been run with three random seeds. Our causal models have consistently shown improvement across all the datasets and all the models. We have done a statistical test to show that our improvement is statistically significant. In addition to F1, our probability density plots shown in Figures 6 and 7 also strongly demonstrate our improvement.

The causal learning makes the assumption that the code representation R learned the causal features. Although we are not sure if that is the case, we see the improvement of our results in all settings. Our evaluation worked on two real-world vulnerability datasets, including both balanced and imbalanced data, and the three SOTA models. In the future, we plan to experiment with more datasets and models.

9 RELATED WORK

Deep learning for Vulnerability: Deep learning vulnerability models can be separated into graph neural-network (GNN) based or transformer-based models. GNN-based models capture AST, control-flow, and data-flow information into a graph representation. Recent GNN-based models [6, 7, 17, 25, 27] have proposed statement-level vulnerability prediction. In contrast, the transformer models are pre-trained in a self-supervised learning setting. They can be categorized by three different designs: encoder-based, decoder-based [8], and hybrid architectures [3, 4, 15] that combine elements from both approaches.

will flip the model prediction. These natural perturbations are called "counterfactual explanations." Our work also seeks natural perturbations, and uses variable names and API names in programs to perform the perturbation. However, our work is different in that we require our perturbation to be semantic-preserving. Furthermore, our goal of promoting models to flip their decision is to discover spurious features, instead of explaining the cause of a bug. There has also been orthogonal work that uses counterfactual causal analysis in the context of ML system debugging [18, 26]. Unlike our work, these approaches do not use the backdoor criterion to remove spurious features.

Causal Learning in Other Domains: Our work drew inspirations from Mao et al. [20]. This work addresses robustness and generalization in the vision domain using causal learning. They use "water bird" and "land bird" as two domains and show that by applying the backdoor criterion, the causal models can learn "invariants" of the birds in the two domains and achieve better generalization. Their work does not discover spurious features, and their causal learning does not target spurious features.

Towards Causal Deep Learning for Vulnerability Detection — ICSE '24, April 14–20, 2024, Lisbon, Portugal

10 CONCLUSIONS AND FUTURE WORK

This paper proposed the first step towards causal vulnerability detection. We addressed several important challenges for deep learning vulnerability detection. First, we designed novel perturbations to expose the spurious features the deep learning models have used for prediction. Second, we formulated the problem of deep learning vulnerability detection using causality and do-calculus so that we can apply causal learning algorithms to remove spurious features and push the models to use more robust features. We designed comprehensive experiments and demonstrated that CausalVul improved accuracy, robustness and generalization of vulnerability detection. In the future, we plan to discover more spurious features and explore causal learning for other applications in software engineering.

11 ACKNOWLEDGEMENTS

We thank the anonymous reviewers for their valuable feedback. This research is partially supported by the U.S. National Science Foundation (NSF) under Award #2313054.

REFERENCES
[1] [n.d.]. Cybercrime To Cost The World $10.5 Trillion Annually By 2025. https://cybersecurityventures.com/hackerpocalypse-cybercrime-report-2016/.
[2] [n.d.]. Microsoft Exchange Flaw: Attacks Surge After Code Published. https://www.bankinfosecurity.com/ms-exchange-flaw-causes-spike-in-trdownloader-gen-trojans-a-16236.
[3] 2022. NatGen: Generative Pre-training by "Naturalizing" Source Code — Code and Scripts for Pre-Training. https://doi.org/10.5281/zenodo.6977595
[4] Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Unified Pre-training for Program Understanding and Generation. In 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
[5] Nghi D. Q. Bui, Yijun Yu, and Lingxiao Jiang. 2021. Self-Supervised Contrastive Learning for Code Retrieval and Summarization via Semantic-Preserving Transformations (SIGIR '21). Association for Computing Machinery, New York, NY, USA, 511–521. https://doi.org/10.1145/3404835.3462840
[6] Sicong Cao, Xiaobing Sun, Lili Bo, Rongxin Wu, Bin Li, and Chuanqi Tao. 2022. MVD: Memory-Related Vulnerability Detection Based on Flow-Sensitive Graph Neural Networks. In Proceedings of the 44th International Conference on Software Engineering (Pittsburgh, PA) (ICSE '22). 1456–1468. https://doi.org/10.1145/3510003.3510219
[7] Saikat Chakraborty, Rahul Krishna, Yangruibo Ding, and Baishakhi Ray. 2021. Deep Learning based Vulnerability Detection: Are We There Yet. IEEE Transactions on Software Engineering (2021), 1–1. https://doi.org/10.1109/TSE.2021.3087402
[8] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, et al. 2021. Evaluating Large Language Models Trained on Code. arXiv:2107.03374 [cs.LG]
[9] Jürgen Cito, Isil Dillig, Vijayaraghavan Murali, and Satish Chandra. 2022. Counterfactual Explanations for Models of Code. In Proceedings of the 44th International Conference on Software Engineering: Software Engineering in Practice (Pittsburgh, Pennsylvania) (ICSE-SEIP '22). Association for Computing Machinery, New York, NY, USA, 125–134. https://doi.org/10.1145/3510457.3513081
[10] Yangruibo Ding, Luca Buratti, Saurabh Pujar, Alessandro Morari, Baishakhi Ray, and Saikat Chakraborty. 2022. Towards Learning (Dis)-Similarity of Source Code from Program Contrasts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 6300–6312.
[11] Yangruibo Ding, Ben Steenhoek, Kexin Pei, Gail Kaiser, Wei Le, and Baishakhi Ray. 2023. TRACED: Execution-aware Pre-training for Source Code. arXiv:2306.07487 [cs.SE]
[12] Jiahao Fan, Yi Li, Shaohua Wang, and Tien N. Nguyen. 2020. A C/C++ Code Vulnerability Dataset with Code Changes and CVE Summaries. In Proceedings of the 17th International Conference on Mining Software Repositories (Seoul, Republic of Korea) (MSR '20). Association for Computing Machinery, New York, NY, USA, 508–512. https://doi.org/10.1145/3379597.3387501
[13] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. In Findings of the Association for Computational Linguistics: EMNLP 2020. 1536–1547.
[14] Michael Fu and Chakkrit Tantithamthavorn. 2022. LineVul: A Transformer-based Line-Level Vulnerability Prediction. In 2022 IEEE/ACM 19th International Conference on Mining Software Repositories (MSR). 608–620. https://doi.org/10.1145/3524842.3528452
[15] Daya Guo, Shuai Lu, Nan Duan, Yanlin Wang, Ming Zhou, and Jian Yin. 2022. UniXcoder: Unified Cross-Modal Pre-training for Code Representation. arXiv:2203.03850 [cs.CL]
[16] Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Jian Yin, Daxin Jiang, et al. 2021. GraphCodeBERT: Pre-training Code Representations with Data Flow. In International Conference on Learning Representations.
[17] David Hin, Andrey Kan, Huaming Chen, and M. Ali Babar. 2022. LineVD: Statement-Level Vulnerability Detection Using Graph Neural Networks. In Proceedings of the 19th International Conference on Mining Software Repositories (Pittsburgh, PA) (MSR '22). 596–607. https://doi.org/10.1145/3524842.3527949
[18] Md Shahriar Iqbal, Rahul Krishna, Mohammad Ali Javidian, Baishakhi Ray, and Pooyan Jamshidi. 2022. Unicorn: Reasoning about Configurable System Performance through the Lens of Causality. In Proceedings of the Seventeenth European Conference on Computer Systems (Rennes, France) (EuroSys '22). Association for Computing Machinery, New York, NY, USA, 199–217. https://doi.org/10.1145/3492321.3519575
[19] Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, et al. 2021. CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation. arXiv preprint arXiv:2102.04664 (2021). https://arxiv.org/abs/2102.04664
[20] C. Mao, K. Xia, J. Wang, H. Wang, J. Yang, E. Bareinboim, and C. Vondrick. 2022. Causal Transportability for Visual Recognition. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Computer Society, Los Alamitos, CA, USA, 7511–7521. https://doi.org/10.1109/CVPR52688.2022.00737
[21] Judea Pearl. 2000. Causality: Models, Reasoning, and Inference.
[22] Judea Pearl and Elias Bareinboim. 2011. Transportability of Causal and Statistical Relations: A Formal Approach. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 25. 247–254.
[23] Benjamin Steenhoek, Md Mahbubur Rahman, Richard Jiles, and Wei Le. 2023. An Empirical Study of Deep Learning Models for Vulnerability Detection. arXiv:2212.08109 [cs.SE]
[24] Shiqi Wang, Zheng Li, Haifeng Qian, Chenghao Yang, Zijian Wang, Mingyue Shang, Varun Kumar, Samson Tan, Baishakhi Ray, Parminder Bhatia, Ramesh Nallapati, Murali Krishna Ramanathan, Dan Roth, and Bing Xiang. 2022. ReCode: Robustness Evaluation of Code Generation Models. arXiv:2212.10264 [cs.LG]
[25] Wenbo Wang, Tien N. Nguyen, Shaohua Wang, Yi Li, Jiyuan Zhang, and Aashish Yadavally. 2023. DeepVD: Toward Class-Separation Features for Neural Network Vulnerability Detection. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE). 2249–2261. https://doi.org/10.1109/ICSE48619.2023.00189
[26] Ziyuan Zhong, Zhisheng Hu, Shengjian Guo, Xinyang Zhang, Zhenyu Zhong, and Baishakhi Ray. 2022. Detecting Multi-Sensor Fusion Errors in Advanced Driver-Assistance Systems. In Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis (Virtual, South Korea) (ISSTA 2022). Association for Computing Machinery, New York, NY, USA, 493–505. https://doi.org/10.1145/3533767.3534223
[27] Yaqin Zhou, Shangqing Liu, Jingkai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks. In Advances in Neural Information Processing Systems, Vol. 32. 10197–10207.
A Systematic Evaluation of Automated Tools for Side-Channel Vulnerabilities Detection in Cryptographic Libraries

Antoine Geimer (Univ. Lille, CNRS, Inria, Lille, France; Univ. Rennes, CNRS, IRISA), Mathéo Vergnolle (Université Paris-Saclay, CEA, List, Gif-sur-Yvette, France), Frédéric Recoules (Université Paris-Saclay, CEA, List, Gif-sur-Yvette, France), Lesly-Ann Daniel (imec-DistriNet, KU Leuven, Leuven, Belgium), Sébastien Bardin (Université Paris-Saclay, CEA, List, Gif-sur-Yvette, France), Clémentine Maurice (Univ. Lille, CNRS, Inria, Lille, France)

Abstract

To protect cryptographic implementations from side-channel vulnerabilities, developers must adopt constant-time programming practices. As these can be error-prone, many side-channel detection tools have been proposed. Despite this, such vulnerabilities are still manually found in cryptographic libraries. While a recent paper by Jancar et al. shows that developers rarely perform side-channel detection, it is unclear if existing detection tools could have found these vulnerabilities in the first place. To answer this question we surveyed the literature to build a classification of 34 side-channel detection frameworks. The classification we offer compares multiple criteria, including the methods used, the scalability of the analysis or the threat model considered. We then built a unified common benchmark of representative cryptographic operations on a selection of 5 promising detection tools. This benchmark allows us to better compare the capabilities of each tool, and the scalability of their analysis. Additionally, we offer a classification of recently published side-channel vulnerabilities. We then test each of the selected tools on benchmarks reproducing a subset of these vulnerabilities as well as the context in which they appear. We find that existing tools can struggle to find vulnerabilities for a variety of reasons, mainly the lack of support for SIMD instructions, implicit flows, and internal secret generation. Based on our findings, we develop a set of recommendations for the research community and cryptographic library developers, with the goal to improve the effectiveness of side-channel detection tools.

ACM Reference Format: Antoine Geimer, Mathéo Vergnolle, Frédéric Recoules, Lesly-Ann Daniel, Sébastien Bardin, and Clémentine Maurice. 2023. A Systematic Evaluation of Automated Tools for Side-Channel Vulnerabilities Detection in Cryptographic Libraries. In Proceedings of 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS '23). ACM, New York, NY, USA, 15 pages.

1 Introduction

Implementing cryptographic algorithms is an arduous task. Beyond functional correctness, the developers must also ensure that their code does not leak potentially secret information through side channels. Since Paul Kocher's seminal work [83], the research community has combed through software and hardware to find vectors allowing for side-channel attacks, from execution time to electromagnetic emissions. The unifying principle behind this class of attacks is that they do not exploit the algorithm specification but rather physical characteristics of its execution. Among the aforementioned attack vectors, the processor microarchitecture is of particular interest, as it is a shared resource between multiple programs. By observing the target execution through microarchitectural components (e.g., cache [89, 140], branch predictor [6, 57], DRAM [99], CPU ports [9]), an attacker can deduce secret information beyond what is normally possible with classical cryptanalysis. Side-channel primitives using these components allow an attacker to reconstruct secret-dependent control flow and table look-ups. This problem is exacerbated in multi-core processors and VM environments, where execution can be shared concurrently by multiple actors, and in trusted execution environments, where the privileged control of untrusted operating systems can be leveraged to perform controlled-channel attacks [138].

Consequently, multiple countermeasures to timing and microarchitectural attacks have been developed in the literature. System-level approaches like StealthMem [80] modify the OS's behavior between context switches to minimize information sharing, while language-level approaches like type systems can be used to enforce proper information flow in the source code [12]. Hardware-based approaches also enable securing components by design [103, 144]. However, these approaches are hardly practical as they rely on either large source code rewrites, or require system or hardware modifications. Manually writing a program such that it is free of such microarchitectural leakage is thus, by far, the most commonly employed countermeasure in cryptographic libraries [1, 26]. In particular, the constant-time programming discipline [22, 26] (CT) consists in writing a program such that its control flow, memory accesses and operands of variable-time instructions do not depend on secret data, and is considered the de-facto standard to protect against (non-transient) timing and microarchitectural attacks. Constant-time programming is an arduous task as its recommendations go against usual programming practices and requires knowledge of the literature on side channels. Moreover, the programmer must be mindful of compiler optimizations not preserving constant-time code [52, 116] and libraries not following these practices at all [79].

Problem. In the past decade, the research community has investigated automated ways of checking whether a program is leakage-free or not. Many different approaches have been proposed [20, 51, 67, 86, 107, 133, 134], both static and dynamic, but despite this abundance of tools, side-channel vulnerabilities are still found regularly in cryptographic libraries [90]. Two factors could explain this paradox: either these tools were not able to find such vulnerabilities, or the developers did not use them. A recent survey by Jancar et al. [74] investigated the opinions of cryptographic library developers regarding side-channel detection frameworks. They found that while many were willing to include side channels in their threat models, very few actually used tools. While this provides answers for the second factor, the first one remains unexplored.

Goal. Our paper investigates this question by providing a thorough state of the art on side-channel detection frameworks and recent side-channel vulnerabilities. In particular, we restrict ourselves to vulnerabilities found in cryptographic libraries spanning five years (2017–2022). We only consider vulnerabilities exploited through passive microarchitectural side-channel attacks. Physical side channels [43, 64, 84, 106], active attacks (e.g., Rowhammer [81]) or transient execution attacks [44, 45, 104, 125] are out-of-scope. We tackle the following research questions:
RQ1 How to compare these frameworks, as their respective publications offer differing evaluations?
RQ2 Could an existing framework have detected these vulnerabilities found manually?
RQ3 What features might be missing from existing frameworks to find these vulnerabilities?

Contributions. The contributions can be summarized as follows:
(1) We present a qualitative classification of the state-of-the-art tools for side-channel vulnerability detection. We classify 34 frameworks depending on multiple parameters such as the methods used, the type of outputs given by the analysis, or the type of programs analyzed (Section 4);
(2) We compare a subset of these detection frameworks on a unified benchmark, comprised of representative cryptographic operations from 3 libraries, totaling 25 primitives. We found that asymmetric cryptographic primitives are still a challenge for most detection tools. This benchmark aims at ensuring a fair comparison between frameworks and helping develop future efforts (Section 7);
(3) We offer a classification of recently published side-channel vulnerabilities in cryptographic libraries, offering new insights into where to find potential vulnerabilities (Section 5);
(4) We verify whether 4 of these vulnerabilities could have been detected with the aforementioned frameworks. We conclude that support for SIMD instructions, implicit flows, and internal secret generation is crucial for effective side-channel detection tools (Section 8).
Our complete evaluation framework and results are open source [63].

2 Preliminaries

2.1 Scope

Language-based approaches for high-assurance cryptography — in particular used in the EverCrypt library [101, 145], FaCT [46], and Jasmin [12, 14] — provide provably functionally correct and constant-time implementations of cryptographic primitives and libraries, but require a complete code rewrite in a dedicated language. In this paper, we consider such language-based approaches out-of-scope and focus on tools that are applicable to off-the-shelf libraries.

Program transformation and repair tools have been proposed to transform insecure programs into (variants of) constant-time programs [7, 31, 35, 47, 85, 94, 105, 117, 136]. Because side-channel detection is not the main focus of these works, we only include them when they propose a novel detection phase.

Power side-channel attacks [84] exploit data-dependent differences in the power consumption of a CPU to extract secrets. These attacks usually assume a stronger attacker model than timing microarchitectural attacks, typically requiring physical access to a device. The tools we consider in this survey explicitly restrict to the latter attacker model, hence they cannot find vulnerabilities related to power-based attacks.

Interestingly, recent frequency throttling side-channel attacks [88, 130] exploit the fact that power consumption can also influence the execution time of a program via dynamic voltage and frequency scaling features of CPUs. These attacks effectively bring power side channels to the (micro-)architecture, making them exploitable without physical access, and blur the boundary between these two attacker models. We consider these attacks out-of-scope as they also affect constant-time programs and require different mitigations, traditionally used to protect cryptographic code against power side-channel attacks, such as masking [66, 93] or blinding [76].

Finally, we focus on side-channel detection tools for cryptographic code and exclude tools targeting other libraries [129, 143].

2.2 Related Work

The closest related work is a recent survey by Jancar et al. [59, 74], which analyzes how cryptographic developers ensure that their code is secure against microarchitectural attacks. In particular, this survey classifies existing tools for microarchitectural side-channel analysis, and identifies: (1) whether developers are aware of these tools or have experience with them, (2) what kind of tools developers are willing to use (specifically given the trade-off between strong guarantees and usability), (3) what are the common shortcomings hindering the adoption of these tools. In contrast, we focus on technical differences between these tools in order to compare their actual capabilities. We also select a subset of promising tools and experimentally assess their scalability and ability to detect recent vulnerabilities from cryptographic libraries.

Yuan et al. [143] compared 11 side-channel detection tools on 8 criteria, including some overlapping with ours. However, our classification aims at being more general and systematic by considering more general criteria and a broader set of tools.

Lou et al. presented a survey [90] characterizing hardware and software vectors in microarchitectural side-channel attacks. While their work is more exhaustive in terms of characterizing side-channel vulnerabilities in cryptographic libraries, our work is focused on vulnerabilities found in a recent period and its purpose is to understand why they were not found automatically.

Barbosa et al. [21] proposed a systematization of the literature and a taxonomy of tools for computer-aided cryptography. Their survey encompasses formal, machine-checkable approaches for design-level security, functional correctness, and side-channel security. In contrast, we focus on side-channel security but consider a broader set of tools, not restricted to provably secure approaches.

Complementary to our work, Ge et al. [62] proposed a taxonomy of timing microarchitectural attacks and defenses. They give an overview of microarchitectural components that are susceptible to side-channel attacks and classify attacks according to the degree of hardware sharing and concurrency involved. Finally, they survey existing hardware and software countermeasures against these attacks. Similarly, Jakub Szefer [119] proposed a survey of microarchitectural channels, attacks and defenses with a stronger focus on microarchitectural features enabling covert and side channels.

Buhan et al. [39] presented a taxonomy of tools for protecting against physical side-channel attacks such as power consumption or electromagnetic emanations. Their survey encompasses tools for physical side-channel leakage detection, verification, and mitigation for the pre- or post-silicon development stage. Conversely, our work focuses on (non-overlapping) tools for timing and microarchitectural side-channel detection in software.

3 Background

This section introduces the background to understand side-channel vulnerabilities, in particular the hardware and software vectors.

3.1 Microarchitectural side channels

Memory accesses and control flow that depend on the secret are the most commonly identified sources of leakage in cryptographic software. We detail common side-channel pitfalls encountered in cryptographic implementations. An exhaustive look at vulnerabilities in cryptographic libraries is presented in [90].

Control-flow. One example of secret-dependent control flow is the square-and-multiply implementation of modular exponentiation used in RSA, where the operation sequence depends on the bit value of the secret exponent, leading to timing attacks [83]. Scalar multiplication, the analogous operation for elliptic curve cryptography, suffers from a similar problem in double-and-add implementations. Variants of these implementations such as the sliding window and wNAF multiplication algorithms are preferred for their performance, but do not alleviate this vulnerability [24, 97]. To mitigate control-flow-based side-channel attacks, developers have resorted to either balancing [8] or linearizing [94] (i.e., eliminating) secret-dependent control flow. The former solution, employed for instance in the Montgomery ladder algorithm [77] for scalar multiplication, is particularly challenging to get right as it remains vulnerable to attacks exploiting port contention, branch predictors, and (instruction and data) cache attacks [139]. The latter is not a fool-proof solution either as it remains vulnerable to attacks exploiting secret-dependent memory accesses [68]. Modular inversion is another common operation that can be a source of side-channel leakage, in particular through the control flow of greatest common divisor (GCD) algorithms, such as the binary extended Euclidean algorithm (BEEA) [5]. Secret-dependent branches are also at the heart of padding oracles used to mount Bleichenbacher's attack [29] or Lucky 13 [11].

Memory access. The original AES proposal describes how the cipher can be efficiently implemented using pre-computed tables with an access offset depending on the secret key, making it particularly vulnerable to cache attacks [25]. The aforementioned sliding
window and wNAF multiplication algorithms also employ tables to store pre-computed values to speed up computations which, when accessed with a value derived from the secret, induce leakage through memory accesses [36, 89].

Operand values. Secret-dependent operand values can also be a source of leakage if they influence the program running time. Early termination in integer multiplication on ARM has been shown to induce a timing leak [70], with a similar problem being found in x86 floating-point instructions [16].

Microarchitectural attacks use shared microarchitectural components to deduce secret information from a program executing on the same physical core, CPU, or socket. By probing a component state, the attacker can observe the changes the victim's execution induced on that state, thus gaining information on the victim. The type and quantity of information gained depend on the component and the way it was probed, e.g., branch predictors can expose the direction of branches taken by a victim [5], port contention allows an attacker to deduce which instructions the victim executes [9], TLB attacks leak the victim's memory accesses at page-level resolution [68].
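The secret-dependent control-flow and operand-value leaks described above can be illustrated with a toy model in which "execution time" is approximated by a loop-iteration count. This is an illustrative sketch, not an artifact of the paper — real constant-time code must be written in a low-level language and re-checked after compilation:

```python
# Toy model (illustrative only): count loop iterations as a stand-in for
# execution time, to show why a secret-dependent early exit leaks.

def leaky_compare(secret, guess):
    """Early-exit comparison: the iteration count reveals the matching prefix."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:                     # secret-dependent branch
            return False, steps
    return len(secret) == len(guess), steps

def ct_compare(secret, guess):
    """Constant-time style: always scans everything, no secret-dependent branch."""
    diff = len(secret) ^ len(guess)
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        diff |= s ^ g                  # accumulate differences branch-free
    return diff == 0, steps

secret = b"hunter2"
_, fast = leaky_compare(secret, b"Xunter2")   # mismatch at byte 0: 1 step
_, slow = leaky_compare(secret, b"hunterX")   # mismatch at byte 6: 7 steps
assert fast < slow                 # "time" depends on how much of the secret matches
_, t1 = ct_compare(secret, b"Xunter2")
_, t2 = ct_compare(secret, b"hunterX")
assert t1 == t2                    # same work whatever the secret is
```

In practice one would rely on a vetted primitive such as Python's `hmac.compare_digest` rather than hand-rolling the comparison; the sketch only makes the leakage pattern visible.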
By far the most widely used component in attacks remains the cache. Multiple techniques have been developed to attack caches, to retrieve both the victim's control flow and memory accesses, such as Prime+Probe [89, 97] and Flush+Reload [140]. Some of these attacks can also be run remotely [37, 38]. They can be especially powerful in the context of trusted execution environments, such as Intel SGX, where the untrusted operating system can be leveraged to perform controlled-channel attacks, enabling high-resolution and low-noise side channels [40, 124, 138]. A complete overview of microarchitectural side-channel attacks can be found in [62, 119].

3.2 Side-channel vulnerabilities

The hardware vector represents only one aspect of side channels. To be exploitable, software must leak sensitive data in some way.

4 Classifying side-channel vulnerability detection frameworks

As the concept of microarchitectural attacks has gathered attention only relatively recently, all the approaches we present here are less than a decade old. We structure this section in two large categories, static and dynamic, that represent a split in approaches but also in research communities: the first closer to program verification and the second to bug-finding. However, we strive to compare these approaches using broader parameters. Table 1 gives a comparison of the 34 detection frameworks we consider in this survey.

Criteria. In addition to comparing frameworks by their type (static or dynamic), we give a description of the precise method employed. While our efforts are independent from [59], there is a significant

Table 1: Comparison table of vulnerability detection frameworks.
Ref Year Tool Type Methods Scal. Policy Sound Input L W E B Available
[86] 2010 ct-grind Dynamic Tainting CT Binary ✓ ✓
[15] 2013 Almeida et al.
Static Deductive verification ● CT ◐ C source
[55] 2013 CacheAudit Static Abstract interpretation ○ CO ● Binary ✓ ✓
[22] 2014 VirtualCert Static Type system ○ CT ◐ C source ✓ ✓
[71] 2015 Cache Templates Dynamic Statistical tests ○ CO ● Binary ✓ ✓
[13] 2016 ct-verif Static Logical verification ○ CT ○ LLVM ✓
[108] 2016 FlowTracker Static Type system ◐ CT ● LLVM ✓ ✓
[56] 2017 CacheAudit2 Static Abstract interpretation ◐ CT ● Binary ✓
[28] 2017 Blazy et al. Static Abstract interpretation ○ CT ● C source
[17] 2017 Blazer Static Decomposition ◐ CR ● Java ✓
[48] 2017 Themis Static Logical verification ◐ CR ● Java ✓ ✓
[128] 2017 CacheD Dynamic DSE ◐ CO ● Binary ✓ ✓
[137] 2017 STACCO Dynamic Trace diff ◐ CR ○ Binary ✓ ✓
[107] 2017 dudect Dynamic Statistical tests ◐ CC ○ Binary ✓
[118] 2018 CANAL Static SE ◐ CO ○ LLVM ✓ ✓
[47] 2018 CacheFix Static SE ○ CO ◐ C ✓ ✓ ✓
[34] 2018 CoCo-Channel Static SE, tainting ◐ CR ◐ Java ✓
[19] 2018 SideTrail Static Logical verification ● CR ◐ LLVM ✓ ✓ ✓ ✓
[115] 2018 Shin et al. Dynamic Statistical tests ○ CO ● Binary ✓
[133] 2018 DATA Dynamic Statistical tests ◐ CT ○ Binary ✓ ✓ ✓
[134] 2018 MicroWalk Dynamic MIA ◐ CT ○ Binary ✓ ✓ ✓
[111] 2019 STAnalyzer Static Abstract interpretation ● CT ○ C ✓ ✓
[96] 2019 DifFuzz Dynamic Fuzzing ● CR ● Java ✓ ✓
[127] 2019 CacheS Static Abstract interpretation, SE ◐ CT ○ Binary ✓ ✓
[35] 2019 CaSym Static SE ● CO ○ LLVM ✓ ✓
[54] 2020 Pitchfork Static SE, tainting ◐ CT ● LLVM ✓ ✓ ✓
[67] 2020 ABSynthe Dynamic Genetic algorithm, RNN ● CR ◐ C source ✓ ✓
[73] 2020 ct-fuzz Dynamic Fuzzing ◐ CT ○ Binary ✓ ✓ ✓
[51] 2020 Binsec/Rel Static SE ◐ CT ○ Binary ✓ ✓ ✓
[20] 2021 Abacus Dynamic DSE ● CT ◐ Binary ✓ ✓ ✓
[75] 2022 CaType Dynamic Type system ● CO ◐ Binary ✓ ✓
[135] 2022 MicroWalk-CI Dynamic MIA ◐ CT ● Binary, JS ✓ ✓ ✓
[141] 2022 ENCIDER Static SE ● CT ○ LLVM ✓ ✓ ✓
[142] 2023 CacheQL Dynamic MIA, NN ● CT ◐ Binary ✓ ✓ ✓ ✓†
(D)SE: (Dynamic) Symbolic Execution, MIA: Mutual Information Analysis, (R)NN: (Recurrent) Neural Network, E: estimation of the number of bits leaked, L: origin of the leakage, W: witness triggering the vulnerability, B: support for blinding. † Not available at the time of writing.

overlap between the two classifications. Yet the present one goes into more detail, being enriched with additional criteria. Similarly to [59, 74], we detail the type of programs supported by the analysis (e.g., C, Java, binary code) and whether the approach is supported by a soundness claim. While analyzing the source code or LLVM is more practical as the program semantics is more easily extracted than in the binary, doing so poses the risk of missing vulnerabilities introduced by the compiler [52, 79, 116]. A full soundness claim (●) offers a formal guarantee that the analysis will not, in principle, accept insecure programs and so should not yield any false negatives. In practice, this may not be the case because of vulnerabilities outside the analysis's threat model, or because of bugs in the detection tool. Conversely, partial soundness claims (◐) cover tools that are sound under some restrictions (e.g., partial exploration of loops). In addition, our classification reports the policy enforced by these tools: (CT) constant-time proscribes secret-dependent control flow, memory accesses, and operands of variable-time instructions; (CO) cache-obliviousness requires the sequence of cache hits and misses w.r.t. a cache model to be independent from the secret input; (CR) constant-resource requires the execution cost (determined by a fixed instruction cost) to be independent from the secret input; (CC) constant clock cycles requires the number of clock cycles to be independent from the secret input. Note that CT [13, 22] is strictly more conservative than CO and CR, and is the only policy that is secure w.r.t. the attacker scope considered in this paper. We also detail the type of outputs given by the analysis: estimation of the number of bits leaked (E), origin of the leakage (L), and whether a witness triggering the vulnerability is given (W). A tool reporting none of these three only reports the presence or absence of vulnerabilities on the target. We additionally report whether the tool supports blinding [76] (B), a defense that introduces additional randomness to computations to hinder inference of secret values. Finally, we offer a rough estimation of the tool scalability.

Limitation. Unfortunately, quantifying scalability remains challenging due to the absence of a universally applicable metric. One option would be to report the number of instructions processed per second by a tool. However, not all publications provide this information and the concept of instruction can vary from one approach to another (LoC, IR or assembly instructions, counting unrolled loops or not, etc.). Additionally, such a metric is specific to a given set of benchmarks and hardware, whereas publications typically feature diverse benchmarks across various setups. Hence, we instead chose to provide a rough estimation of scalability, based on the claims made in the tools' publications. We differentiate tools able to analyze in a reasonable time: complex asymmetric primitives (●), symmetric primitives (◐), and those struggling to scale even for these (○).

4.1 Static analysis

4.1.2 Type systems. Approaches based on verifying type safety of a program differ from language-level countermeasures [12, 30], as the developer only needs to type the secret values with annotations instead of rewriting the program. The type system then propagates this throughout the program, similarly to static taint analysis. Type systems were considered relatively early to verify non-interference properties [7] and offer good scalability, but their imprecision makes them difficult to use in practice.

VirtualCert [22] analyzes a modified CompCert IR where each instruction makes its successors explicit. The authors define semantics for that representation, building the type system on top of it. An alias analysis giving a sound over-approximated prediction of targeted memory addresses is needed to handle pointer arithmetic. While this approach is more suited to a strict verification task, it can also provide a leakage estimate.

FlowTracker [108] introduces a novel algorithm to efficiently compute implicit information flows in a program, and uses it to apply a type system verifying constant-time.
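The annotate-once-and-propagate idea behind these type-system approaches can be sketched on a hypothetical three-address mini-IR. The names and opcodes below are invented for illustration; this is not FlowTracker's or VirtualCert's actual representation, and the sketch only tracks explicit flows:

```python
# Sketch of secret-label propagation over a hypothetical mini-IR.
# Instructions: ("assign", dst, src...), ("load", dst, index), ("branch", cond).

def check_ct(program, secret_inputs):
    tainted = set(secret_inputs)      # developer annotates the secrets once
    findings = []
    for op, head, *rest in program:
        if op == "assign":
            if any(src in tainted for src in rest):
                tainted.add(head)     # labels flow through assignments
        elif op == "load" and rest[0] in tainted:
            findings.append(("secret-dependent memory access", head))
        elif op == "branch" and head in tainted:
            findings.append(("secret-dependent branch", head))
    return findings

prog = [
    ("assign", "t", "key", "msg"),    # t := f(key, msg)  -> t becomes tainted
    ("assign", "i", "t"),             # i := t            -> i becomes tainted
    ("load",   "x", "i"),             # x := table[i]     -> leaky table lookup
    ("branch", "t"),                  # if t: ...         -> leaky branch
    ("load",   "y", "msg"),           # public index      -> fine
]
assert check_ct(prog, {"key"}) == [
    ("secret-dependent memory access", "x"),
    ("secret-dependent branch", "t"),
]
```

Unlike this toy version, the real systems must also handle implicit flows — taint carried by control dependence — which is precisely the part FlowTracker contributes an efficient algorithm for.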
Static analysis approaches attempt to derive security properties from the program without actually executing it, extracting formally defined guarantees on all possible executions through binary or source code analysis. As a formal exploration of every reachable state is unfeasible, program behavior is often approximated, making them prone to false positives. Static approaches were the first to be considered, as side-channel security is closely related to information flow policies [53].

4.1.1 Logical reduction. Non-interference is a 2-safety property stating that two executions with equivalent public inputs and potentially different secret inputs must result in equivalent public outputs. This definition covers side channels by considering resource usage (e.g., address trace) as a public output. Approaches based on logical reduction to 1-safety transform the program so that verifying its side-channel security amounts to proving the safety of the transformed program.

4.1.3 Abstract interpretation. As a program semantics is generally too complex to formally verify non-trivial properties, abstract interpretation [50] over-approximates its set of reachable states, so that if the approximation is safe, then the program is safe.

CacheAudit [55] performs a binary-level analysis, quantifying the amount of leakage depending on the cache policy by finding the size of the range of a side-channel function. This side-channel function is computed through abstract interpretation, and the size of its range determined using counting techniques. It was later extended to support dynamic memory and threat models allowing byte-level observations [56] and more x86 instructions [92].

Blazy et al. [28] focus on the source code instead of the binary. Their tool is integrated into the formally-verified Verasco static analyzer, and uses the CompCert compiler. The analysis is structured around a tainting semantics that propagates secret information throughout the program.
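The 2-safety property can be made concrete with an executable sketch under simplifying assumptions — here the observation is the sequence of operations performed, and the check is dynamic (run the program twice and compare traces) rather than the logical self-composition encoding used by the tools discussed below:

```python
# Toy 2-safety check: run a program twice with the same public input but
# different secrets; differing observable traces indicate interference.

def modexp_branchy(base, exp, mod, trace):
    """Square-and-multiply: the operation sequence follows the secret bits."""
    r = 1
    for bit in format(exp, "08b"):
        r = (r * r) % mod
        if bit == "1":                # secret-dependent branch
            r = (r * base) % mod
            trace.append("sqr+mul")
        else:
            trace.append("sqr")
    return r

def modexp_ladder(base, exp, mod, trace):
    """Montgomery-ladder style: both branches do the same amount of work."""
    r0, r1 = 1, base % mod
    for bit in format(exp, "08b"):
        if bit == "1":
            r0, r1 = (r0 * r1) % mod, (r1 * r1) % mod
        else:
            r1, r0 = (r0 * r1) % mod, (r0 * r0) % mod
        trace.append("mul+sqr")       # observation is independent of the bit
    return r0

def interferes(f, public_base, secret1, secret2, mod=13):
    t1, t2 = [], []
    f(public_base, secret1, mod, t1)
    f(public_base, secret2, mod, t2)
    return t1 != t2                   # non-interference demands equal traces

assert modexp_branchy(5, 11, 13, []) == pow(5, 11, 13)
assert modexp_ladder(5, 11, 13, []) == pow(5, 11, 13)
assert interferes(modexp_branchy, 5, 0b1010, 0b0110)      # trace leaks exponent bits
assert not interferes(modexp_ladder, 5, 0b1010, 0b0110)   # balanced trace
```

As the background section notes, a balanced ladder like this still leaks in real implementations through memory access patterns and port contention; passing this toy check is necessary, not sufficient.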
Self-composition[23]interleavestwoexecutionsofaprogram𝑃 STAnalyzer[111]usesdata-flowanalysistoreportsecret-dependent withdifferentsetsofsecretvariablesinasingleself-composedpro- gram𝑃;𝑃′.Solverscanthenbeusedtoverifythenon-interference branchesandmemoryaccesses. CacheS[127]usesanhybridapproachbetweenabstractinterpre- property.ThisapproachwasusedbyBacelarAlmeidaetal.[15] tationandsymbolicexecution.Theabstractdomainkeepstrackof tomanuallyverifylimitedexamples,relyingonalargeamountof programsecrets—withaprecisesymbolicrepresentationforvalues codeannotations.ct-verifinsteadrunsthetwocopiesinlockstep, inordertoconfirmleakage—butkeepsonlyacoarse-grainrep- whilecheckingtheirassertion-safety[13].ItisabletoverifyLLVM resentationofnon-secretvalues.Toimprovescalability,CacheS programs,leveragingtheboogieverifier.Sidetrail[19]reusesthis implementsalightweightbutunsoundmemorymodel. toverifythatsecretdependentbranchesarebalanced(assuming afixedinstructioncostandexcludingmemoryaccesspatterns), providingacounter-examplewhenthisverificationfails. 4.1.4 Symbolicexecution Symbolicexecution[82](SE)denotes However,suchapproachessufferfromanexplosioninthesizeof approachesthatverifypropertiesofaprogrambyexecutingitwith theprogramstatespace.Blazer[17]verifiestiming-channelsecurity symbolicinputsinsteadofconcreteones.Exploredexecutionpaths onJavaprogramsbyinsteaddecomposingtheexecutionspaceinto areassociatedwithalogicalformula:theconjunctionofcondi- a partition on secret-independent branches. Proving 2-safety is tionalsleadingtothatpath.Amemorymodelmapsencountered thusreducedtoverifying1-safetyoneachtraceinthepartition variablesontosymbolicexpressionsderivedfromthesymbolicin- improvingscalabilityatthecostofprecision.Themis[48]usesstatic putsandtheconcreteconstants.Asolveristhenusedtocheck taintanalysistoautomaticallyannotatesecret-dependentJavacode whetherasetofconcretevaluessatisfiesthegeneratedformulas. 
withHoarelogicformulasaspre-andpost-conditions.AnSMT RecentadvancesinSMTsolvershavemadesymbolicexecutiona solverthenverifiesthatthepost-conditionimpliesexecutiontime practicaltoolforprogramanalysis[42]. differencesremainboundedbygivenconstant.Bothtoolsprovide CoCo-Channel[34]identifiessecret-dependentconditionsus- awitnesstriggeringthevulnerabilityotherwise. ingtaint-analysis,constructssymboliccostexpressionsforeachCCS’23,November26–30,2023,Copenhagen,Denmark Geimeretal. pathoftheprogramusesSEandreportspathsthatexhibitsecret- MIscoresbetweeninputsetsandhashedtraces,withleakagelo- dependenttimingbehavior.Theircostmodelassignsafixedcost cationpinpointedusingfiner-grainedinstruction-levelMIscores. perinstruction,excludingsecret-dependentmemoryaccesses. MicroWalk-CI[135]optimizesthisprocessbytransformingthe Severalworksusesymbolicexecutiontoderiveasymboliccache tracesincalltrees,andaddssupportforJavaScriptandeasyinte- modelandcheckthatcachebehaviordoesnotdependonsecrets. grationinCI,followingrecommendationsfrom[74].CacheQL[142] CANAL[118]modelscachebehaviorsofprogramsdirectlyinthe reformulatesMIintoconditionalprobabilities,estimatedwithneu- LLVMintermediaterepresentationbyinsertingauxiliaryvariable ralnetworks.Leakagelocationisestimatedbyrecastingtheproblem
andinstructions.ItthenusesKLEE[41]toanalyzetheprogram intoacooperativegamesolvedusingShapleyvalues.Contraryto and check that the number of hits does not depend on secrets. othertools[20,134],CacheQLdoesnotassumeuniformdistribu- Similarly,CacheFix[47]usesSEtoderiveasymboliccachemodel tionofthesecret,nordeterministicexecutionstraces. supportingmultiplecachepolicies.Incaseofaviolation,CacheFix STACCO[137]targetscontrol-flowvulnerabilitiesspecificallyin cansynthesizeafixbyinjectingcachehits/missesintheprogram. TLSlibrariesrunningonSGX,focusingonoraclesattacks[11,29]. CaSym[35]followsthesamemethodologyand,toimprovescala- TracesrecordedunderdifferentTLSpacketsarerepresentedas bility,includessimplificationsofthesymbolicstateandlooptrans- sequencesofbasicblocksandcomparedusingadifftool. formations,whicharesoundbutmightintroducefalsepositives. Insteadofrecordingtraces,dudect[107]recordsoverallclock SEsuffersfromscalabilityissueswhenappliedto2-safetyprop- cyclesandcomparestheirdistributionwithsecretinputsdivided ertieslikeconstant-timeverification.Danieletal.[51]adaptits intwoclasses(fix-vs-random).Whilethisapproachissimpleand formalismtobinaryanalysis,introducingoptimizationstomaxi- lightweight,itgivescertaintythatanimplementationissecureup mizeinformationsharedbetweentwoexecutionsfollowingasame toanumberofmeasurements.Contrarytoothertoolsrelyingon path.TheirframeworkBinsec/Reloffersabinary-levelCTanalysis, anexplicitleakagemodel,dudectdirectlymonitorstimings.Hence, performingaboundedexplorationofreachablestatesandgiving vulnerabilitiestoothermicroarchitecturalattackslikeHertzbleed counterexamplesfortheidentifiedvulnerabilities. might(intheory)bedetectedbydudect. Pitchfork[54]combinesSEanddynamictainttracking.Itsoundly propagatessecrettaintsalongallexecutionspaths,reportingtainted Fuzzing.Fuzzingtechniquescanbeusedtofindinputsmaximiz- branchconditionsormemoryaddresses.Interestingly,Pitchfork ing coverage and side-channel leakage. 
DifFuzz [96] combines cananalyzeprotocol-levelcodebyabstractingawayprimitives’im- fuzzingwithself-compositiontofindside-channelsbasedonin- plementationsusingfunctionhooks,andanalyzingthemseparately. structioncount,memoryusageandresponsesizeinJavaprograms. ENCIDER[141]combinessymbolicexecutionwithtaintanalysis ct-fuzz[73]extendsthismethodtobinaryexecutablesandcache to reduce the number of solver calls. It also enables to specify leakage. information-flowfunctionsummariestoreducepathexplosion. 4.2.2 Singletrace Otherapproachesuseonlyonetracetoperform 4.2 Dynamicanalysis theanalysis,sacrificingcoverageforscalability.ctgrind[86]re- purposesthedynamictaintanalysisofValgrindtocheckCTby Dynamic analysis groups approaches that derive security guar- declaringsecretsasundefinedmemory.Thissolutioniseasyto anteesfromexecutiontracesofatargetprogram.Someformof deployandreusesfamiliartools,butremainsimprecise. dynamicbinaryinstrumentation(DBI)isoftenusedtoexecutethe ABSynthe[67]identifiessecret-dependentbranchesusingdy- programandgathereventsofinterest,suchasmemoryaccessesor namic taint analysis. It employs a genetic algorithm to build a jumps.Dynamicapproachesdifferintheeventscollected,andhow sequenceofinstructionsbasedoninterferencemapsevaluating tracesareprocessed.Theycanbegroupeddependingonwhether contentioncreatedbyeachx86instructions. theyreasononasingletrace,orcomparemultiplestracestogether. MorepreciseapproachesuseSEtoreplay thetracewiththe secretasasymbolicvalueandcheckforCTviolation.CacheD[128] 4.2.1 Tracecomparisonapproaches appliesthisapproachtomemoryaccesses.Abacus[20]extendsit Statisticaltests.Statisticaltestscanbeusedtocheckifdifferent to control-flow vulnerabilities, picking random values to check satisfiabilityinsteadofusingaSMTsolver.Italsoincludesleakage secretsinducestatisticallysignificantdifferencesinrecordedtraces. estimationthroughMonteCarlosimulation. 
CacheTemplate[71]monitorscacheactivitytodetectlinesassoci- Finally,CaType[75]usesrefinementtypes(i.e.,typescarrying atedwithatargetevent,thenfindslinescorrelatedwiththeevent a predicate restricting their possible values) on a trace to track usingasimilaritymeasure.Afirstpassusingpage-levelobserva- constantbitvaluesandimproveprecision.CaTypealsosupports tionsinsteadoflinescanbeusedtoimprovescalability[113].Shin implementationsthatuseblinding. etal.[115]useK-meansclusteringtoproducetwogroupsoftraces foreachline.Theconfidenceinthepartitionindicateswhichline 4.3 Insights islikelytobesecret-dependent.DATA[133]employsaKuipertest thenaRandomizedDependenceCoefficienttesttoinferlinearand Despitetherelativeyouthofthefield,awidevarietyofapproaches non-linearrelationshipsbetweentracesandsecrets.Thiswaslater havebeenproposed.Whileinitiallymorestaticapproacheswere extendedtosupportcryptographicnoncesassecrets[131]. proposed,dynamiconessoonfollowedafter2017.Thismightrepre- Mutualinformation(MI)canbeusedtoquantifytheinformation sentashiftinresearchcommunities,fromafocusonverificationto
sharedbetweensecretvaluesandrecordedtraces,withanon-zero bug-findingand,critically,scalability.Indeed,dynamicapproaches MIscoregivingaleakageestimation.MicroWalk[134]computes typicallyscalebetterthanstaticones.Yet,thisadvantagemainlyASystematicEvaluationofAutomatedToolsforSide-ChannelVulnerabilitiesDetectioninCryptographicLibraries CCS’23,November26–30,2023,Copenhagen,Denmark appliestosingle-traceanalysisapproaches.Fortracecomparison- beimpractical.Weiseretal.[132]provedthisassumptionfalseby basedapproaches,thescalabilitygainislessobviousasrecording exploitingsecret-dependentcontrol-flowintheBEEAusedinkey multipletracescanbetime-consuming,particularlyforstatistical generation.Keygenerationwasindependentlyinvestigatedthrough approachesrequiringalargenumberoftraces,orforsloweral- thetestmethodologyTriggerflow[10,69],findingadditionalissues gorithms(e.g.,RSA).Singletraceanalyseshoweversufferfroma stemmingfromtheCTflag. criticallackofcoverage,whichcouldbealleviatedthroughmeth- Keyparsing.SimilarvulnerabilitieswerefoundinOpenSSLand odslikefuzzing.SEhasbecomeapopularapproachforbothstatic MbedTLS’keyformathandling[61],askeyformatstandardsleave anddynamicmethods,asrecentadvancesinSMTsolversmakeit alotofflexibilitytoimplementations.Differencesinkeyformat practicalforside-channeldetection. werefoundtoinducedifferentexecutionpathsforasameoperation, Boththestaticanddynamiccommunitiescouldbenefitfrom somecallingvulnerablefunctions.ThiswasthecaseforRSAkey integrating approaches from one another. For example we find parsing/validation,andsignaturesforsomeellipticcurves.Asimilar thatAbacus’optimizationoftryingrandomvaluestosatisfySMT problemwasdiscoveredearlier[9]. formulaswouldpairwellwithBinsec/Rel’soptimizationssparing UNSATformulas. SRP.ThemissingCTflagsteersOpenSSLimplementationofSRP, apassword-authenticatedkeyexchangeprotocol,toaninsecure 5 Classifyingside-channelvulnerabilities variantofmodularexponentiationusingsquare-and-multiply[33]. 
Wegiveherea(non-exhaustive)overviewofmicroarchitectural PRG.Amongthestandarddesignsforpseudo-randomgenerators side-channelvulnerabilitieswhichweresubjecttopublicationsin (PRGs),CTR_DRBGgeneratesapseudo-randombitsequenceusing securityandcryptographyconferencesinthepastfiveyears,many theAEScipher.Cohneyetal.[49]investigatedimplementationsof ofwhichwerestillfoundmanuallybyresearchers.Interestingly, CTR_DRBGinmultiplelibraries,findingthattheT-Tablevariantof mostofthesevulnerabilitiesarenewmanifestationsofalready- AESwasuseddespiteitswell-knownvulnerabilities. knownvulnerabilities(Section5.1)andonlyfewofthemactually 5.1.2 Newlibraries Asthemostpopularcryptographiclibrary[95], targetnewprimitivesorfunctionalities(Section5.2). OpenSSLhasreceivedconsiderableattentionfromside-channel 5.1 Knownvulnerabilities researchers.However,mitigationsimplementedinOpenSSLare notnecessarilypropagatedtootherlibraries.MbedTLSRSAimple- Knownvulnerabilitiescanresurfacefortworeasons:whenknown- mentationusesthesliding-windowmethoddespiteitsvulnerability vulnerablefunctionsareusedinnewcontexts,orinnewlibraries. tocacheattacks[89,97].Despitetheimplementation’sattemptto Inthefirstcase,developerskeepvulnerablefunctionsinthecode- balancebranchesbycallingthesamefunctioninbothbranchesof baseforperformancereasons,carefullyavoidingusingthemwhen thesquare-and-multiply,Schwarzetal.[112]successfullyexploited manipulatingsecretdata.Thispracticeleavesthedooropentonew secret-dependentdataaccessusingacacheattackwithinanSGX vulnerabilitiesinwhichtheseknown-vulnerablefunctions(e.g., enclave.Asimilarattackwasalsoperformedontheso-calledleft- square-and-multiply)areusedinanewcontext(e.g.,keygener- to-rightvariantfromLibgcrypt,featuringexponentblinding[122]. 
ation).Inthesecondcase,thelackofdeveloperawarenessmay StillinSGX,thesecret-dependentbranchitselfremainsvulnerable preventside-channelmitigationtransferfromonelibrarytothe tobranchshadowing[87],ortoattacksbasedoninteractionsbe- other.Thisalsoincludeslibrarieschoosingtoonlypartiallymitigate tweeninterruptsandinstructionexecutiontime[102].Hassanetal. thevulnerability,despiteavailablesecurealternatives. [123]illustratethisissueintheNSSlibrary.WhileNSS’ECCcode 5.1.1 Newcontexts Featuringadecade-oldcode-base,OpenSSLis isforkedfromOpenSSL,mitigationssuchasnoncepadding[37] particularlysusceptibletothiskindofvulnerability,asside-channel arenotimplementedinNSS. protectioninonemodulemightnotbecorrectlyportedtoanother. Somelibrariesimplementpseudoconstant-timeinsteadoffull Inparticular,OpenSSLsetsaCTflagonBIGNUMsmarkingsecret constant-timetokeeptheircodebaseeasiertomaintain.Suchmiti- datasothattheycanbemanipulatedusingsecurefunctions.Recent gationsseektoaddresstheLucky13[11]attackbyaddingdummy publicationshaveshownthatsuchinsecure-by-defaultapproaches MACverificationorrandomdelays.Ronenetal.[110]demonstrate
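The logical reduction of Section 4.1.1 can be sketched in a few lines: self-composition turns the 2-safety non-interference property into a plain assertion over one (composed) program. The `password_check` function below is a hypothetical leaky comparison invented for illustration, not an example from the paper; recorded loop iterations stand in for an address or branch trace.

```python
# Hypothetical leaky comparison (NOT from the paper): early exit on the
# first mismatching byte, recording taken loop iterations as the
# observable trace (a stand-in for an address/branch trace).
def password_check(guess: bytes, secret: bytes, trace: list) -> bool:
    for i in range(min(len(guess), len(secret))):
        trace.append(i)                 # observable event
        if guess[i] != secret[i]:
            return False
    return len(guess) == len(secret)

def violates_noninterference(pub: bytes, sec1: bytes, sec2: bytes) -> bool:
    """Self-composition: run two copies with the same public input and
    different secrets, then check whether their observable traces agree."""
    t1, t2 = [], []
    password_check(pub, sec1, t1)
    password_check(pub, sec2, t2)
    return t1 != t2                     # differing traces => leakage

# Same public guess, two secrets differing in the first byte: the early
# exit makes the traces differ, so non-interference fails.
assert violates_noninterference(b"abc", b"abc", b"xbc")
```

A verifier such as ct-verif discharges this same check symbolically over all inputs rather than for one concrete pair of secrets.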
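Several of the surveyed tools (STAnalyzer, Pitchfork, ctgrind) rest on the same primitive: propagate a "secret" taint forward through the program and flag tainted branch conditions or memory addresses. A minimal sketch over a toy three-address code follows; the instruction encoding is invented for illustration and matches none of the tools' actual IRs.

```python
# Forward taint propagation over a straight-line toy IR.
# Instructions are (op, dst, srcs) tuples; "branch" and "load" are the
# leaking sinks (secret-dependent control flow / memory access).
def propagate_taint(instructions, initial_secrets):
    tainted = set(initial_secrets)
    findings = []
    for op, dst, srcs in instructions:
        if op == "assign":
            if set(srcs) & tainted:     # any tainted operand taints dst
                tainted.add(dst)
            else:
                tainted.discard(dst)    # overwritten with public data
        elif op in ("branch", "load"):
            if set(srcs) & tainted:     # secret reaches a sink: report
                findings.append((op, srcs))
    return findings

prog = [
    ("assign", "t0", ["key"]),      # t0 := key        (tainted)
    ("assign", "t1", ["t0", "x"]),  # t1 := t0 op x    (tainted)
    ("branch", None, ["t1"]),       # if t1: ...       -> CT violation
    ("assign", "t2", ["x"]),        # t2 := x          (untainted)
    ("load",   None, ["t2"]),       # mem[t2]          -> fine
]
assert propagate_taint(prog, {"key"}) == [("branch", ["t1"])]
```

Real analyses must additionally handle joins at control-flow merges, pointers, and implicit flows, which is where the tools differ.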
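The MI scoring used by MicroWalk-style analyses (Section 4.2.1) can be illustrated with a simple plug-in estimator: zero bits of mutual information between secrets and (hashed) traces is the desired outcome, and anything above zero is a leakage estimate. This is a toy sketch, not MicroWalk's actual implementation.

```python
import math
from collections import Counter

def mutual_information(secrets, traces):
    """Plug-in MI estimate (in bits) between secret values and trace IDs."""
    n = len(secrets)
    ps, pt = Counter(secrets), Counter(traces)
    joint = Counter(zip(secrets, traces))
    mi = 0.0
    for (s, t), c in joint.items():
        p = c / n
        mi += p * math.log2(p / ((ps[s] / n) * (pt[t] / n)))
    return mi

secrets = list(range(16))
leaky_traces = [s & 1 for s in secrets]   # trace hash depends on the low bit
ct_traces    = [0] * len(secrets)         # constant-time: a single trace
assert abs(mutual_information(secrets, leaky_traces) - 1.0) < 1e-9  # 1 bit leaks
assert abs(mutual_information(secrets, ct_traces)) < 1e-9           # no leakage
```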
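dudect's fix-vs-random methodology reduces to Welch's t-test on two classes of timing measurements; per the dudect paper, |t| above roughly 4.5 is treated as evidence of a leak. A sketch on simulated cycle counts (the simulated numbers are illustrative, not measurements from the paper):

```python
import math
import random

def welch_t(xs, ys):
    """Welch's t-statistic between two classes of timing samples."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    vx = sum((x - mx) ** 2 for x in xs) / (len(xs) - 1)
    vy = sum((y - my) ** 2 for y in ys) / (len(ys) - 1)
    return (mx - my) / math.sqrt(vx / len(xs) + vy / len(ys))

random.seed(1)
# Simulated cycle counts: the "fixed" input class runs slightly faster
# than the "random" class, as it would for a leaky implementation.
fixed = [1000 + random.gauss(0, 5) for _ in range(2000)]
rnd   = [1020 + random.gauss(0, 5) for _ in range(2000)]
assert abs(welch_t(fixed, rnd)) > 4.5   # flagged as a timing leak
assert welch_t(fixed, fixed) == 0.0     # identical classes: no signal
```

Because this works on wall-clock-style measurements rather than an explicit leakage model, it inherits dudect's property discussed above: any effect that shows up in timing can in principle be caught.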
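The square-and-multiply weakness recurring throughout Section 5.1 is easy to see in code: the multiply step executes only for the 1-bits of the secret exponent, so the branch (and its cache footprint) traces out the key. A minimal sketch:

```python
def modexp_square_and_multiply(base: int, exp: int, mod: int) -> int:
    """Left-to-right square-and-multiply. The `if` on each exponent bit
    is the secret-dependent branch exploited by the attacks above."""
    result = 1
    for i in reversed(range(exp.bit_length())):
        result = (result * result) % mod        # always executed
        if (exp >> i) & 1:                      # secret-dependent branch!
            result = (result * base) % mod      # executed only for 1-bits
    return result

assert modexp_square_and_multiply(7, 123, 1009) == pow(7, 123, 1009)
```

Calling the same function in both branches, as MbedTLS attempted, balances timing but not the data accesses, which is exactly what Schwarz et al. exploited.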
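The standard repair for such secret-dependent branches, and the kind of pattern constant-time verifiers accept, replaces the branch with masked arithmetic. A minimal branchless-select sketch (illustrative, not taken from any of the surveyed libraries):

```python
MASK32 = 0xFFFFFFFF

def ct_select(bit: int, a: int, b: int) -> int:
    """Branchless 32-bit select: returns a if bit == 1 else b, with no
    secret-dependent branch or memory access."""
    mask = (-bit) & MASK32                    # all-ones if bit == 1, else zero
    return (a & mask) | (b & ~mask & MASK32)  # pick a or b via masking

assert ct_select(1, 7, 9) == 7
assert ct_select(0, 7, 9) == 9
```

Both operands are always computed and combined, so the executed instructions and touched addresses are independent of the secret `bit`; in C this also depends on the compiler not reintroducing a branch.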