SkillScanner: Detecting Policy-Violating Voice Applications Through Static Analysis at the Development Phase. CCS '23, November 26-30, 2023, Copenhagen, Denmark. Song Liao, Long Cheng, Haipeng Cai, Linke Guo, and Hongxin Hu.

We conducted a comprehensive analysis of these skills using SkillScanner. SkillScanner effectively identified 1,328 violations among 786 skills. 694 of them are about privacy non-compliance and 298 of them violate content guidelines imposed by the Amazon Alexa platform. We found that 32% of policy violations are introduced through code copy and paste. The policy violations in Alexa's official accounts led to 81 policy violations in other skills. As our future work, we plan to conduct user studies to evaluate the usability (e.g., acceptance and user-friendliness) of SkillScanner by skill developers.

ACKNOWLEDGMENT

The work of L. Cheng is supported by the National Science Foundation (NSF) under Grant No. 2239605, 2228616 and 2114920. The work of H. Hu is supported by NSF under Grant No. 2228617, 2120369, 2129164, and 2114982. The work of L. Guo is supported by NSF under grants IIS-1949640, CNS-2008049, and CCF-2312616.

Table 9: Policy violations and code inconsistency in skill code and the detailed policies they violate. False positives have been removed by our manual verification.

Privacy Violations
  Data collection/storage but missing a privacy policy | Provide a legally adequate privacy notice that will be displayed to end users on your skill's detail page. | Output 240, Permission 94, Database 126, Disclosure to Alexa 4
  Data collection/storage but having an incomplete privacy policy | Ensure that your collection and use of that information complies with your privacy notice and all applicable laws. | Output 19, Permission 23, Database 10, Disclosure to Alexa 19
  Over-privileged data requests | Collect and use the data only if it is required to support and improve the features and services your skill provides. | Permission 42
  Not-asked permission usage (Bug) | Permission 18
  Incorrect data collection disclosure to Alexa | Wrong answer to "Does this Alexa skill collect users' personal information?" | Disclosure to Alexa 99

Violations of Content Guidelines
  Content safety | Contains excessive profanity. | Output 1
  Asking for positive rating | Cannot explicitly request that users leave a positive rating of the skill. | Description 1
  Invocation name requirements | Does not adhere to Amazon Invocation Name Requirements. Please consult these requirements to ensure your skill is compliant with our invocation name policies. | Invocation name 281
  Kid category policy | It doesn't collect any personal information from end users. | Output 1
  Health category policy | A skill that provides health-related information, news, facts or tips must include a disclaimer in the skill description stating that the skill is not a substitute for professional medical advice. | Description 13

Code Inconsistency
  Data collection request but missing a slot (Bug) | Output 124
  Data collection slot but missing a request (Vulnerability) | Intent 147
  Data collection slot but missing an utterance (Bug) | Slot 24
  Intent but missing an utterance (Bug) | Intent 40
arXiv:2309.08474

VulnSense: Efficient Vulnerability Detection in Ethereum Smart Contracts by Multimodal Learning with Graph Neural Network and Language Model

Phan The Duy a,b, Nghi Hoang Khoa a,b, Nguyen Huu Quyen a,b, Le Cong Trinh a,b, Vu Trung Kien a,b, Trinh Minh Hoang a,b, Van-Hau Pham a,b
a Information Security Laboratory, University of Information Technology, Ho Chi Minh City, Vietnam
b Vietnam National University Ho Chi Minh City, Ho Chi Minh City, Vietnam

Abstract

This paper presents the VulnSense framework, a comprehensive approach to efficiently detect vulnerabilities in Ethereum smart contracts using multimodal learning over graph-based and natural language processing (NLP) models. Our proposed framework combines three types of features from smart contracts, comprising source code, opcode sequences, and the control flow graph (CFG) extracted from bytecode. We employ Bidirectional Encoder Representations from Transformers (BERT), Bidirectional Long Short-Term Memory (BiLSTM) and Graph Neural Network (GNN) models to extract and analyze these features. The final layer of our multimodal approach consists of a fully connected layer used to predict vulnerabilities in Ethereum smart contracts. Addressing the limitations of existing vulnerability detection methods that rely on single-feature or single-model deep learning techniques, our method surpasses their accuracy and effectiveness constraints. We assess VulnSense using a collection of 1,769 smart contracts derived from the combination of three datasets: Curated, SolidiFI-Benchmark, and Smartbugs Wild. We then make a comparison with various unimodal and multimodal learning techniques built from GNN, BiLSTM and BERT architectures. The experimental outcomes demonstrate the superior performance of our proposed approach, achieving an average accuracy of 77.96% across all three categories of vulnerable smart contracts.

Keywords: Vulnerability Detection, Smart Contract, Deep Learning, Graph Neural Networks, Multimodal

1. Introduction

The Blockchain keyword has become increasingly popular in the era of Industry 4.0, with many applications for a variety of purposes, both good and bad. For instance, in the field of finance, Blockchain is utilized to create new, faster, and more secure payment systems, examples of which include Bitcoin and Ethereum. However, Blockchain can also be exploited for money laundering, as it enables anonymous money transfers, as exemplified by cases like Silk Road [1]. The number of keywords associated with blockchain is growing rapidly, reflecting the increasing interest in this technology. A typical example is the smart contract deployed on Ethereum. Smart contracts are programmed in Solidity, which is a new language that has been developed in recent years. When deployed on a blockchain system, smart contracts often execute transactions related to cryptocurrency, specifically the ether (ETH) token. However, smart contracts still have many vulnerabilities, which have been pointed out by Zou et al. [2]. Alongside the immutable and transparent properties of Blockchain, the presence of vulnerabilities in smart contracts deployed within the Blockchain ecosystem enables attackers to exploit flawed smart contracts, thereby affecting the assets of individuals and organizations, as well as the stability of the Blockchain ecosystem.

In more detail, the DAO attack [3] presented by Mehar et al. is a clear example of the severity of these vulnerabilities, as it resulted in significant losses of up to $50 million. To address such issues, Kushwaha et al. [4] conducted a research survey on the different types of vulnerabilities in smart contracts and provided an overview of the existing tools for detecting and analyzing these vulnerabilities. Developers have created a number of tools to detect vulnerabilities in smart contract source code, such as Oyente [5], Slither [4], Conkas [6], Mythril [7], Securify [8], etc. These tools use static and dynamic analysis to seek vulnerabilities, but they may not cover all execution paths, leading to false negatives. Additionally, exploring all execution paths in complex smart contracts can be time-consuming. Current endeavors in contract security analysis heavily depend on pre-defined rules established by specialists, a process that demands significant labor and lacks scalability.

Meanwhile, the emergence of Machine Learning (ML) methods in the detection of vulnerabilities in software has also been explored. This is also applicable to smart contracts, where numerous tools and research are developed to identify security bugs, such as ESCORT by Lutz [9], ContractWard by Wang [10] and the work of Qian [11]. The ML-based methods have significantly improved performance over static and dynamic analysis methods, as indicated in the study by Jiang [12].

Email addresses: duypt@uit.edu.vn (Phan The Duy), khoanh@uit.edu.vn (Nghi Hoang Khoa), quyennh@uit.edu.vn (Nguyen Huu Quyen), 19522404@gm.uit.edu.vn (Le Cong Trinh), 19521722@gm.uit.edu.vn (Vu Trung Kien), 19521548@gm.uit.edu.vn (Trinh Minh Hoang), haupv@uit.edu.vn (Van-Hau Pham)

Preprint submitted to Knowledge-Based Systems, September 18, 2023

...code reveals low-level execution details, and opcode sequences capture the execution flow. By fusing these features, the model can extract a richer set of features, potentially leading to more

However, the current studies do exhibit certain limitations, primarily centered around the utilization of only a singular type of feature from the smart contract as the input for ML mod-
els. To elaborate, it is noteworthy that a smart contract’s rep- accuratedetectionofvulnerabilities. Themaincontributionsof resentationandsubsequentanalysiscanbeapproachedthrough thispaperaresummarizedasfollows: itssourcecode,employingtechniquessuchasNLP,asdemon- • First, we propose a multimodal learning approach con- stratedinthestudyconductedbyKhodadadietal. [13]. Con- sistingofBERT,BiLSTMandGNNtoanalyzethesmart versely, an alternative approach, as showcased by Chen et al. contractundermulti-viewstrategybyleveragingtheca- [14], involves the usage of the runtime bytecode of a smart pability of NLP algorithms, corresponding to three type contract published on the Ethereum blockchain. Additionally, of features, including source code, opcodes, and CFG Wangandcolleagues[10]addressedvulnerabilitydetectionus- generatedfrombytecode. ingopcodesextractedthroughtheemploymentoftheSolctool [15] (Solidity compiler), based on either the contract’s source • Then,weextractandleveragethreetypesoffeaturesfrom codeorbytecode. smartcontractstomakeacomprehensivefeaturefusion. Inpracticalterms,thesemethodologiesfallunderthecate- Morespecifics,oursmartcontractrepresentationswhich gorizationofunimodalormonomodalmodels,designedtoex- are created from real-world smart contract datasets, in- clusively handle one distinct type of data feature. Extensively cludingSmartbugsCurated,SolidiFI-BenchmarkandSmart- investigatedandprovenbeneficialindomainssuchascomputer bugsWild, canhelpmodeltocapturesemanticrelation- vision,naturallanguageprocessing,andnetworksecurity,these shipsofcharacteristicsinthephaseofanalysis. unimodalmodelsdoexhibitimpressiveperformancecharacter- istics. However, their inherent drawback lies in their limited • Finally,weevaluatetheperformanceofVulnSenseframe- perspective, resulting from their exclusive focus on singular workonthereal-worldvulnerablesmartcontractstoin- dataattributes,whichlackthepotentialcharacteristicsformore dicatethecapabilityofdetectingsecuritydefectssuchas in-depthanalysis. Reentrancy,Arithmeticonsmartcontracts. Additionally, Thislimitationhaspromptedtheemergenceofmultimodal wealsocompareourframeworkwithaunimodalmodels models,whichofferamorecomprehensiveoutlookondataob- and other multimodal ones to prove the superior effec- jects. The works of Jabeen and colleagues [16], Tadas et al. tivenessofVulnSense. [17],Nametal.[18],andXu[19]underscorethistrend.Specif- The remaining sections of this article are constructed as fol- ically,multimodallearningharnessesdistinctMLmodels,each lows. Section 3 introduces some related works in adversar- accommodating diverse input types extracted from an object. ial attacks and countermeasures. The section 2 gives a brief Thisapproachfacilitatestheacquisitionofholisticandintricate backgroundofappliedcomponents. Next,thethreatmodeland representationsoftheobject,aconcertedefforttosurmountthe methodologyarediscussedinsection4.Section5describesthe limitations posed by unimodal models. By leveraging multi- experimental settings and scenarios with the result analysis of ple input sources, multimodal models endeavor to enrich the ourwork. Finally,weconcludethepaperinsection6. understanding of the analyzed data objects, resulting in more comprehensiveandaccurateoutcomes. Recently,multimodalvulnerabilitydetectionmodelsforsmart 2. Background contractshaveemergedasanewresearcharea,combiningdif- 2.1. 
BytecodeofSmartContracts ferenttechniquestoprocessdiversedata,includingsourcecode, Bytecode is a sequence of hexadecimal machine instruc- bytecode and opcodes, to enhance the accuracy and reliability tions generated from high-level programming languages such of AI systems. Numerous studies have demonstrated the ef- as C/C++, Python, and similarly, Solidity. In the context of fectiveness of using multimodal deep learning models to de- deploying smart contracts using Solidity, bytecode serves as tect vulnerabilities in smart contracts. For instance, Yang et the compiled version of the smart contract’s source code and al. [20]proposedamultimodalAImodelthatcombinessource isexecutedontheblockchainenvironment. Bytecodeencapsu- code, bytecode, and execution traces to detect vulnerabilities latestheactionsthatasmartcontractcanperform. Itcontains in smart contracts with high accuracy. Chen et al. [13] pro- statementsandnecessaryinformationtoexecutethecontract’s posed a new hybrid multimodal model called HyMo Frame- functionalities. Bytecode is commonly derived from Solidity work, which combines static and dynamic analysis techniques orotherlanguagesusedinsmartcontractdevelopment. When to detect vulnerabilities in smart contracts. Their framework deployedonEthereum,bytecodeiscategorizedintotwotypes: usesmultiplemethodsandoutperformsothermethodsonsev- creationbytecodeandruntimebytecode. eraltestdatasets. Recognizingthatthesefeaturesaccuratelyreflectsmartcon- 1. Creation Bytecode: The creation bytecode runs only tracts and the potential of multimodal learning, we employ a once during the deployment of the smart contract onto multimodalapproachtobuildavulnerabilitydetectiontoolfor thesystem. Itisresponsibleforinitializingthecontract’s smart contracts called VulnSense. Different features can pro- initialstate,includinginitializingvariablesandconstruc- videuniqueinsightsintovulnerabilitiesonsmartcontracts.Source tor functions. Creation bytecode does not reside within |
code offers a high-level understanding of contract logic, byte- thedeployedsmartcontractontheblockchainnetwork. 22. RuntimeBytecode:Runtimebytecodecontainsexecutable Furthermore,CFGsupportstheoptimizationofSoliditysource informationaboutthesmartcontractandisdeployedonto code. By analyzing and understanding the control flow struc- theblockchainnetwork. ture, we can propose performance and correctness improve- ments for the Solidity program. This is particularly crucial in Onceasmartcontracthasbeencompiledintobytecode,itcan thedevelopmentofsmartcontractsontheEthereumplatform, bedeployedontotheblockchainandexecutedbynodeswithin whereperformanceandsecurityplayessentialroles. thenetwork. Nodesexecutethebytecode’sstatementstodeter- Inconclusion,CFGisapowerfulrepresentationthatallows minethebehaviorandinteractionsofthesmartcontract. ustoanalyze,understand,andoptimizethecontrolflowinSo- Bytecodeishighlydeterministicandremainsimmutableaf- lidityprograms. Byconstructingcontrolflowgraphsandana- tercompilation. Itprovidesparticipantsintheblockchainnet- lyzingthecontrolflowstructure,wecanidentifyerrors,verify work the ability to inspect and verify smart contracts before correctness,andoptimizeSoliditysourcecodetoensureperfor- deployment. manceandsecurity. Insummary,bytecodeservesasabridgebetweenhigh-level programming languages and the blockchain environment, en- abling smart contracts to be deployed and executed. Its deter- 3. Relatedwork ministic nature and pre-deployment verifiability contribute to This section will review existing works on smart contract thesecurityandreliabilityofsmartcontractimplementations. vulnerabilitydetection,includingconventionalmethods,single learningmodelandmultimodallearningapproaches. 2.2. OpcodeofSmartContracts Opcodeinsmartcontractsreferstotheexecutablemachine 3.1. Staticanddynamicmethod instructions used in a blockchain environment to perform the There are many efforts in vulnerability detection in smart functionsofthesmartcontract. Opcodesarelow-levelmachine contractsthroughbothstaticanddynamicanalysis.Thesetech- commandsusedtocontroltheexecutionprocessofthecontract niques are essential for scrutinizing both the source code and onablockchainvirtualmachine,suchastheEthereumVirtual theexecutionprocessofsmartcontractstouncoversyntaxand Machine(EVM). logic errors, including assessments of input variable validity Eachopcoderepresentsaspecifictaskwithinthesmartcon- and string length constraints. Dynamic analysis evaluates the tract,includinglogicaloperations,arithmeticcalculations,mem- controlflowduringsmartcontractexecution,aimingtounearth orymanagement,dataaccess,callingandinteractingwithother potential security flaws. In contrast, static analysis employs contracts in the Blockchain network, and various other tasks. approaches such as symbolic execution and tainting analysis. Opcodes define the actions that a smart contract can perform Taintanalysis,specifically,identifiesinstancesofinjectionvul- and specify how the contract’s data and state are processed. nerabilitieswithinthesourcecode. Theseopcodesarelistedanddefinedinthebytecoderepresen- Recentresearchstudieshaveprioritizedcontrolflowanaly- tationofthesmartcontract. sisastheprimaryapproachforsmartcontractvulnerabilityde- Theuseofopcodesprovidesflexibilityandstandardization tection. Notably,Kushwahaetal. 
[21]havecompiledanarray inimplementingthefunctionalitiesofsmartcontracts.Opcodes of tools that harness both static analysis techniques—such as ensureconsistencyandsecurityduringtheexecutionofthecon- thoseinvolvingsourcecodeandbytecode—anddynamicanal- tractontheblockchain,andplayasignificantroleindetermin- ysis techniques via control flow scrutiny during contract exe- ingthebehaviorandlogicofthesmartcontract. cution. AprominentexampleofstaticanalysisisOyente[22], a tool dedicated to smart contract examination. Oyente em- 2.3. ControlFlowGraph ploys control flow analysis and static checks to detect vulner- CFG is a powerful data structure in the analysis of Solid- abilitieslikeReentrancyattacks,faultytokenissuance,integer ity source code, used to understand and optimize the control overflows, and authentication errors. Similarly, Slither [23], a flowofaprogramextractedfromthebytecodeofasmartcon- dynamicanalysistool,utilizescontrolflowanalysisduringexe- tract. The CFG helps determine the structure and interactions cutiontopinpointsecurityvulnerabilities,encompassingReen- between code blocks in the program, providing crucial infor- trancy attacks, Token Issuance Bugs, Integer Overflows, and mation about how the program executes and links elements in Authentication Errors. It also adeptly identifies concerns like thecontrolflow. TransactionOrderDependence(TOD)andTimeDependence. Specifically, CFG identifies jump points and conditions in Beyond static and dynamic analysis, another approach in- Soliditybytecodetoconstructacontrolflowgraph. Thisgraph volvesfuzzytesting. Inthistechnique,inputstringsaregener- describesthebasicblocksandcontrolbranchesintheprogram, ated randomly or algorithmically to feed into smart contracts, therebycreatingaclearunderstandingofthestructureoftheSo- and their outcomes are verified for anomalies. Both Contract lidity program. With CFG, we can identify potential issues in Fuzzer[24]andxFuzz[25]pioneertheuseoffuzzingforsmart |
theprogramsuchasinfiniteloops, incorrectconditions, orse- contractvulnerabilitydetection. ContractFuzzeremployscon- curityvulnerabilities.ByexaminingcontrolflowpathsinCFG, colic testing, a hybrid of dynamic and static analysis, to gen- we can detect logic errors or potential unwanted situations in erate test cases. Meanwhile, xFuzz leverages a genetic algo- theSolidityprogram. rithmtodeviserandomtestcases,subsequentlyapplyingthem tosmartcontractsforvulnerabilityassessment. 3Moreover,symbolicexecutionstandsasanadditionalmethod And most recently, Jie Wanqing and colleagues have pub- for in-depth analysis. By executing control flow paths, sym- lished a study [32] utilizing four attributes of smart contracts bolicexecutionallowsthegenerationofgeneralizedinputval- (SC), including source code, Static Single Assigment (SSA), ues,addressingchallengesassociatedwithrandomnessinfuzzing CFG, and bytecode. With these four attributes, they construct approaches. Thisapproachholdspotentialforovercominglim- three layers: SC, BB, and EVMB. Among these, the SC layer itationsandintricaciestiedtothecreationofarbitraryinputval- employs source code for attribute extraction using Word2Vec ues. and BERT, the BB layer uses SSA and CFG generated from However, the aforementioned methods often have low ac- the source code, and finally, the EVMB layer employs assem- curacyandarenotflexiblebetweenvulnerabilitiesastheyrely bly code and CFG derived from bytecode. Additionally, the on expert knowledge, fixed patterns, and are time-consuming authorscombinetheseclassesthroughvariousmethodsandun- and costly to implement. They also have limitations such as dergoseveraldistinctsteps. onlydetectingpre-definedfixedvulnerabilitiesandlackingthe These models yield promising results in terms of Accu- abilitytodetectnewvulnerabilities. racy, with HyMo [30] achieving approximately 0.79%, HY- DRA[31]surpassingitwitharound0.98%andthemultimodal 3.2. MachineLearningmethod AI of Jie et al. [32]. achieved high-performance results rang- ML methods often use features extracted from smart con- ingfrom0.94%to0.99%acrossvarioustestcases. Withthese tractsandemploysupervisedlearningmodelstodetectvulnera- achievedresults,thesestudieshavedemonstratedthepowerof bilities. Recentresearchhasindicatedthatresearchgroupspri- multimodal models compared to unimodal models in classify- marily rely on supervised learning. The common approaches ing objects with multiple attributes. However, the limitations usually utilize feature extraction methods to obtain CFG and within the scope of the paper are more constrained by imple- Abstract Syntax Tree (AST) through dynamic and static anal- mentation than design choices. They utilized word2vec that ysis tools on source code or bytecode. Th ese studies [26, lackssupportforout-of-vocabularywords.Toaddressthiscon- 27] used a sequential model of Graph Neural Network to pro- straint, they proposed substituting word2vec with the fastText cess opcodes and employed LSTM to handle the source code. NLPmodel. Subsequently,theirvulnerabilitydetectionframe- Besides, a team led by Nguyen Hoang has developed Mando work was modeled as a binary classification problem within a Guru[28],aGNNmodeltodetectvulnerabilitiesinsmartcon- supervisedlearningparadigm.Inthiswork,theirprimaryfocus tracts.TheirteamappliedadditionalmethodssuchasHeteroge- wasondeterminingwhetheracontractcontainsavulnerability neous Graph Neural Network, Coarse-Grained Detection, and ornot.Asubsequenttaskcoulddelveintoinvestigatingspecific Fine-GrainedDetection. 
Theyleveragedthecontrolflowgraph vulnerabilitytypesthroughdesiredmulti-classclassification. (CFG) and call graph (CG) of the smart contract to detect 7 Fromtheevaluationspresentedinthissection,wehaveiden- vulnerabilities. Theirapproachiscapableofdetectingmultiple tified the strengths and limitations of existing literature. It is vulnerabilities in a single smart contract. The results are rep- evident that previous works have not fully optimized the uti- resentedasnodesandpathsinthegraph. Additionally, Zhang lizationofSmartContractdataandlacktheincorporationofa Lejun et al. [29] also utilized ensemble learning to develop a diverse range of deep learning models. While unimodal ap- 7-layerconvolutionalmodelthatcombinedvariousneuralnet- proaches have not adequately explored data diversity, multi- workmodelssuchasCNN,RNN,RCN,DNN,GRU,Bi-GRU, modaloneshavetraded-offconstructiontimeforclassification and Transformer. Each model was assigned a different role in focus, solely determining whether a smart contract is vulnera- eachlayerofthemodel. bleornot. Inlightoftheseinsights,weproposeanovelframeworkthat 3.3. MultimodalLearning leverages the advantages of three distinct deep learning mod- elsincludingBERT,GNN,andBiLSTM.Eachmodelformsa The HyMo Framework [30], introduced by Chen et al. in separatebranch,contributingtothecreationofaunifiedarchi- 2020, presented a multimodal deep learning model used for tecture. Our approach adopts a multi-class classification task, smartcontractvulnerabilitydetectionillustratesthecomponents aiming to collectively improve the effectiveness and diversity oftheHyMoFramework.Thisframeworkutilizestwoattributes of vulnerability detection. By synergistically integrating these of smart contracts, including source code and opcodes. After |
models, we strive to overcome the limitations of the existing preprocessing these attributes, the HyMo framework employs literatureandprovideamorecomprehensivesolution. FastTextforwordembeddingandutilizestwoBi-GRUmodels toextractfeaturesfromthesetwoattributes. Anotherframework, theHYDRAframework, proposedby 4. Methodology Chen and colleagues [31], utilizes three attributes, including API, bytecode, and opcode as input for three branches in the Thissectionprovidestheoutlineofourproposedapproach multimodalmodeltoclassifymalicioussoftware. Eachbranch forvulnerabilitydetectioninsmartcontracts. Additionally,by processes the attributes using basic neural networks, and then employingmultimodallearning, wegenerateacomprehensive theoutputsofthesebranchesareconnectedthroughfullycon- viewofthesmartcontract,whichallowsustorepresentofsmart nected layers and finally passed through the Softmax function contractwithmorerelevantfeaturesandboosttheeffectiveness toobtainthefinalresult. ofthevulnerabilitydetectionmodel. 4Figure1:TheoverviewofVulnSenseframework. 4.1. Anoverviewofarchitecture functionality, wedesignedaBERTnetworkwhichisabranch Our proposed approach, VulnSense, is constructed upon a inourMultimodal. multimodaldeeplearningframeworkconsistingofthreebranches, As in Figure 2, BERT model consists of 3 blocks: Pre- including BERT, BiLSTM, and GNN, as illustrated in Figure processor,EncoderandNeuralnetwork. Morespecifically,the 1. Morespecifically,thefirstbranchistheBERTmodel,which Preprocessor processes the inputs, which is the source code isbuiltupontheTransformerarchitectureandemployedtopro- of smart contracts. The inputs are transformed into vectors cessthesourcecodeofthesmartcontract. Secondly,tohandle through the input embedding layer, and then pass through the andanalyzetheopcodecontext,theBiLSTMmodelisapplied positional encoding layer to add positional information to the in the second branch. Lastly, the GNN model is utilized for words. Then, the preprocessed valuesare fedinto theencod- representingtheCFGofbytecodeinthesmartcontract. ingblocktocomputerelationshipsbetweenwords. Theentire Thisintegrativemethodologyleveragesthestrengthsofeach encodingblockconsistsof12identicalencodinglayersstacked component to comprehensively assess potential vulnerabilities ontopofeachother. Eachencodinglayercomprisestwomain withinsmartcontracts. Thefusionoflinguistic,sequential,and parts: aself-attentionlayerandafeed-forwardneuralnetwork. structuralinformationallowsforamorethoroughandinsight- Theoutputencodedformsavectorspaceoflength768.Subse- ful evaluation, thereby fortifying the security assessment pro- quently,theencodedvaluesarepassedthroughasimpleneural cess.Thisapproachpresentsarobustfoundationforidentifying network. The resulting bert output values constitute the out- vulnerabilitiesinsmartcontractsandholdspromiseforsignifi- put of this branch in the multimodal model. Thus, the whole cantlyreducingrisksinblockchainecosystems. BERTcomponentcouldbedemonstratedasfollows: preprocessed= positional encoding(e(input)) (1) 4.2. BidirectionalEncoderRepresentationsfromTransformers (BERT) encoded= Encoder(preprocessed) (2) bert output= NN(encoded) (3) where, (1), (2) and (3) represent the Preprocessor block, En- coderblockandNeuralNetworkblock,respectively. 4.3. Bidirectionallong-shorttermmemory(BiLSTM) Toward the opcode, we applied the BiLSTM which is an- other branch of our Multimodal approach to analysis the con- textual relation of opcodes and contribute crucial insights into thecode’sexecutionflow. 
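Before turning to the opcode branch in detail, the BERT branch of Section 4.2 (equations (1)-(3)) can be sketched as below. This is a minimal illustration assuming a Hugging Face bert-base encoder (12 stacked encoder layers and a 768-dimensional output, matching the description above); the tokenizer settings and the width of the small trailing dense layer are illustrative assumptions, since the text does not fix them.

```python
import tensorflow as tf
from transformers import BertTokenizerFast, TFBertModel

# Preprocessor + Encoder (equations (1)-(2)): a bert-base encoder with 12
# encoder layers and a 768-dimensional output, as described in Section 4.2.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encoder = TFBertModel.from_pretrained("bert-base-uncased")
# (3) A simple trailing neural network; the 66-unit width is an assumption.
head = tf.keras.layers.Dense(66, activation="relu")

def bert_branch(source_code: str) -> tf.Tensor:
    # (1) Tokenize the (comment-stripped) Solidity source; positional
    # information is added inside BERT's embedding layer.
    inputs = tokenizer(source_code, truncation=True, max_length=512,
                       return_tensors="tf")
    # (2) 768-dimensional encoded representation of the contract source.
    encoded = encoder(**inputs).pooler_output
    # (3) bert_output, later concatenated with the other two branches.
    return head(encoded)
```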
ByprocessingOpcodesequentially, Figure2:ThearchitectureofBERTcomponentinVulnSense weaimedtocapturepotentialvulnerabilitiesthatmightbeover- lookedbysolelyconsideringstructuralinformation. In this study, to capture high-level semantic features from Indetail,asinFigure3,wefirsttokenizetheopcodesand thesourcecodeandenableamorein-depthunderstandingofits convertthemintointegervalues.Theopcodefeaturestokenized 5Inthisbranch,wefirstlyextracttheCFGfromthebytecode, andthenuseOpenAI’sembeddingAPItoencodethenodesand edgesoftheCFGintovectors,asin(7). encode= Encoder(edges,nodes) (7) Theencodedvectorshavealengthof1536. Thesevectors are then passed through 3 GCN layers with ReLU activation functions(8),withthefirstlayerhavinganinputlengthof1536 Figure3:ThearchitectureofBiLSTMcomponentinVulnSense. andanoutputlengthofacustomhidden channels(hc)variable. GCN1=GCNConv(1536,relu)(encode) are embedded into a dense vector space using an embedding GCN2=GCNConv(hc,relu)(GCN1) (8) layerwhichhas200dimensions. GCN3=GCNConv(hc)(GCN2) token=Tokenize(opcode) (4) Finally, to feed into the multimodal deep learning model, vector space= Embedding(token) the output of the GCN layers isfed into 2 dense layers with 3 and64unitsrespecitvely,asdescribedin(9). Then,theopcodevectorisfedintotwoBiLSTMlayerswith 128and64unitsrespectively. Moreover,toreduceoverfitting, d1 gnn= Dense(3,relu)(GCN3) (9) theDropoutlayerisappliedafterthefirstBiLSTMlayerasin gnn output= Dense(64,relu)(d1 gnn) (5). 4.5. Multimodal bi lstm1= Bi LSTM(128)(vector space) Each of these branches contributes a unique dimension of |
r= Dropout(dense(bi lstm1)) (5) analysis, allowing us to capture intricate patterns and nuances bi lstm2= Bi LSTM(128)(r) presentinthesmartcontractdata.Therefore,weadoptaninno- vativeapproachbysynergisticallyconcatinatingtheoutputsof Finally,theoutputofthelastBiLSTMlayeristhenfedinto threedistinctivemodelsincludingBERTbert output(3),BiL- adenselayerwith64unitsandReLUactivationfunctionasin STMlstm output(6),andGNNgnn output(9)toenhancethe (6). accuracyanddepthofourpredictivemodel,asshownin(10): c=Concatenate([bert output, lstm output= Dense(64,relu)(bi lstm2) (6) (10) lstm output,gnn output]) 4.4. GraphNeuralNetwork(GNN) Then the output c is transformed into a 3D tensor with di- Toofferinsightsintothestructuralcharacteristicsofsmart mensions(batch size,194,1)usingtheReshapelayer(11): contracts based on bytecode, we present a CFG-based GNN c reshaped=Reshape((194,1))(c) (11) model which is the third branch of our multimodal model, as Next,thetransformedtensorc reshapedispassedthrough showninFigure4. a 1D convolutional layer (12) with 64 filters and a kernel size of3,utilizingtherectifiedlinearactivationfunction: conv out=Conv1D(64,3,relu)(c reshaped) (12) The output from the convolutional layer is then flattened (13)togeneratea1Dvector: f out=Flatten()(conv out) (13) Theflattenedtensorf outissubsequentlypassedthrougha fully connected layer with length 32 and an adjusted rectified linearactivationfunctionasin(14): d out=Dense(32,relu)(f out) (14) Finally,theoutputispassedthroughthesoftmaxactivation function (15) to generate a probability distribution across the threeoutputclasses: (cid:101)y=Dense(3,softmax)(d out) (15) Figure4:ThearchitectureofGNNcomponentinVulnSense Thisarchitectureformsthefinalstagesofourmodel,culmi- natinginthegenerationofpredictedprobabilitiesforthethree outputclasses. 65. ExperimentsandAnalysis 5.3.1. SourceCodeSmartContract Whenprogramming,developersoftenhavethehabitofwrit- 5.1. ExperimentalSettingsandImplementation ing comments to explain their source code, aiding both them- In this work, we utilize a virtual machine (VM) of Intel selvesandotherprogrammersinunderstandingthecodesnip- Xeon(R)CPUE5-2660v4@2.00GHzx24,128GBofRAM, pets. BERT, a natural language processing model, takes the and Ubuntu 20.04 version for our implementation. Further- source code of smart contracts as its input. From the source more,allexperimentsareevaluatedunderthesameexperimen- codeofsmartcontracts,BERTcalculatestherelevanceofwords talconditions.TheproposedmodelisimplementedusingPython within the code. Comments present within the code can in- programming language and utilized well-established libraries troduce noise to the BERT model, causing it to compute un- suchasTensorFlow,Keras. necessary information about the smart contract’s source code. Foralltheexperiments,wehaveutilizedthefine-tunestrat- Hence,preprocessingofthesourcecodebeforefeedingitinto egy to improve the performance of these models during the theBERTmodelisnecessary. trainingstage. Wesetthebatchsizeas32andthelearningrate ofoptimizerAdamwith0.001. Additionally,toescapeoverfit- tingdata,thedropoutoperationratiohasbeensetto0.03. 5.2. PerformanceMetrics Weevaluateourproposedmethodvia4followingmetrics, including Accuracy, Precision, Recall, F1-Score. Since our workconductsexperimentsinmulti-classesclassificationtasks, thevalueofeachmetriciscomputedbasedona2Dconfusion matrixwhichincludesTruePositive(TP),TrueNegative(TN), FalsePositive(FP)andFalseNegative(FN). AccuracyistheratioofcorrectpredictionsTP, TNoverall predictions. 
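Stepping back to the architecture of Sections 4.3-4.5, equations (4)-(6) and (10)-(15) map directly onto standard Keras layers. The sketch below wires the opcode (BiLSTM) branch and the fusion head; the layer sizes, dropout ratio (0.03) and Adam learning rate (0.001) follow the text, while the opcode vocabulary size, the 66-dimensional BERT-branch width (chosen so the concatenated vector has the length 194 required by the Reshape in (11)), the loss function, and treating the BERT and GNN outputs as precomputed inputs are assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_OPCODE_LEN = 200      # maximum opcode sequence length (Section 5.3.2)
VOCAB_SIZE = 150          # hypothetical opcode vocabulary size

# --- Opcode (BiLSTM) branch, equations (4)-(6) ---
opcode_in = layers.Input(shape=(MAX_OPCODE_LEN,), name="opcodes")
x = layers.Embedding(VOCAB_SIZE, 200)(opcode_in)                  # 200-dim embedding
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
x = layers.Dropout(0.03)(x)              # dropout after the first BiLSTM layer
x = layers.Bidirectional(layers.LSTM(64))(x)                      # 64 units per the prose
lstm_output = layers.Dense(64, activation="relu")(x)              # equation (6)

# --- The other two branch outputs, treated here as precomputed inputs ---
bert_output = layers.Input(shape=(66,), name="bert_output")       # width assumed so 66+64+64=194
gnn_output = layers.Input(shape=(64,), name="gnn_output")         # equation (9)

# --- Fusion head, equations (10)-(15) ---
c = layers.Concatenate()([bert_output, lstm_output, gnn_output])  # (10)
c = layers.Reshape((194, 1))(c)                                   # (11)
c = layers.Conv1D(64, 3, activation="relu")(c)                    # (12)
c = layers.Flatten()(c)                                           # (13)
c = layers.Dense(32, activation="relu")(c)                        # (14)
y_hat = layers.Dense(3, activation="softmax")(c)                  # (15)

model = Model(inputs=[bert_output, opcode_in, gnn_output], outputs=y_hat)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",  # 3 classes: Arithmetic, Re-entrancy, Clean
              metrics=["accuracy"])
```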
Figure5:AnexampleofSmartContractPriortoProcessing Precision measures the proportion of TP over all samples classifiedaspositive. Moreover, removing comments from the source code also RecallisdefinedtheproportionofTPoverallpositivein- helps reduce the length of the input when fed into the model. stancesinatestingdataset. To further reduce the source code length, we also eliminate F1-ScoreistheHarmonicMeanofPrecisionandRecall. extra blank lines and unnecessary whitespace. Figure 5 pro- vides an example of an unprocessed smart contract from our 5.3. DatasetandPreprocessing dataset.Thiscontractcontainscommentsfollowingthe’//’syn- tax,blanklines,andexcessivewhitespacesthatdonotadhere Table1:DistributionofLabelsintheDataset toprogrammingstandards. Figure6representsthesmartcon- tractafterundergoingprocessing. VulnerabilityType Contracts Arithmetic 631 Re-entrancy 591 Non-Vulnerability 547 Inthisdataset,wecombinethreedatasets,includingSmart- bugs Curated [33, 34], SolidiFI-Benchmark [35], and Smart- bugs Wild [33, 34]. For the Smartbugs Wild dataset, we col- lectsmartcontractscontainingasinglevulnerability(eitheran Arithmetic vulnerability or a Reentrancy vulnerability). The Figure6:AnexampleofprocessedSmartContract identification of vulnerable smart contracts is confirmed by at leasttwovulnerabilitydetectiontoolscurrentlyavailable.Into- tal,ourdatasetincludes547Non-Vulnerability,631Arithmetic 5.3.2. OpcodeSmartContracts VulnerabilitiesofSmartContracts,and591ReentrancyVulner- Weproceedwithbytecodeextractionfromthesourcecode abilitiesofSmartContracts,asshowninTable1. |
of the smart contract, followed by opcode extraction through thebytecode. Theopcodeswithinthecontractarecategorized into 10 functional groups, totaling 135 opcodes, according to theEthereumYellowPaper[36]. However,wehavecondensed thembasedonTable2. 7Table2:Thesimplifiedopcodemethods SubstitutedOpcodes OriginalOpcodes DUP DUP1-DUP16 SWAP SWAP1-SWAP16 PUSH PUSH5-PUSH32 LOG LOG1-LOG4 During the preprocessing phase, unnecessary hexadecimal characterswereremovedfromtheopcodes.Thepurposeofthis preprocessing is to utilize the opcodes for vulnerability detec- tioninsmartcontractsusingtheBiLSTMmodel. Inadditiontoopcodepreprocessing,wealsoperformedother preprocessingstepstopreparethedatafortheBiLSTMmodel. Firstly, we tokenized the opcodes into sequences of integers. Subsequently, we applied padding to create opcode sequences ofthesamelength. Themaximumlengthofopcodesequences wassetto200,whichisthemaximumlengththattheBiLSTM modelcanhandle. Figure8:VisualizeGraphby.cgf.gvfile Afterthepaddingstep,weemployaWordEmbeddinglayer totransformtheencodedopcodesequencesintofixed-sizevec- tors,servingasinputsfortheBiLSTMmodel. Thisenablesthe To train the GNN model, we encode the nodes and edges BiLSTM model to better learn the representations of opcode oftheCFGintonumericalvectors. Oneapproachistouseem- sequences. beddingtechniquestorepresenttheseentitiesasvectors.Inthis Ingeneral,thepreprocessingstepsweperformedarecrucial case, we utilize the OpenAI API embedding to encode nodes inpreparingthedatafortheBiLSTMmodelandenhancingits and edges into vectors of length 1536. This could be a cus- performanceindetectingvulnerabilitiesinSmartContracts. tomizedapproachbasedonOpenAI’spre-traineddeeplearning models. OncethenodesandedgesoftheCFGareencodedinto 5.3.3. ControlFlowGraph vectors,weemploythemasinputsfortheGNNmodel. 5.4. ExperimentalScenarios To prove the efficiency of our proposed model and com- paredmodels, weconductedtrainingwithatotalof7models, categorizedintotwotypes:unimodaldeeplearningmodelsand multi-modaldeeplearningmodels. On the one hand, the unimodal deep learning models con- sisted of models within each branch of VulnSense. On the other hand, these multimodal deep learning models are pair- wisecombinationsofthreeunimodaldeeplearningmodelsand VulnSense,utilizinga2-wayinteraction. Specifically: • Unimodal: – BiLSTM – BERT – GNN Figure7:GraphExtractedfromBytecode • Multimodal: First,weextractbytecodefromthesmartcontract,thenex- – MultimodalBERT-BiLSTM(M1) tract the CFG through bytecode into .cfg.gv files as shown in – MultimodalBERT-GNN(M2) Figure 7. From this .cfg.gv file, a CFG of a Smart Contract – MultimodalBiLSTM-GNN(M3) throughbytecodecanberepresentedasshowninFigure8.The – VulnSense(asmentionedinSection4) nodesintheCFGtypicallyrepresentcodeblocksorstatesofthe contract,whiletheedgesrepresentcontrolflowconnectionsbe- Furthermore, to illustrate the fast convergence and stability of tweennodes. multimodalmethod,wetrainandvalidate7modelson3differ- entmocksoftrainingepochsincluding10,20and30epochs. 
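As a concrete illustration of the opcode preprocessing described in Section 5.3.2 and Table 2, the snippet below collapses the parameterized opcode variants into their base mnemonics and then tokenizes and pads the sequences to the 200-token length used by the BiLSTM branch; the example opcode list is hypothetical.

```python
import re
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

def simplify(opcodes):
    """Collapse numbered opcode variants into their base mnemonic (Table 2)."""
    simplified = []
    for op in opcodes:
        op = re.sub(r"^(DUP|SWAP|LOG)\d+$", r"\1", op)            # DUP1-16, SWAP1-16, LOG1-4
        op = re.sub(r"^PUSH([5-9]|[12]\d|3[0-2])$", "PUSH", op)   # PUSH5-PUSH32 -> PUSH
        simplified.append(op)
    return simplified

# Hypothetical opcode sequence for one contract; hex operands are assumed
# to have been stripped during extraction.
raw_opcodes = [["PUSH1", "PUSH1", "MSTORE", "DUP3", "SWAP2", "LOG2", "CALLVALUE"]]
sequences = [" ".join(simplify(seq)) for seq in raw_opcodes]

tokenizer = Tokenizer(filters="", lower=False)
tokenizer.fit_on_texts(sequences)
encoded = tokenizer.texts_to_sequences(sequences)
padded = pad_sequences(encoded, maxlen=200, padding="post")  # max length 200
print(padded.shape)  # (1, 200)
```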
8Table3:Theperformanceof7models Score Epoch BERT BiLSTM GNN M1 M2 M3 VulnSense E10 0.5875 0.7316 0.5960 0.7429 0.6468 0.7542 0.7796 Accuracy E20 0.5903 0.6949 0.5988 0.7796 0.6553 0.7768 0.7796 E30 0.6073 0.7146 0.6016 0.7796 0.6525 0.7683 0.7796 E10 0.5818 0.7540 0.4290 0.7749 0.6616 0.7790 0.7940 Precision E20 0.6000 0.7164 0.7209 0.7834 0.6800 0.7800 0.7922 E30 0.6000 0.7329 0.5784 0.7800 0.7000 0.7700 0.7800 E10 0.5876 0.7316 0.5960 0.7429 0.6469 0.7542 0.7797 Recall E20 0.5900 0.6949 0.5989 0.7797 0.6600 0.7800 0.7797 E30 0.6100 0.7147 0.6017 0.7700 0.6500 0.7700 0.7700 E10 0.5785 0.7360 0.4969 0.7509 0.6520 0.7602 0.7830 F1 E20 0.5700 0.6988 0.5032 0.7809 0.6600 0.7792 0.7800 E30 0.6000 0.7185 0.5107 0.7700 0.6500 0.7700 0.7750 5.5. ExperimentalResults the M1 and M3 models, which require 30 epochs. Besides, Theexperimentationprocessforthemodelswascarriedout throughout30trainingepochs,theM3,M1,BiLSTM,andM2 onthedatasetasdetailedinSection5.3. modelsexhibitedsimilarperformancetotheVulnSensemodel, yet they demonstrated some instability. On the one hand, the 5.5.1. Modelsperformanceevaluation VulnSensemodelmaintainsaconsistentperformancelevelwithin therangeof75-79%,ontheotherhand,theM3modelexperi- ThroughthevisualizationsinTable3, itcanbeintuitively enced a severe decline in both Accuracy and F1-Score values, answeredthattheabilitytodetectvulnerabilitiesinsmartcon- tracts using multi-modal deep learning models is more effec- decliningbyover20%bythe15th epoch,indicatingsignificant disturbanceinitsperformance. tivethanunimodaldeeplearningmodelsintheseexperiments. Fromthisobservation,thesefindingsindicatethatourpro- Specifically,whentesting3multimodalmodelsincludingM1, posed model, VulnSense, is more efficient in identifying vul- |
M3 and VulnSense on 3 mocks of training epochs, the results nerabilitiesinsmartcontractscomparedtotheseothermodels. indicate that the performance is always higher than 75.09% Furthermore,byharnessingtheadvantagesofmultimodalover with 4 metrics mentioned above. Meanwhile, the testing per- unimodal,VulnSensealsoexhibitedconsistentperformanceand formanceofM2and3unimodalmodelsincludingBERT,BiL- rapidconvergence. STM and GNN are lower than 75% with all 4 metrics. More- over, with the testing performance on all 3 mocks of training 5.5.2. ComparisonsofTime epochs, VulnSense model has achieved the highest F1-Score withmorethan77%andAccuracywithmorethan77.96%. Figure11illustratesthetrainingtimefor30epochsofeach In addition, Figure 9 provides a more detailed of the per- model. Concerning the training time of unimodal models, on formances of all 7 models at the last epoch training. It can the one hand, the training time for the GNN model is very be seen from Figure 9 that, among these multimodal models, short, at only 7.114 seconds, on the other hand BERT model VulnSense performs the best, having the accuracy of Arith- reaches significantly longer training time of 252.814 seconds. metic, Reentrancy, and Clean label of 84.44%, 64.08% and For the BiLSTM model, the training time is significantly 10 84.48%,respectively,followedbyM3,M1andM2model.Even timeslongerthanthatoftheGNNmodel. Furthermore, when thoughtheGNNmodel,whichisanunimodalmodel,managed comparing the multimodal models, the shortest training time to attain an accuracy rate of 85.19% for the Arithmetic label belongstotheM3model(themultimodalcombinationofBiL- and82.76%fortheReentrancylabel,itsperformanceinterms STM and GNN) at 81.567 seconds. Besides, M1, M2, and of the Clean label accuracy was merely 1.94%. Similarity in VulnSenseinvolvetheBERTmodel,resultinginrelativelylonger thecontextofunimodalmodels,BiLSTMandBERTbothhave trainingtimeswithover270secondsfor30epochs.Itisevident givedtheaccuracyofall3labelsrelativelylowlessthan80%. thattheunimodalmodelsignificantlyimpactsthetrainingtime Furthermore, the results shown in Figure 10 have demon- ofthemultimodalmodelitcontributesto. AlthoughVulnSense stratedthesuperiorconvergencespeedandstabilityofVulnSense takes more time compared to the 6 other models, it only re- modelcomparedtotheother6models. Indetail,throughtest- quires10epochstoconverge. Thisgreatlyreducesthetraining ing after 10 training epochs, VulnSense model has gained the timeofVulnSenseby66%comparedtotheother6models. highestperformancewithgreaterthan77.96%inall4metrics. Inaddition,Figure12illustratesthepredictiontimeonthe Although, VulnSense, M1 and M3 models give high perfor- same testing set for each model. It’s evident that these mul- mance after 30 training epochs, VulnSense model only needs timodal models M1, M2, and VulSense, which are incorpo- tobetrainedfor10epochstoachievebetterconvergencethan rated from BERT, as well as the unimodal BERT model, ex- 9(a)BERT (b)M1 (c)BiLSTM (d)M2 (e)GNN (f)M3 (g)VulnSense Figure9:Confusionmatricesatthe30thepoch,with(a),(c),(e)representingtheunimodalmodels,and(b),(d),(f),(g)representingthemultimodalmodels hibit extended testing durations, surpassing 5.7 seconds for a of blockchain-based systems. As the adoption of smart con- set of 354 samples. Meanwhile, the testing durations for the tractscontinuestogrow,thevulnerabilitiesassociatedwiththem GNN,BiLSTM,andM3modelsareremarkablybrief,approxi- pose considerable risks. 
Our proposed methodology not only mately0.2104,1.4702,and2.0056secondscorrespondingly. It addresses these vulnerabilities but also paves the way for fu- isnoticeablethatthepresenceoftheunimodalmodelshasadi- ture research in the realm of multimodal deep learning and its rectinfluenceonthepredictiontimeofthemultimodalmodels diversifiedapplications. in which the unimodal models are involved. In the context of Inclosing, VulnSensenotonlymarksasignificantstepto- the2mosteffectivemultimodalmodels,M3andVulSense,M3 wards securing Ethereum smart contracts but also serves as a modelgavetheshortesttestingtime,about2.0056seconds. On stepping stone for the development of advanced techniques in thecontrary,theVulSensemodelexhibitsthelengthiestpredic- blockchainsecurity. Asthelandscapeofcryptocurrenciesand tiontime,extendingtoabout7.4964seconds,whichisroughly blockchain evolves, our research remains poised to contribute fourtimesthatoftheM3model. WhiletheM3modeloutper- totheongoingquestforenhancedsecurityandreliabilityinde- formstheVulSensemodelintermsoftrainingandtestingdura- centralizedsystems. tion,theVulSensemodelsurpassestheM3modelinaccuracy. Nevertheless,inthecontextofdetectingvulnerabilityforsmart Acknowledgment contracts,increasingaccuracyismoreimportantthanreducing execution time. Consequently, the VulSense model decidedly ThisresearchwassupportedbyTheVNUHCM-University outperformstheM3model. ofInformationTechnology’sScientificResearchSupportFund. 6. Conclusion References Inconclusion,ourstudyintroducesapioneeringapproach, [1] D.Ghimiray(Feb2023).[link]. VulnSense, which harnesses the potency of multimodal deep URLhttps://www.avast.com/c-silk-road-dark-web-market |
learning, incorporatinggraphneuralnetworksandnaturallan- [2] W.Zou,D.Lo,P.S.Kochhar,X.-B.D.Le,X.Xia,Y.Feng,Z.Chen, B.Xu,Smartcontractdevelopment:Challengesandopportunities,IEEE guage processing, to effectively detect vulnerabilities within TransactionsonSoftwareEngineering47(10)(2021)2084–2106. Ethereum smart contracts. By synergistically leveraging the [3] I.Mehar,C.Shier,A.Giambattista,E.Gong,G.Fletcher,R.Sanayhie, strengths of diverse features and cutting-edge techniques, our H.Kim,M.Laskowski,Understandingarevolutionaryandflawedgrand experimentinblockchain: Thedaoattack,JournalofCasesonInforma- frameworksurpassesthelimitationsoftraditionalsingle-modal tionTechnology21(2019)19–32.doi:10.4018/JCIT.2019010102. methods.Theresultsofcomprehensiveexperimentsunderscore [4] S.S.Kushwaha,S.Joshi,D.Singh,M.Kaur,H.-N.Lee,Ethereumsmart the superiority of our approach in terms of accuracy and effi- contract analysis tools: A systematic review, IEEE Access 10 (2022) ciency, outperforming conventional deep learning techniques. 57037–57062.doi:10.1109/ACCESS.2022.3169902. This affirms the potential and applicability of our approach in [5] S.Badruddoja,R.Dantu,Y.He,K.Upadhyay,M.Thompson,Making smart contracts smarter, 2021, pp. 1–3. doi:10.1109/ICBC51069. bolstering Ethereum smart contract security. The significance 2021.9461148. of this research extends beyond its immediate applications. It [6] Nveloso,Nveloso/conkas: Ethereumvirtualmachine(evm)bytecodeor contributestothebroaderdiscourseonenhancingtheintegrity soliditysmartcontractstaticanalysistoolbasedonsymbolicexecution. URLhttps://github.com/nveloso/conkas 10(a)Accuracy (b)Precision (c)Recall (d)F1-Score Figure10:Theperformanceof7modelsin3differentmocksoftrainingepochs. Figure11:Comparisonchartoftrainingtimefor30epochsamongmodels Figure12:Comparisonchartofpredictiontimeonthetestsetforeachmodel [7] Consensys,Consensys/mythril:Securityanalysistoolforevmbytecode. [12] F. Jiang, K. Chao, J. Xiao, Q. Liu, K. Gu, J. Wu, Y. Cao, Enhanc- supports smart contracts built for ethereum, hedera, quorum, vechain, ingsmart-contractsecuritythroughmachinelearning: Asurveyofap- roostock,tronandotherevm-compatibleblockchains. proaches and techniques, Electronics 12 (9) (2023). doi:10.3390/ URLhttps://github.com/ConsenSys/mythril electronics12092046. [8] P. Tsankov, A. Dan, D. Drachsler-Cohen, A. Gervais, F. Bu¨nzli, URLhttps://www.mdpi.com/2079-9292/12/9/2046 M.Vechev, Securify: Practicalsecurityanalysisofsmartcontracts, in: [13] M. Khodadadi, J. Tahmoresnezhad, Hymo: Vulnerability detection in Proceedings of the 2018 ACM SIGSAC Conference on Computer and smartcontractsusinganovelmulti-modalhybridmodel(2023). arXiv: CommunicationsSecurity,CCS’18,AssociationforComputingMachin- 2304.13103. ery,NewYork,NY,USA,2018,p.67–82. [14] J. Chen, X. Xia, D. Lo, J. Grundy, X. Luo, T. Chen, DefectChecker: URLhttps://doi.org/10.1145/3243734.3243780 AutomatedsmartcontractdefectdetectionbyanalyzingEVMbytecode, [9] O. Lutz, H. Chen, H. Fereidooni, C. Sendner, A. Dmitrienko, A. R. IEEETransactionsonSoftwareEngineering48(7)(2022)2189–2207. Sadeghi, F. Koushanfar, Escort: Ethereum smart contracts vulnerabil- doi:10.1109/tse.2021.3054928. ity detection using deep neural network and transfer learning (2021). [15] [link]. arXiv:2103.12607. URL https://docs.soliditylang.org/en/develop/index. 
2309.10644 Robin: A Novel Method to Produce Robust Interpreters for Deep Learning-Based Code Classifiers Zhen Li1†, Ruqian Zhang1†, Deqing Zou1†∗, Ning Wang1†, Yating Li1†, Shouhuai Xu2, Chen Chen3, and Hai Jin4† 1School of Cyber Science and Engineering, Huazhong University of Science and Technology, Wuhan, 430074, China 2Department of Computer Science, University of Colorado Colorado Springs, USA 3Center for Research in Computer Vision, University of Central Florida, USA 4School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China {zh li, ruqianzhang, deqingzou, wangn, leeyating}@hust.edu.cn sxu@uccs.edu, chen.chen@crcv.ucf.edu, hjin@hust.edu.cn Abstract—Deep learning has been widely used in source code networks are often considered black-boxes which means they classification tasks, such as code classification according to their cannot provide explanations for why a particular prediction functionalities, code authorship attribution, and vulnerability is made. The lack of interpretability poses as a big hurdle to detection. Unfortunately, the black-box nature of deep learning the adoption of these models in the real world (particularly makes it hard to interpret and understand why a classifier (i.e., classification model) makes a particular prediction on a in high-security scenarios), because practitioners do not know given example. This lack of interpretability (or explainability) when they should trust the predictions made by these models might have hindered their adoption by practitioners because it and when they should not. is not clear when they should or should not trust a classifier’s The importance of addressing the aforementioned lack of prediction. The lack of interpretability has motivated a number interpretability is well recognized by the research community ofstudiesinrecentyears.However,existingmethodsareneither robustnorabletocopewithout-of-distributionexamples.Inthis [11]–[13],asevidencedbyveryrecentstudies.Existingstudies paper,weproposeanovelmethodtoproduceRobustinterpreters on addressing the interpretability of source code classifiers for a given deep learning-based code classifier; the method is (i.e., classification models) can be classified into two ap- dubbed Robin. The key idea behind Robin is a novel hybrid proaches: ante-hoc vs. post-hoc. The ante-hoc approach aims structure combining an interpreter and two approximators, to provide built-in interpretability by leveraging the attention while leveraging the ideas of adversarial training and data augmentation. Experimental results show that on average the weight matrix associated with a neural network in question interpreter produced by Robin achieves a 6.11% higher fidelity [14], [15], which in principle can be applied to explain the (evaluated on the classifier), 67.22% higher fidelity (evaluated prediction on any example. The post-hoc approach aims to ontheapproximator),and15.87xhigherrobustnessthanthatof interpret the decision-making basis of a trained model. In the the three existing interpreters we evaluated. Moreover, the in- context of sourcecode classification, this approachmainly fo- terpreter is 47.31% less affected by out-of-distribution examples than that of LEMNA. 
cusesonlocalinterpretation,whichaimstoexplainpredictions Index Terms—Explainable AI, deep learning, code classifica- for individual examples by leveraging: (i) perturbation-based tion, robustness feature saliency [16], [17], which computes the importance scoresoffeaturesbyperturbingfeaturesincodeexamplesand I. INTRODUCTION then observing changes in prediction scores; or (ii) program reduction[18],[19],whichusesthedeltadebuggingtechnique In the past few years there has been an emerging field [20]to reducea programtoa minimalsetof statementswhile focusing on leveraging deep learning or neural networks to preserving the classifier’s prediction. study various kinds of source code classification problems, The ante-hoc approach must be incorporated into the clas- suchasclassifyingcodebasedontheirfunctionalities[1],[2], sifier training phase, meaning that it cannot help existing or codeauthorshipattribution[3]–[7],andvulnerabilitydetection given classifiers, for which we can only design interpreters [8]–[10]. While the accuracy of deep neural networks in to provide interpretability in a retrospective manner. In this this field may be satisfactory, the lack of interpretability, or paper we focus on how to retrospectively equip given code explainability, remains a significant challenge. Deep neural classifiers with interpretability, which is the focus of the post- †NationalEngineeringResearchCenterforBigDataTechnologyandSys- hocapproach.However,existingpost-hocmethodssufferfrom tem,ServicesComputingTechnologyandSystemLab,HubeiKeyLaboratory the following two problems. (i) The first problem is incurred of Distributed System Security, Hubei Engineering Research Center on Big by the possible out-of-distribution of a perturbed example DataSecurity,ClusterandGridComputingLab ∗Correspondingauthor in the perturbation-based feature saliency method. This is 3202 peS 91 ]ES.sc[ 1v44601.9032:viXrainevitablebecausethemethodusesperturbationstoassessfea- with the same functionality as the original example). This ture importance, by identifying the feature(s) whose absence similarity allows us to compute a loss in interpretability and causes a significant decrease in prediction accuracy. When leverage thisloss to trainthe interpreter. (ii)Corresponding to a legitimate example is perturbed into an out-of-distribution mixup, we generate a set of virtual examples by linearly in- input, it is unknown whether the drop in accuracy is caused terpolating the original examples and their perturbed versions |
by the absence of certain feature(s) or because of the out- in the feature space; these examples are virtual because they of-distribution of the perturbed example [21], [22]. (ii) The are obtained in the feature space (rather than example space) second problem is the lack of robustness, which is inherent to and thus may not correspond to any legitimate code example thelocalinterpretationapproachandthuscommontoboththe (e.g., a virtual example may not correspond to a legitimate perturbation-based feature saliency method and the program program). Different from traditional data augmentation, we reduction method. This is because the local interpretation train the interpreter and two approximators jointly rather than approach optimizes the interpretation of each example inde- solely training the interpreter on virtual examples due to the pendentofothers,meaningthatoverfittingthenoiseassociated lack of ground truth of the virtual examples (i.e., what the k withindividualexamplesisverylikely[23].Asaconsequence, important features are). aninterpretationwouldchangesignificantlyevenbyincurring Third, we empirically evaluate Robin’s effectiveness and aslightmodificationtoanexample,andthiskindofsensitivity compare it with the known post-hoc methods in terms of fi- couldbeexploitedbyattackerstoruintheinterpretability[24]. delity,robustness,andeffectiveness.Experimentalresultsshow The weaknesses of the existing approaches motivate us to that on average the interpreter produced by Robin achieves investigate better methods to interpret the predictions of deep a 6.11% higher fidelity (evaluated on the classifier), 67.22% learning-based code classifiers. higher fidelity (evaluated on the approximator), and 15.87x higher robustness than that of the three existing interpreters Our Contributions. This paper introduces Robin, a novel weevaluated.Moreover,theinterpreteris47.31%lessaffected method for producing high-fidelity and Robust interpreters in by out-of-distribution examples than that of LEMNA [25]. the post-hoc approach with local interpretation. Specifically, We have made the source code of Robin publicly available this paper makes three contributions. at https://github.com/CGCL-codes/Robin. First, we address the aforementioned out-of-distribution Paper Organization. Section II presents a motivating in- problem by introducing a hybrid interpreter-approximator stance. Section III describes the design of Robin. Section IV structure. More specifically, we design (i) an interpreter to presents our experiments and results. Section V discusses the identify the features that are important to make accurate limitations of the present study. Section VI reviews related predictions, and (ii) two approximators such that one is used prior studies. Section VII concludes the paper. to make predictions based on these important features and the other is used to make predictions based on the other II. AMOTIVATINGINSTANCE features (than the important ones). These approximators are To illustrate the aforementioned problem inherent to the reminiscent of fine-tuning a classifier with perturbed training local interpretation approach, we consider a specific instance examples while removing some features. As a result, a per- of code classification in the context of code functionality turbedtestexampleisnolongeranout-of-distributionexample classification via TBCNN [2]. 
Although TBCNN offers code totheapproximators,meaningthatthereducedaccuracyofthe functionality classification capabilities, it does not offer any classifier can be attributed to the removal of features (rather interpretability on its predictions. To make its predictions than the out-of-distribution examples). To assess the impor- interpretable, an interpreter is required. We adapt the method tance of the features extracted by the interpreter, we use the proposed in [16], which was originally used to interpret approximators (rather than the given classifier) to mitigate the software vulnerability detectors, to code functionality classifi- side-effectthatmaybecausedbyout-of-distributionexamples. cation because there are currently no existing interpreters for Second, we address the lack of interpretation robustness thispurpose(tothebestofourknowledge).Thisadaptationis by leveraging the ideas of adversarial training and mixup feasible because the method involves deleting features in the to augment the training set. More specifically, we generate a feature space and observing the impact on predictions, which set of perturbed examples for a training example (dubbed the is equally applicable to code functionality classification. original example) as follows. (i) Corresponding to adversarial TheoriginalcodeexampleinFigure1(a)isusedtocompare training but different from traditional adversarial training in two strings for equality. We create a perturbed example by other contexts, the ground-truth labels (i.e., what the k impor- changing the variable names, as illustrated in Figure 1 (b). tantfeaturesare)cannotbeobtained,makingitdifficulttoadd Despite the change in variable names, the perturbed example perturbed examples to the training set for adversarial training. maintainsthesamefunctionalityandsemanticsastheoriginal We overcome this by measuring the similarity between the example. Additionally, both the original and perturbed exam- interpretation of the prediction on the original example and ples are classified into the same category by the classifier. the interpretation of the prediction on the perturbed example, Upon applying an interpreter, adapted from [16], to TBCNN, which is obtained in the example space rather than feature the five most important features of the original and perturbed |
space (i.e., the perturbed example is still a legitimate program examples are identified, and highlighted in Fig. 1(a) and 1(b)int main() { char a[100],b[100]i;nt main() { int main(){ fidelity,thetwoapproximatorsaretrainedsimultaneouslysuch int m,n,t=1,r[100]={0c}h,ia,jr; a[100],b[100]; char S[100],shuchuin[1t0 m0]a;in(){ thattheimportantfeaturescontainthemostusefulinformation scanf("%s%s",a,b); int m,n,t=1,r[100]={0},i,j; int Q,cc3,cc2=1,L[100c]h=a{0r }S,[d1,s0o0r]t,;shuchu[100]; m=strlen(a); scanf("%s%s",a,b); scanf("%s%s",S,shuchinut) ;Q,cc3,cc2=1,L[100]={0}f,od,rsormt;aking predictions while the other features contain the n=strlen(b); m=strlen(a); Q=strlen(S); scanf("%s%s",S,shuchu); least useful information for making predictions. if(m==n){ n=strlen(b); cc3=strlen(shuchu); Q=strlen(S); for(i=0;i<=m-1;i++i)f(m==n){ if(Q==cc3){ cc3=strlen(shuchu); To make the interpreter robust, we leverage two ideas. The for(j=0;j<=n-1;j++)for(i=0;i<=m-1;i++) for(d=0;d<=Q-1;d+i+f()Q==cc3){ first idea is to use adversarial training [27], [28] where an if(b[j]==a[i]&&r[j]=fo=0r()j ={0;j<=n-1;j++) for(sort=0;sort<=ccfo3-r1(d;s=o0r;td+<+=)Q-1;d++) r[j]=1; if(b[j]==a[i]&&r[j]==0) { if(shuchu[sort]==S[fdo]r&(s&or Lt=[s0o;rsto]r=t=<0=)c {c3-o1;rsiogrti+n+)alexampleanditsperturbedexamplewillhavethesame break; r[j]=1; L[sort]=1; if(shuchu[sort]==S[pd]r&e&d Li[csotirot]=n=.0)I {n sharp contrast to traditional adversarial training } break; break; L[sort]=1; for(i=0;i<=n-1;i++) } } break; in other contexts where ground-truth can be obtained, it is if(r[i]==0) {t=0; brefaokr(;i}=0;i<=n-1;i++) for(d=0;d<=cc3-1;d++}) difficulttoobtaintheground-truthlabelsinthissettingbecause } if(r[i]==0) {t=0; break;} if(L[d]==0) {cc2=0; fborre(da=k;0};d<=cc3-1;d++) else t=0; } } if(L[d]==0) {cc2=0; brewake;}donotknowwhichfeaturesareindeedthemostimportant if(t==1) printf("YES\ne"l)s;e t=0; else cc2=0; } onesevenforthetrainingexamples.Thatis,wecannotsimply else printf("NO\n");if(t==1) printf("YES\n"); if(cc2==1) printf("YESe\lnse") c;c2=0; return 0; else printf("NO\n"); else printf("NO\n");if(cc2==1) printf("YES\n");use traditional adversarial training method to add perturbed } return 0; return 0; else printf("NO\n"); examples to training set because the “labels” (i.e., the k im- } } return 0; (a) Originalexample (b) Perturbe}dexample portant features) of original examples and perturbed examples Fig. 1. An original example and its perturbed example (modified code is cannot be obtained. We overcome this by (i) generating a set highlighted in blue color and italics), where red boxes highlight the 5 most of perturbed examples via code transformation such that the importantfeatures. prediction on the perturbed example remains the same, and respectively. Notably, only one important feature is common (ii) adding a constraint term to the loss function to make between the two examples, revealing that the interpreter lacks the interpretations of the original example and the perturbed robustness.Thislackofrobustnessoftheinterpretermaycause example as similar to each other as possible. users to question the reliability of the classifier’s predictions The second idea is to leverage mixup [29] to augment the due to the erroneous interpretation. trainingset.Insharpcontrasttotraditionaldataaugmentation, we cannot train the interpreter from the augmented dataset III. DESIGNOFROBIN for the lack of ground-truth (i.e., the important features of Notations. 
A program code example, denoted by x i, can an original example and its perturbed examples can not be represented as a n-dimensional feature vector x i = be obtained). We overcome this issue by (i) using code (x i,1,x i,2,...,x i,n),wherex i,j (1≤j ≤n)isthejthfeature transformation to generate a perturbed example such that its ofx i.Acodeclassifier(i.e.,classificationmodel)M islearned predictionremainsthesameasthatoftheoriginalexample,(ii) fromatrainingset,denotedbyX,whereeachexamplex i ∈X mixing the original examples with the perturbed examples to is associated with a label y i. Denote by M(x i) the prediction generatevirtualexamples,and(iii)optimizingthepreliminary of classifier M on a example x i. interpreter by training the interpreter and two approximators |
Our goal is to propose a novel method to produce an jointly on virtual examples. Note that the difference between interpreter, denoted by E, for any given code classifier M theaforementionedadversarialexamplesandvirtualexamples and test set U such that for test example u i ∈U, E identifies is that the former are obtained by perturbation in the example k important features to explain why M makes a particular space but the latter is obtained in the feature space. prediction on u , where k ≪ n. It is intuitive that the k im- i Design Overview. Fig. 2 highlights the training process of portantfeaturesofexampleu shouldbelargely,ifnotexactly, i the same as the k important features of u′ which is perturbed Robin,whichproducesanoptimizedinterpreterinthreesteps. i fromu i.DenotebyE(u i)=(u i,α1,...,u i,αk)thek important • Step I: Generating perturbed examples. This step gen- features identified by E, where {α ,...,α }⊂{1,...,n}. erates perturbed examples from a training example by 1 k conducting semantics-preserving code transformations such A. Basic Idea and Design Overview that the perturbed example has the same prediction as that Basic Idea. In terms of the out-of-distribution problem asso- of the original example. ciated with existing interpretation methods, we observe that • Step II: Generating a preliminary interpreter. Given a the absence of perturbed examples in the training set makes classifier for which we want to equip with interpretability, a classifier’s prediction accuracy with respect to the perturbed thisstepleveragestheperturbedexamplesgeneratedinStep examples affected by the out-of-distribution examples. Our I to train two approximators and an interpreter Siamese idea to mitigate this problem is to fine-tune a classifier for networkinaniterativefashion.TheinterpreterSiamesenet- perturbedexamplesbyusingahybridinterpreter-approximator work identifies the important features of original examples structure [26] such that (i) one interpreter is for identifying andthatoftheirperturbedexamples,andthencomputesthe the important features for making accurate prediction, (ii) one difference between these two sets. approximator is for using the important features (identified • Step III: Optimizing the preliminary interpreter. This by the interpreter) to making predictions, and (iii) another step optimizes the preliminary interpreter generated in Step approximatorisforusingtheotherfeatures(thantheimportant II by using mixup [29] to augment the training set and up- features) for making predictions. To improve the interpreter’s datethepreliminaryinterpreter’sparameters.TheoptimizedInput Step I: Generating perturbed examples Output Filtering out Identifying Generating Code examples coding style candidate perturbed perturbed examples Perturbed whose prediction in training set X attributes examples labels change examples Step II: Generating a preliminary interpreter Code classification Training the interpreter A preliminary model to be Training two approximators Siamese network interpreter explained Step III: Optimizing the preliminary interpreter Generating virtual Updating the interpreter’s An optimized examples parameters interpreter Fig. 2. Overview of the training process of Robin which produces an optimized interpreter in three steps: generating perturbed examples, generating a preliminaryinterpreter,andoptimizingthepreliminaryinterpreter. 
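For readers who prefer code to prose, the following Python skeleton sketches the control flow of the three steps in Fig. 2. It is only a structural sketch under our own naming assumptions: the helper functions (generate_perturbed, train_approximators, train_interpreter_siamese, optimize_with_mixup) and the model objects are hypothetical stand-ins, not the authors' released implementation, and their bodies are stubbed so that only the overall pipeline is shown.

# Structural sketch of Robin's training pipeline (all helpers are hypothetical stubs).

def generate_perturbed(x, classifier):
    """Step I: semantics-preserving code transformations plus a label-preservation filter."""
    return []  # stub

def train_approximators(interpreter, approx_s, approx_u, X):
    """Step II.1: update A_s and A_u to minimize L_s + L_u."""
    pass  # stub

def train_interpreter_siamese(interpreter, approx_s, approx_u, X, perturbed):
    """Step II.2: update the interpreter to minimize L_s and L_diff while maximizing L_u."""
    pass  # stub

def optimize_with_mixup(interpreter, approx_s, approx_u, X, perturbed):
    """Step III: build virtual examples via mixup and update the interpreter jointly."""
    pass  # stub

def train_robin(X, classifier, interpreter, approx_s, approx_u, iters=10):
    perturbed = {i: generate_perturbed(x, classifier) for i, x in enumerate(X)}   # Step I
    for _ in range(iters):                                                        # Step II
        train_approximators(interpreter, approx_s, approx_u, X)
        train_interpreter_siamese(interpreter, approx_s, approx_u, X, perturbed)
    optimize_with_mixup(interpreter, approx_s, approx_u, X, perturbed)            # Step III
    return interpreter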
Fig. 3. A code example showing generation of perturbed examples (selected coding style attributes and modified code are highlighted in red): (a) input; (b) Step I.1: identifying all coding style attributes; (c) Step I.2: generating candidate perturbed examples; (d) Step I.3: filtering out perturbed examples whose prediction labels change; (e) output.
interpreter identifies important features of a test example. x ,x ....x , where x (1 ≤ j ≤ m) denotes the i,+1 i,+2 i,+m i,+j jth perturbed example generated by the semantic-equivalent B. Generating Perturbed Examples code transformation of code example x i. The labels of the m perturbed examples preserve the original example x ’s label i Thisstephasthreesubsteps.First,foreachtrainingexample y owing to the semantic-equivalent code transformations. As x ∈ X, we identify its coding style attributes related to the i i an instance, Fig. 3(c) shows a candidate perturbed example example’slayout,lexis,syntax,andsemantics(e.g.,asdefined generated by transforming randomly selected coding style in[30]).Lett denotethenumberofcodingstyleattributesfor i attributes, which are highlighted in red in Fig. 3(b). Third, x ∈X.Fig.3(a)showsatrainingexamplewherethefirstline i we filter out perturbed examples whose prediction labels uses a global declaration but can be transformed such that no are different from the prediction labels of the corresponding global declaration is used; Fig. 3(b) describes its coding style original examples in X. The reason is that if the prediction attributes. Second, we randomly select θ (θ < t ) coding i i i labels of the perturbed examples change, the robustness of styleattributes,repeatthisprocessformtimes,andtransform the interpreter cannot be judged by the difference of the the value of each of these coding style attributes to any one interpretationbetweentheperturbedexamplesandtheoriginal of the other semantically-equivalent coding style attributes. examples. As an instance in Fig. 3(d), the prediction label Consequently, we obtain m candidate perturbed examplesStep II: Generating a preliminary interpreter Input Interpreter Siamese network Approximators Important Interpreter features Approximator Classification loss 𝐿 !for Code examples E 𝐴 ! original examples in training set X Weights Step II.1: Training sharing Non-important two approximators while attempting Inter Ep ’reter features Appro 𝐴x "imator Clas os ri if gic ina ati lo en x alo mss p l𝐿 e" sfor to minim 𝐿ize 𝐿 !and " Discrepancy loss 𝐿 #$%%of Code classification two interpreters Output model to be Apreliminary explained Copy Copy Iterations interpreter Interpreter Approximators Siamese network Important Interpreter features Approximator Classification loss 𝐿 !for E 𝐴 ! original examples Step II.2: Training Weights the interpreter sharing Non-important Siamese network Interpreter features Approximator Classification loss 𝐿 "for while attempting to Perturbed E’ 𝐴 " original examples minimize 𝐿 !and examples 𝐿 #$%%andmaximize 𝐿 Discrepancy loss 𝐿 #$%%of " two Interpreters Fig.4. OverviewofStepII(generatingapreliminaryinterpreter),involvingtrainingtwoapproximatorsandtrainingtheinterpreterSiamesenetworkiteratively. of the code example in Fig. 3(a) and the prediction label should be as high as possible. On the other hand, non- of the candidate perturbed example are the same, so the importantfeaturescontainlessinformationimportantforcode candidate perturbed example is a perturbed example that can classification, so the accuracy of the approximator using only be used for robustness enhancement. Finally, we obtain the non-important features as input should be as low as possible. set of perturbed examples for robustness enhancement of the (ii) To achieve high robustness, we introduce the interpreter interpreter. 
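As an illustration of Step I, the following self-contained Python sketch produces m candidate perturbations by rewriting theta randomly chosen coding-style attributes and keeps only the candidates whose predicted label matches the original prediction. The attribute_transforms list and the toy classifier are assumptions introduced purely for illustration; real semantics-preserving transformations (e.g., renaming variables or switching loop structures) are considerably more involved than the string rewrites used here.

import random

def generate_perturbed_examples(code, classifier_predict, attribute_transforms,
                                theta=4, m=8, seed=0):
    """Step I sketch: apply theta randomly selected coding-style rewrites m times,
    then filter out candidates whose predicted label changes."""
    rng = random.Random(seed)
    y_orig = classifier_predict(code)
    kept = []
    for _ in range(m):
        candidate = code
        chosen = rng.sample(attribute_transforms,
                            k=min(theta, len(attribute_transforms)))
        for transform in chosen:
            candidate = transform(candidate)
        if classifier_predict(candidate) == y_orig:   # label-preservation filter
            kept.append(candidate)
    return kept

# Toy usage with stand-in components (illustrative only, not truly semantics-preserving):
transforms = [lambda s: s.replace("while", "for"),
              lambda s: s.replace("a", "var_a")]
dummy_predict = lambda code: 0   # stand-in classifier
print(generate_perturbed_examples("while (a < 10) a++;", dummy_predict, transforms, theta=1))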
Siamese network with two interpreters that have the same neural network structure and share weights, using the original C. Generating a Preliminary Interpreter code examples and perturbed examples as input respectively. Foreachoriginalexampleanditscorrespondingperturbedex- An ideal interpreter is simultaneously achieving high fi- amples,theSiamesenetworkcalculatesthesimilaritydistance delity and high robustness. (i) High fidelity indicates that betweentheimportantfeaturesoftheoriginalexampleandthe the important features identified by interpreter E contain as important features of the perturbed examples identified by the much information as possible that is most useful for code two interpreters, and adds the similarity distance to the loss classification, and the remaining non-important features con- value to improve the interpreters’ robustness during training. tain as little information as possible that is useful for code classification. (ii) High robustness indicates that the important Fig. 4 shows the structure of the neural network involving featuresidentifiedbyinterpreterE toexplainwhyM predicts an interpreter Siamese network and two approximators. The x i as the label y i should not change dramatically for small interpreter Siamese network involves two interpreters which perturbed examples which are predicted as label y i. Robin have the same neural network structure and share weights. achieves this by first generating a preliminary interpreter in Their neural networkstructure depends on the structure of the Step II and then optimizing the preliminary interpreter further code classifier to be explained. We divide the code classifier in Step III. to be explained into two parts. One part is to extract the The purpose of Step II is to generate a preliminary in- featuresfromtheinputcodeexamplesthroughneuralnetwork terpreter by training two approximators and the interpreter toobtainthevectorrepresentationofthecodeexamples,which Siamese network iteratively. The basic idea is as follows: (i) is equivalent to encoder, and this part usually uses Batch To achieve high fidelity, we introduce two approximators that Normalization, Embedding, LSTM, Convolutional layer, etc. |
havethesameneuralnetworkstructureforcodeclassification, The other part maps the vector representation to the output using the identified important features and non-important fea- vector. When generating the structure of the interpreter, the turesasinputrespectively.Sinceimportantfeaturescontainthe first part of the code classifier is kept and the latter part most useful information for code classification, the accuracy is modified to a fully connected layer and a softmax layer, of the approximator using only important features as input which maps the learned representation of code examples tothe output space, and the output is of the same length as the features for the original examples and for the perturbed number of features, indicating whether each feature is labeled examples, the smaller the Jaccard distance, and the smaller asimportantornot.Thesetwointerpretersareusedtoidentify the corresponding difference value L . diff the important features of the code examples in training set X During the training process, Step II.1 and Step II.2 are and the perturbed examples generated in Step I, respectively. iterated until both the interpreters and the approximators The two approximators have the same neural network converge. structure and are used to predict labels using important fea- D. Optimizing the Preliminary Interpreter tures and non-important features, respectively. They have the identical neural network architecture as the code classifier The purpose of this step is to optimize the preliminary to be interpreted. However, instead of the code example interpretergeneratedinStepIIinbothfidelityandrobustness. as input, the interpreter provides the approximator with the The basic idea is to use mixup [29] for data augmentation important or non-important features identified. As a result, to optimize the interpreter. There are two substeps. First, the approximators can be seen as fine-tuned versions of the we generate virtual examples. For each code example x i code classifier, trained on the datasets of important and non- in training set X, x is a randomly selected perturbed i′,+j important features. example of x , where x is randomly selected from X, and i′ i′ Fig. 4 also shows the training process to generate a prelim- may or may not be identical to x . A virtual example is i inary interpreter, involving the following two substeps. generated by mixing code examples and their corresponding StepII.1:Trainingtwo approximatorswhileattemptingto labels.Specifically,thevirtualexamplex isgeneratedby i,mix minimize L and L . When training the approximator, only linear interpolation between the original example x and the s u i the model parameters of the approximator are updated. The perturbed example x , and the label y of x is i′,+j i,mix i,mix training goal is to minimize the loss of both approximators also generated by linear interpolation between the label y of i A and A , which is the sum of cross-entropies loss of A original example x and the label y of perturbed example s u s i i′,+j and A : x , shown as follows: u i′,+j min (L +L ), (1) s u As,Au x i,mix =λ ix i+(1−λ i)x i′,+j (4) whereL isthecross-entropylossofapproximatorA andL y =λ y +(1−λ )y s s u i,mix i i i i′,+j is the cross-entropy loss of approximator A . The loss of the u where the interpolation coefficients λ is sampled from the β approximatorindicatestheconsistencybetweentheprediction i distribution. Second, we update the interpreter’s parameters labels and the labels. based on the generated virtual examples. 
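The mixup step of Eq. (4) can be written compactly. The sketch below interpolates an original example and a perturbed example in the feature space, with the coefficient lambda drawn from a Beta distribution; the choice alpha = 0.2 is an assumption made here for illustration, since the paper states only that lambda follows a Beta distribution.

import numpy as np

def mixup_virtual_example(x_i, y_i, x_pert, y_pert, alpha=0.2, rng=None):
    """Step III sketch (Eq. (4)): x_mix = lam*x_i + (1-lam)*x_pert and
    y_mix = lam*y_i + (1-lam)*y_pert, with lam ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x_i + (1.0 - lam) * x_pert
    y_mix = lam * y_i + (1.0 - lam) * y_pert
    return x_mix, y_mix

# Toy usage: 4-dimensional feature vectors and 3-class one-hot labels.
x_orig = np.array([0.2, 0.9, 0.1, 0.4]); y_orig = np.array([1.0, 0.0, 0.0])
x_pert = np.array([0.3, 0.8, 0.2, 0.4]); y_pert = np.array([1.0, 0.0, 0.0])
print(mixup_virtual_example(x_orig, y_orig, x_pert, y_pert))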
Since the output Step II.2: Training the interpreter Siamese network while of the interpreter is the important features in code examples attempting to minimize L and L , and maximize L . s diff u ratherthantheclassificationlabels,itisimpossibletotrainthe WhentrainingtheinterpreterSiamesenetwork,onlythemodel interpreter individually for enhancement. Therefore, we use parameters of the interpreter are updated. The training goal is approximatorsforjointoptimizationwiththeinterpreterE.In to minimize the loss of A and the discrepancy of the outputs s this case, the input of the overall model are code examples between two interpreters E and E′, and maximize the loss of and the output are the labels of code examples, which can A : u be directly trained and optimized using the generated vir- min(L −L +L ), (2) E s u diff tual examples. In the optimization process, the interpreter’s where L is the discrepancy of the outputs between two parameters are updated while preserving the approximators’ diff interpreters E and E′. The interpreter is trained so that (i) the parameters unchanged. loss of prediction using important features is minimized, (ii) IV. EXPERIMENTSANDRESULTS the loss of prediction using non-important features is maxi- mized, and (iii) the discrepancy of the outputs between two A. Evaluation Metrics and Research Questions interpreters is minimized to improve the robustness of the in- EvaluationMetrics.Weevaluateinterpretersviatheirfidelity, terpreter.ThedifferencevalueL diff intheinterpreterSiamese robustness against perturbations, and effectiveness in coping networkrepresentsthedistancebetweentheimportantfeatures with out-of-distribution examples. identifiedbytheinterpreterfortheoriginalexamplesandthose For quantifying fidelity, we adopt the metrics defined in for the perturbed examples. We use Jaccard distance [31] to [26],[32].ConsideracodeclassifierM trainedfromatraining measure the distance as follow: set X, an interpreter E, and a test set U. Denote by E(u ) i L =1−(cid:88) |E(x i)∩E(x i,+j)| (3) thesetofimportantfeaturesidentifiedbyinterpreterE fortest |
diff N ·m·|E(x i)∪E(x i,+j)| example u i ∈ U. We train an approximator A s in the same i,j fashion as how M is trained except that we only consider where N is the number of original code examples in the the important features, namely ∪ E(u ). Let M(u ) and ui∈U i i training set X, and m is the number of perturbed examples A (u ) respectively denote the prediction of classifier M s i corresponding to each original example. The more robust the and approximator A on example u . Then, interpreter E’s s i interpreter is, the higher the similarity between the important fidelity is defined as a pair (FS-M∈ [0,1], FS-A∈ [0,1]),where FS-M= |{ui∈U:M(ui)=M(E(ui))}| is the fraction of test attribution. In our experiment, we use a dataset from |U| examples that have the same predictions by M using all the Google Code Jam (GCJ) [34], [35], involving 1,632 features and by M only using the important features, and FS- C++ program files from 204 authors for 8 programming A= |{ui∈U:M(ui)=As(E(ui))}| is the fraction of test examples challenges and has been widely used in code authorship |U| that have the same predictions by M using all features and attribution task [30], [35]. This dataset is different from by A only using the important features [32]. Note that a the one used in [7], which is not available to us. s larger (FS-M, FS-A) indicates a higher fidelity, meaning that • TBCNN [2]. The method represents source code as an the important features are indeed important in terms of their AbstractSyntaxTree(AST),encodestheresultingASTas contribution to prediction. avector,usesatree-basedconvolutionallayertolearnthe For quantifying robustness against perturbations, we adopt featuresintheAST,andusesafully-connectedlayerand themetricproposedin[33],whichisbasedontheaverageJac- softmax layer for making predictions. In our experiment, card similarity between (i) the important features of an origi- we use the dataset of pedagogical programming Open nal example and (ii) the important features of the perturbed Judge system, involving 52,000 C programs for 104 example[31].Thesimilarityisdefinedoverinterval[0,1]such programming problems. This dataset is the same as the that a higher similarity indicates a more robust interpreter. one used in [2] because it is publicly available. For quantifying effectiveness in coping with out-of- We implement Robin in Python using Tensorflow [36] to distribution examples, we adopt the metric defined in [21]. retrofit the interpretability of DL-CAIS and TBCNN. We run Specifically, we take the number of features n over 8, and experiments on a computer with a RTX A6000 GPU and an incrementally and equally sample q features among all the Intel Xeon Gold 6226R CPU operating at 2.90 GHz. features, starting at q = n 8, i.e. q ∈ Q = {n 8,2 8n··· ,7 8n}. Interpreters for Comparison. 
We compare Robin with three For a given q, we use the same training set to learn the existing interpreters: LIME [13], LEMNA [25], and the one same kind of classifier M q by removing the q least im- proposed in [16], which would represent the state-of-the-art portant features (with respect to the interpreter), namely in interpretability of code classifier in feature-based post-hoc ∪ ui∈UE(cid:101)(u i), where E(cid:101)(u i) is the code example u i with localinterpretation.Morespecifically,LIME[13]makessmall q least important features (with respect to the interpreter) local perturbations to an example and obtains an interpretable removed, and the difference of accuracy between classi- linear regression model based on (i) the distance between the fier M and retrained classifier M q is defined as AD q = perturbedexampleandtheoriginalexampleand(ii)thechange |{ui∈U:M(ui)=M(E(cid:101)(ui))}|−|{ui∈U:M(ui)=Mq(E(cid:101)(ui))}|. The de- to the prediction. As such, LIME can be applied to explain |U| greetowhichtheinterpreterisimpactedbyout-of-distribution anyclassifier.LEMNA[25]approximateslocalnonlineardeci- inputs is the average AD for each q ∈Q. A smaller average sionboundariesforcomplexclassifiers,especiallyRNN-based q difference of accuracy indicates a reduced impact of out-of- ones with sequential properties, to provide interpretations in distribution inputs on the interpreter. securityapplications.Meanwhile,themethodin[16]interprets Correspondingtotheprecedingmetrics,ourexperimentsare vulnerabilitydetectorpredictionsbyperturbingfeaturevalues, driven by three Research Questions (RQs): identifyingimportantfeaturesbasedontheirimpactonpredic- • RQ1: What is Robin’s fidelity? (Section IV-C) tions, training a decision-tree with the important features, and • RQ2: What is Robin’s robustness against code perturba- extracting rules for interpretation. Additionally, we establish a tions? (Section IV-D) random feature selection method as a baseline. • RQ3: What is Robin’s effectiveness in coping with out- C. What Is Robin’s Fidelity? (RQ1) of-distribution examples? (Section IV-E) To determine the effectiveness of Robin on fidelity, we first B. Experimental Setup train two code classifiers DL-CAIS [7] and TBCNN [2] to be explained according to the settings of the literature, acheiving Implementation. We choose two deep learning-based code 88.24% accuracy for code authorship attribution and 96.72% |
classifiers: DL-CAIS [7] for code authorship attribution and accuracy for code functionality classification. Then we apply TBCNN [2] for code functionality classification. We choose Robin and the interpreters for comparison to DL-CAIS and these two classifiers because they offer different code clas- TBCNN models. For Robin, we set the candidate number of sification tasks, use different kinds of code representations selected coding style attributes θ to 4 and the number of i and different neural networks, are representative of the state- important features selected by the interpreter k to 10. We of-the-art in code classification, and are open-sourced; these split the dataset randomly by 3:1:1 for training, validation, characteristicsarenecessarytotestRobin’swideapplicability. and testing for TBCNN and use 8-fold cross-validation for • DL-CAIS [7]. This classifier leverages a Term DL-CAIS when training the interpreter. Frequency-Inverse Document Frequency based approach Table I shows the fidelity evaluation results on DL-CAIS to extract lexical features from source code and a and TBCNN for different interpreters. We observe that LIME Recurrent Neural Network (RNN) is employed to learn and LEMNA achieve an average FS-M of 0.49% and an the code representation, which is then used as input to average FS-A of 2.70% for DL-CAIS, and an average FS- a random forest classifier to achieve code authorship M of 6.73% and an average FS-A of 9.47% for TBCNN,TABLEI TABLEIII FIDELITYEVALUATIONRESULTSFORDIFFERENTINTERPRETERS ABLATIONANALYSISRESULTSOFFIDELITYEVALUATION(UNIT:%) DL-CAIS TBCNN Method FS-M(%) FS-A(%) FS-M(%) FS-A(%) Method DL-CAIS TBCNN Baseline 1.96 2.45 10.29 9.23 FS-M FS-A FS-M FS-A LIME[13] 0.49 3.43 7.98 10.67 Robin 13.73 92.65 20.67 83.65 LEMNA[25] 0.49 1.96 5.48 8.27 Robinw/oFactor1 12.25 90.69 20.19 81.44 Zouetal.[16] 33.33 69.60 18.75 31.63 Robinw/oFactor2 11.76 90.20 19.90 82.12 Robin 13.73 92.65 20.67 83.65 Robinw/oFactor1&2 12.25 87.75 18.94 80.77 TABLEII TABLEIV AVERAGEINTERPRETATIONTIMEOFEACHCODEEXAMPLEFOR FIDELITYEVALUATIONRESULTSFORDIFFERENTINTERPRETERSON DIFFERENTINTERPRETERS DL-CAISWITHDIFFERENTNEURALNETWORKSTRUCTURES(UNIT:%) Method DL-CAIS(ms) TBCNN(ms) DL-CAIS DL-CAIS-CNN DL-CAIS-MLP Method Baseline 1.00 1.43 FS-M FS-A FS-M FS-A FS-M FS-A LIME[13] 61957.71 111484.20 Baseline 1.96 2.45 2.94 3.92 4.41 3.43 LEMNA[25] 17448.35 43722.97 LIME[13] 0.49 3.43 4.90 6.37 3.43 6.86 Zouetal.[16] 166298.43 243142.95 LEMNA[25] 0.49 1.96 1.47 1.47 0.98 1.47 Robin 1.71 408.04 Zouetal.[16] 33.33 69.60 40.50 54.90 20.09 17.65 Robin 13.73 92.65 40.69 99.51 63.24 97.06 performing even worse than baseline. This can be explained 1.53-2.88% for TBCNN. Robin without Factor1 and Factor2 by the fact that LIME and LEMNA do not perform well in achieves the worst results. This indicates the significance of multi-class code classification tasks due to the more complex Factor1 and Factor2 for the fidelity of Robin. decision boundaries of the classifiers. We also observe that Robin outperforms other interpreters in terms of FS-M and EffectivenessofFidelityWhenAppliedtoDifferentNeural FS-A metrics significantly except Zou et al.’s method [16] in Network Structures. To demonstrate the applicability of terms of FS-M on DL-CAIS. Robin achieves 23.05% higher Robintovariousneuralnetworkstructures,wetakeDL-CAIS FS-A at the cost of 19.60% lower FS-M. 
However, Zou et for instance to replace the Recurrent Neural Network (RNN) al.’s method [16] is much less robust to perturbed examples layers of DL-CAIS with the Convolutional Neural Network than Robin which we will discuss in Section IV-D. Compared (CNN) layers (denoted as “DL-CAIS-CNN”) and replace the with other interpreters, Robin achieves 6.11% higher FS-M RNN layers of DL-CAIS with the Multi-Layer Perception and67.22%higherFS-Aonaverage,whichindicatesthehigh (MLP)layers(denotedas“DL-CAIS-MLP”),respectively.We fidelity of Robin. first train two code authorship attribution models DL-CAIS- Forthetimecostofinterpreters,TableIIshowstheaverage CNN and DL-CAIS-MLP to be explained according to the interpretation time (in milliseconds) for each code example. settingsofDL-CAIS[7].WeobtainaDL-CAIS-CNNwithan We observe that Robin significantly outperforms the other accuracy of 91.18% and a DL-CAIS-MLP with an accuracy three interpreters in terms of time cost. Note that while of 90.69% for code authorship attribution. Then we apply baselineislesstime-consuming,ithasmuchlowerfidelityand RobinandotherinterpretersforcomparisontoDL-CAIS-CNN robustness than Robin (see Section IV-D). Other interpreters and DL-CAIS-MLP respectively. Table IV shows the fidelity aresignificantlymoretimecostlythanRobinbecausetheyare evaluation results for different interpreters on DL-CAIS with optimizedindependentlyonasinglecodeexampleandrequire different neural networks. For DL-CAIS-CNN and DL-CAIS- a new perturbation and analysis each time a code example is MLP, Robin achieves a 40.07% higher FS-M and an 83.50% |
interpreted, while Robin directly constructs an interpreter that higherFS-Aonaveragethantheotherthreeinterpreters,which applies to all code examples and automatically identifies the shows the effectiveness of Robin applied to different neural important features by simply feeding code examples into the network structures. interpreter model. Robin achieves a 99.75% reduction in time UsefulnessofRobininUnderstandingReasonsforClassifi- cost than the other three interpreters on average. cation.ToillustratetheusefulnessofRobininthisperspective, Ablation Analysis. Robin has two modules to improve the we consider a scenario of code functionality classification via interpreter, i.e., adding L to the loss of the interpreter TBCNN [2]. The code example in Fig. 5 is predicted by diff (denoted as “Factor1”), and data augmentation using mixup the classifier as the functionality class “finding the number (denoted as “Factor2”). To show the contribution of each of factors”. The interpreter generated by Robin extracts five moduleinRobintotheeffectivenessoffidelity,weconductthe featuresofthecodeexample,whicharedeemedmostrelevant ablationstudy.WeexcludeFactor1,Factor2,andbothFactor1 with respect to the prediction result and are highlighted via and Factor2 to generate three variants of Robin, respectively, red boxes in Fig. 5. These five features are related to the andcompareRobinwiththethreevariantsintermsoffidelity. remainder, division, and counting operators. By analyzing Table III summarizes the fidelity evaluation results of Robin these five features, it becomes clear that the code example and its variants on DL-CAIS and TBCNN. We observe that is predicted as “finding the number of factors” because the Robin without Factor1, Factor2, or both Factor1 and Factor2 example looks for, and counts, the number of integers that can reduce FS-M of 1.48-1.97% and FS-A of 1.96-4.90% can divide the input integer. for DL-CAIS, and reduce FS-M of 0.48-1.73% and FS-A of Insight 1: Robin achieves a 6.11% higher FS-M and aint change (int a, int p) { Code int change (int a, int p) { Code int i, count = 0; example int i, count = 0; example for (i= p; i< a; i++) { for (i= p; i< a; i++) { if (a % i== 0 && a / i>= i) { if (a % i== 0 && a / i>= i) { count++; count++; int k, t; int k, t; k = (int) sqrt(a / i); k = (int) sqrt(a / i); for (t = 2; t <= k; t++) { Classifier for (t = 2; t <= k; t++) { if ((a / i) % t == 0) { if ((a / i) % t == 0) { Interpretation of the count += change (a / i, i); count += change (a / i, i); break; Code functionality break; classification: } classification } The classifier predicts the function of the example as } } "finding the number of } } } Prediction class: } factors" because the return count; Finding the number return count; example looks for and } of factors } counts the number of integers that divide the int main() { int main() { input integer exactly. int n, i, a; int n, i, a; cin>> n; cin>> n; for (i= 1; i<= n; i++) { for (i= 1; i<= n; i++) { int total = 0; Interpreter int total = 0; cin>> a; cin>> a; total += change (a, 2); total += change (a, 2); cout<< total + 1 << endl; cout<< total + 1 << endl; } } return 0; return 0; } } Fig.5. Theinterpretationofaspecificinstanceofcodeclassificationinthecontextofcodefunctionalityclassification,whereredboxeshighlightthe5most importantfeatures. 
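For concreteness, the fidelity pair (FS-M, FS-A) defined in Section IV-A can be computed as in the following sketch. The three callables are stand-ins for the trained classifier M, the approximator A_s, and the interpreter E, whose interfaces are assumed here for illustration rather than taken from the authors' implementation.

def fidelity_scores(test_examples, classify_M, classify_As, interpret_E):
    """FS-M: fraction of test examples where M's prediction on the full example equals
    M's prediction on only the important features E(u).
    FS-A: fraction where M's prediction on the full example equals A_s's prediction
    on only the important features."""
    same_m = same_a = 0
    for u in test_examples:
        full_pred = classify_M(u)
        important = interpret_E(u)          # keeps only the k important features
        same_m += int(full_pred == classify_M(important))
        same_a += int(full_pred == classify_As(important))
    n = len(test_examples)
    return same_m / n, same_a / n

# Toy usage with stand-in models that ignore their input:
print(fidelity_scores(["ex1", "ex2"], lambda u: 0, lambda u: 0, lambda u: u))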
TABLEV TABLEVI ROBUSTNESSEVALUATIONRESULTSFORDIFFERENTINTERPRETERS ABLATIONANALYSISRESULTSOFROBUSTNESSEVALUATION Method DL-CAIS TBCNN Method DL-CAIS TBCNN Baseline 0.0121 0.0348 Robin 0.9275 0.5269 LIME[13] 0.0592 0.0962 Robinw/oFactor1 0.9181 0.5194 LEMNA[25] 0.0157 0.0475 Robinw/oFactor2 0.9174 0.5025 Zouetal.[16] 0.3681 0.3852 Robinw/oFactor1&2 0.9073 0.4931 Robin 0.9275 0.5269 67.22% higher FS-A on average than the three interpreters CAIS and TBCNN. This indicates that Robin is insensitive we considered. to semantics-preserving code transformations and has higher robustness against perturbations. D. What Is Robin’s Robustness? (RQ2) Ablation Analysis. To show the contribution of Factor1 and To evaluate the robustness of Robin against perturbations, Factor2 in Robin to the robustness, we conduct the ablation we generate perturbed examples by using the semantics- study. We exclude Factor1, Factor2, and both Factor1 and preservingcodetransformationtocodeexamplesinthetestset Factor2 to generate three variants of Robin, respectively, and andfilterouttheperturbedexamplesthatchangethepredicted compare Robin with the three variants in terms of robustness labelsoftheclassifier.Weusetheseperturbedexamplestotest for the number of important features k = 10. Table VI the robustness of interpreters. summarizes the robustness evaluation results of Robin and Table V summarizes the robustness evaluation results for its three variants on DL-CAIS and TBCNN. We observe that differentinterpreters.WeobservethattherobustnessofLIME Robin achieves the highest robustness, and removing Factor1 and LEMNA on the code classifier is very poor and only and/orFactor2candecreaseitsrobustness,whichindicatesthe slightly higher than the baseline. This is caused by the fol- significance of Factor1 and Factor2 to Robin’s robustness. lowing:LIMEandLEMNAsufferfromuncertainty,thusthere To show the impact of the number of important features k may be differences between the important features obtained on the robustness, we take DL-CAIS for example to compare |
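The robustness metric used in this subsection, like the discrepancy loss L_diff of Eq. (3), reduces to Jaccard similarity over sets of important features. The following sketch computes the average similarity between the important features of original and perturbed examples; L_diff is essentially one minus this quantity averaged over all original/perturbed pairs. The feature sets in the toy usage are invented for illustration.

def jaccard_similarity(a, b):
    """|A intersection B| / |A union B| over two sets of important features."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def interpreter_robustness(pairs):
    """Robustness sketch (Section IV-D): average Jaccard similarity between the
    important features of each original example and those of its perturbed example.
    `pairs` is an iterable of (features_original, features_perturbed) tuples."""
    sims = [jaccard_similarity(fo, fp) for fo, fp in pairs]
    return sum(sims) / len(sims)

# Toy usage: two original/perturbed feature-set pairs (k = 5 features each).
pairs = [({"count", "%", "/", "for", "if"}, {"count", "%", "/", "while", "if"}),
         ({"strlen", "scanf", "==", "for", "break"}, {"strlen", "scanf", "==", "for", "if"})]
print(interpreter_robustness(pairs))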
whenthesamecodeexampleisinterpretedmultipletimes.We Robin and its three variants when applied to DL-CAIS in alsoobservethattherobustnessofZouetal.’smethod[16]is termsoftherobustnessofinterpretersbasedonk (e.g.,10,20, higher than that of LIME and LEMNA, but still much lower 30, 40, and 50) important features, respectively. As shown in thanthatofRobin.TheaverageJaccardsimilaritybetweenthe TableVII,therobustnessdecreasesask increases.Thiscanbe importantfeaturesoftheoriginalexamplesidentifiedbyRobin explained by the following: As k increases, the less important andtheimportantfeaturesoftheadversarialexamplesis1.94x features are added to the selected important features; these higherthanthestate-of-the-artmethod[16]and15.87xhigher less important features are difficult to be recognized by the on average than the three interpreters we evaluated for DL- interpreter due to their less prominent contribution to the pre-TABLEVII TABLEIX ABLATIONANALYSISRESULTSOFROBUSTNESSEVALUATIONON THEDIFFERENCEOFACCURACYADqBETWEENCLASSIFIERAND DL-CAISINDIFFERENTkVALUE RETRAINEDCLASSIFIERWITHqNON-IMPORTANTFEATURESREMOVED Method k=10 k=20 k=30 k=40 k=50 DL-CAIS Robin 0.9275 0.9129 0.8962 0.8939 0.8835 Method q=100 q=200 q=300 q=400 q=500 q=600 q=700 Average Baseline 0.1029 0.1029 0.1274 0.0979 0.098 0.0735 0.0931 0.0994 Robinw/oFactor1 0.9181 0.8981 0.8932 0.8819 0.8808 LEMNA 0.0834 0.0784 0.1030 0.0637 0.3579 0.2402 0.0245 0.1359 Robinw/oFactor2 0.9174 0.8969 0.8793 0.8799 0.8784 Robin 0.0736 0.0834 0.0785 0.0883 0.1030 0.1128 0.1814 0.1030 Robinw/oFactor1&2 0.9073 0.8943 0.8744 0.8731 0.8640 TBCNN Method q=25 q=50 q=75 q=100 q=125 q=150 q=175 Average TABLEVIII Baseline 0.0074 0.0042 0.0204 0.0138 0.0043 0.0355 0.0358 0.0173 LEMNA 0.0105 0.0159 0.0241 0.0227 0.1804 0.0569 0.0445 0.0507 ROBUSTNESSEVALUATIONRESULTSOFDIFFERENTINTERPRETERSON Robin 0.0299 0.0037 0.0184 0.0156 0.0200 0.0029 0.0145 0.0150 DL-CAISWITHDIFFERENTNEURALNETWORKSTRUCTURES Method DL-CAIS DL-CAIS-CNN DL-CAIS-MLP TABLEX Baseline 0.0121 0.0121 0.0121 THEDIFFERENCEOFACCURACYADqBETWEENCLASSIFIERAND LIME[13] 0.0592 0.2651 0.0592 RETRAINEDCLASSIFIERWITHqNON-IMPORTANTFEATURESREMOVED LEMNA[25] 0.0157 0.0167 0.0157 ONDL-CAISWITHDIFFERENTNEURALNETWORKSTRUCTURES Zouetal.[16] 0.3681 0.3812 0.3059 DL-CAIS Robin 0.9275 0.4922 0.3298 Method q=100 q=200 q=300 q=400 q=500 q=600 q=700 Average Baseline 0.1029 0.1029 0.1274 0.0979 0.098 0.0735 0.0931 0.0994 LEMNA 0.0834 0.0784 0.1030 0.0637 0.3579 0.2402 0.0245 0.1359 diction, thus perform worse robustness against perturbations. Robin 0.0736 0.0834 0.0785 0.0883 0.1030 0.1128 0.1814 0.1030 DL-CAIS-CNN Wealsoobservethat(i)Robinachievesthebestrobustnesson Method q=100 q=200 q=300 q=400 q=500 q=600 q=700 Average DL-CAISinallk values,and(ii)removingFactor1orFactor2 Baseline 0.0147 0.0049 0.0147 0.0049 0.0049 0.0196 0.0818 0.0207 LEMNA 0.0490 0.0396 0.0245 0.0195 0.2745 0.1618 0.0196 0.0840 or both of them from Robin can decrease the robustness of Robin 0.0196 0.0049 0.0095 0.0196 0.0294 0.0294 0.0735 0.0266 DL-CAIS-MLP Robin,whichindicatesthesignificanceofFactor1andFactor2 Method q=100 q=200 q=300 q=400 q=500 q=600 q=700 Average Baseline 0.0197 0.0196 0.0588 0.0147 0.0049 0.0687 0.0765 0.0376 for the robustness of Robin. LEMNA 0.0049 0.0047 0.0196 0.0147 0.2598 0.2010 0.0490 0.0791 Robin 0.0098 0.0049 0.0049 0.0196 0.0196 0.0490 0.0490 0.0224 Robustness Evaluation When Applied to Different Neural Network Structures. 
To show the robustness of Robin when baselinemethodoutperformsRobinonDL-CAIS,ithasmuch applied to different neural network structures, we adopt DL- lower fidelity and robustness than Robin which we have CAIS, DL-CAIS-CNN, and DL-CAIS-MLP we have trained discussed in Section IV-C and Section IV-D. In contrast, in Section IV-C for interpretation. For DL-CAIS-CNN and the average difference of accuracy achieved by LEMNA is DL-CAIS-MLP, we generate perturbed examples by using the notably larger than those of Robin and the baseline method, semantics-preserving code transformations to code examples becauseLEMNAreliesonthechangesofclassifier’saccuracy inthetestsetandfilterouttheperturbedexamplesthatchange to calculate the importance of features. Robin achieves a the prediction labels of the classifier. Table VIII shows the 24.21% smaller average difference of accuracy for DL-CAIS robustness evaluation results for different interpreters on DL- and a 70.41% smaller average difference of accuracy for CAIS with different neural networks. For DL-CAIS-CNN and TBCNNthanLEMNA,indicatingthatRobinachieves47.31% DL-CAIS-MLP,Robinachievesa10.05xhigherrobustnesson less affected by the out-of-distribution examples compared to |
average, compared with the other three interpreters. Though LEMNA on average. Robin is minimally affected by the out- Robin achieves different robustness for different neural net- of-distribution examples, which attributes to introducing the work structures, Robin achieves the highest robustness among prediction accuracy of the retrained classifier to evaluate the all interpreters we evaluated. importance of features. Insight2:Robinachievesa1.94xhigherrobustnessthanthe state-of-the-art method [16] and a 15.87x higher robustness EffectivenessinCopingwithOut-of-DistributionExamples on average than the three interpreters we evaluated. When Applied to Different Neural Network Structures. E. What Is Robin’s Effectiveness in Coping with Out-of- To show the effectiveness in coping with out-of-distribution Distribution Examples? (RQ3) examples when applied to different neural network structures, we adopt DL-CAIS, DL-CAIS-CNN, and DL-CAIS-MLP we To demonstrate the effectiveness of Robin in copying with have trained in Section IV-C for interpretation. Table X out-of-distributionexamples,weconductexperimentswiththe describes the difference of accuracy AD between the given number of removed non-important features q ∈Q={100, 200, q classifier and the retrained classifier after removing q non- 300, 400, 500, 600, 700} for DL-CAIS and q ∈ Q={25, 50, important features while using different neural network struc- 75, 100, 125, 150, 175} for TBCNN according to the number tures.ForDL-CAIS-CNNandDL-CAIS-MLP,Robinachieves of all features. Table IX shows the difference of accuracy a 70.00% less affected by out-of-distribution examples when AD between the classifier and the retrained classifier with q q compared with LEMNA on average, where the average is non-important features removed. We observe that the average taken over DL-CAIS-CNN and DL-CAIS-MLP, which shows difference of accuracy of Robin and the baseline method is the effectiveness of Robin in coping with out-of-distribution very small, indicating that they are less affected by out-of- examples when applied to different neural network structures. distribution examples. This can be explained by the fact that bothofthesemethodsdonotemploythechangeinclassifier’s Insight 3: Robin achieves a 47.31% less affected by out-of- accuracy to assess the importance of features. Although the distribution examples when compared with LEMNA.V. LIMITATIONS robust(SectionIV-D);inparticular,existingmethodsforlocal interpretation suffers from the problem of out-of-distribution The present study has limitations, which represent exciting examples[21],[22].Robinaddressesboththerobustnessissue open problems for future studies. First, our study does not and the out-of-distribution issue in the post-hoc approach to evaluate the effectiveness of Robin on graph-based code local interpretation, by introducing approximators to mitigate classifiers and pre-training models like CodeT5 [37] and out-of-distributionexamplesandusingadversarialtrainingand CodeBERT [38]. The unique characteristics of these models data augmentation to improve robustness. pose challenges that require further investigation, particularly in the context of applying Robin to classifiers with more Prior Studies on Improving Robustness of Interpretation complex model structures. Second, Robin can identify the Methods. These studies have been conducted in other appli- most important features but cannot give further explanations cation domains than code classification. 
In the image domain, why a particular prediction is made. To our knowledge, this one idea is to aggregate multiple interpretation [24], [48], and kind of desired further explanation is beyond the reach of another idea is to smooth the model’s decision surface [47], the current technology in deep learning interpretability. Third, [49]. In the text domain, one idea is to eliminate the uncer- Robincanidentifythemostimportantfeaturesthatleadtothe tainties that are present in the existing interpretation methods particularpredictionofagivenexample,butcannottellwhich [50], [51], and another idea is to introduce continuous small training examples in the training set that leads to the code perturbations to interpretation and use adversarial training for classifiercontributetotheparticularprediction.Achievingthis robustness enhancement [27], [28]. To our knowledge, we are type of training examples traceability is important because it thefirsttoinvestigatehowtoachieverobustinterpretabilityin may help achieve better interpretability. the code classification domain, while noting that none of the aforementionedmethodsthatareeffectiveintheotherdomains VI. RELATEDWORK can be adapted to the code classification domain. This is because program code must follow strict lexical and syntactic Prior Studies on Deep Learning-Based Code Classifiers. requirements, meaning that perturbed representations may not We divide these models into three categories according to be mapped back to real-world code examples, which is a the code representation they use: token-based [5], [7], [39] general challenge when dealing with programs. This justifies vs. tree-based [4], [8], [40] vs. graph-based [10], [15], [41]. whyRobininitiatesthestudyofanewandimportantproblem. Token-basedmodelsrepresentapieceofcodeasasequenceof individualtokens,whileonlyperformingbasiclexicalanalysis. These models are mainly used for code authorship attribution VII. CONCLUSION and vulnerability detection. Tree-based models represent a We have presented Robin, a robust interpreter for deep piece of code as a syntax tree, while incorporating both learning-based code classifiers such as code authorship at- lexical and syntax analysis. These models are widely used tribution classification and code function classification. The |
VII. CONCLUSION

We have presented Robin, a robust interpreter for deep learning-based code classifiers, such as code authorship attribution classification and code function classification. The key idea behind Robin is to (i) use approximators to mitigate the out-of-distribution example problem, and (ii) use adversarial training and data augmentation to improve interpreter robustness, which is different from the widely-adopted idea of using adversarial training to achieve classifier's (rather than interpreter's) robustness. Experimental results show that Robin achieves a high fidelity and a high robustness, while mitigating the effect of out-of-distribution examples caused by perturbations. The limitations of Robin serve as interesting open problems for future research.

ACKNOWLEDGMENTS

We thank the anonymous reviewers for their comments, which guided us in improving the paper. The authors affiliated with Huazhong University of Science and Technology were supported by the National Natural Science Foundation of China under Grant No. 62272187. Shouhuai Xu was supported in part by the National Science Foundation under Grants #2122631, #2115134, and #1910488 as well as Colorado State Bill 18-086. Any opinions, findings, conclusions or recommendations expressed in this work are those of the authors and do not reflect the views of the funding agencies in any sense.

REFERENCES

[18] S.Suneja,Y.Zheng,Y.Zhuang,J.A.Laredo,andA.Morari,"Probing model signal-awareness via prediction-preserving input minimization," [1] J.Zhang,X.Wang,H.Zhang,H.Sun,K.Wang,andX.Liu,"Anovel in Proceedings of the 29th ACM Joint Meeting on European Software neuralsourcecoderepresentationbasedonabstractsyntaxtree,"inPro- EngineeringConferenceandSymposiumontheFoundationsofSoftware ceedingsofthe41stInternationalConferenceonSoftwareEngineering Engineering(ESEC/FSE),Athens,Greece,2021,pp.945–955. (ICSE),QC,Canada. IEEE,2019,pp.783–794. [19] M.R.I.Rabin,V.J.Hellendoorn,andM.A.Alipour,"Understanding neuralcodeintelligencethroughprogramsimplification,"inProceedings [2] L. Mou, G. Li, L. Zhang, T. Wang, and Z.
Jin, “Convolutional neural of the 29th ACM Joint Meeting on European Software Engineering networks over tree structures for programming language processing,” ConferenceandSymposiumontheFoundationsofSoftwareEngineering in Proceedings of the 30th AAAI Conference on Artificial Intelligence (ESEC/FSE),Athens,Greece,2021,pp.441–452. (AAAI),Phoenix,Arizona,USA. AAAIPress,2016,pp.1287–1293. [20] A.ZellerandR.Hildebrandt,“Simplifyingandisolatingfailure-inducing [3] A. Caliskan-Islam, R. Harang, A. Liu, A. Narayanan, C. Voss, F. Ya- input,”IEEETransactionsonSoftwareEngineering,vol.28,no.2,pp. maguchi, and R. Greenstadt, “De-anonymizing programmers via code 183–200,2002. stylometry,” in Proceedings of the 24th USENIX Security Symposium [21] S.Hooker,D.Erhan,P.-J.Kindermans,andB.Kim,“Abenchmarkfor (USENIXSecurity),Washington,D.C.,USA,2015,pp.255–270. interpretabilitymethodsindeepneuralnetworks,”inProceedingsofAn- [4] B.Alsulami,E.Dauber,R.Harang,S.Mancoridis,andR.Greenstadt, nualConferenceonNeuralInformationProcessingSystems(NeurIPS), “Sourcecodeauthorshipattributionusinglongshort-termmemorybased Vancouver,BC,Canada,2019,pp.9734–9745. networks,”inProceedingsofthe22ndEuropeanSymposiumonResearch [22] L. Brocki and N. C. Chung, “Evaluation of interpretability methods inComputerSecurity(ESORICS),Oslo,Norway,2017,pp.65–82. and perturbation artifacts in deep neural networks,” arXiv preprint [5] X.Yang,G.Xu,Q.Li,Y.Guo,andM.Zhang,“Authorshipattributionof arXiv:2203.02928,2022. sourcecodebyusingbackpropagationneuralnetworkbasedonparticle [23] M. Bajaj, L. Chu, Z. Y. Xue, J. Pei, L. Wang, P. C.-H. Lam, and swarmoptimization,”PloSone,vol.12,no.11,p.e0187204,2017. Y. Zhang, “Robust counterfactual explanations on graph neural net- |
[6] E.Bogomolov,V.Kovalenko,Y.Rebryk,A.Bacchelli,andT.Bryksin, works,” in Proceedings of Annual Conference on Neural Information “Authorship attribution of source code: A language-agnostic approach ProcessingSystems(NeurIPS),VirtualEvent,2021,pp.5644–5655. and applicability in software engineering,” in Proceedings of the 29th [24] X.Zhang,N.Wang,H.Shen,S.Ji,X.Luo,andT.Wang,“Interpretable ACMJointMeetingonEuropeanSoftwareEngineeringConferenceand deeplearningunderfire,”inProceedingsofthe29thUSENIXSecurity Symposium on the Foundations of Software Engineering (ESEC/FSE), Symposium(USENIXSecurity),VirtualEvent,2020,pp.1659–1676. Athens,Greece,2021,pp.932–944. [25] W. Guo, D. Mu, J. Xu, P. Su, G. Wang, and X. Xing, “LEMNA: [7] M.Abuhamad,T.AbuHmed,A.Mohaisen,andD.Nyang,“Large-scale Explainingdeeplearningbasedsecurityapplications,”inProceedingsof and language-oblivious code authorship identification,” in Proceedings the2018ACMSIGSACConferenceonComputerandCommunications of the 2018 ACM SIGSAC Conference on Computer and Communica- Security(CCS),Toronto,ON,Canada,2018,pp.364–379. tionsSecurity(CCS),Toronto,ON,Canada,2018,pp.101–114. [26] J. Chen, L. Song, M. Wainwright, and M. Jordan, “Learning to ex- [8] G.Lin,J.Zhang,W.Luo,L.Pan,andY.Xiang,“Poster:Vulnerability plain:Aninformation-theoreticperspectiveonmodelinterpretation,”in discoverywithfunctionrepresentationlearningfromunlabeledprojects,” Proceedingsofthe35thInternationalConferenceonMachineLearning inProceedingsofthe2017ACMSIGSACConferenceonComputerand (ICML),Stockholmsma¨ssan,Stockholm,Sweden,2018,pp.883–892. Communications Security (CCS), Dallas, TX, USA, 2017, pp. 2539– [27] H.Lakkaraju,N.Arsov,andO.Bastani,“Robustandstableblackbox 2541. explanations,” in Proceedings of the 37th International Conference on [9] Z.Li,D.Zou,S.Xu,X.Ou,H.Jin,S.Wang,Z.Deng,andY.Zhong, MachineLearning(ICML),VirtualEvent,2020,pp.5628–5638. “VulDeePecker: A deep learning-based system for vulnerability detec- [28] E. La Malfa, A. Zbrzezny, R. Michelmore, N. Paoletti, and tion,”inProceedingsofthe25thAnnualNetworkandDistributedSystem M.Kwiatkowska,“OnguaranteedoptimalrobustexplanationsforNLP SecuritySymposium(NDSS),SanDiego,California,USA,2018,pp.1– models,”inProceedingsofthe30thInternationalJointConferenceon 15. ArtificialIntelligence(IJCAI),VirtualEvent,2021,pp.2658–2665. [10] Z. Li, D. Zou, S. Xu, H. Jin, Y. Zhu, and Z. Chen, “SySeVR: A [29] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, “mixup: framework for using deep learning to detect software vulnerabilities,” Beyondempiricalriskminimization,”inProceedingsofthe6thInterna- IEEETransactionsonDependableandSecureComputing,vol.19,no.4, tionalConferenceonLearningRepresentations(ICLR),Vancouver,BC, pp.2244–2258,2022. Canada,2018. [11] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. [30] Z.Li,G.Chen,C.Chen,Y.Zou,andS.Xu,“RopGen:Towardsrobust Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in code authorship attribution via automatic coding style transformation,” Proceedings of Annual Conference on Neural Information Processing inProceedingsofthe44thInternationalConferenceonSoftwareEngi- Systems(NeurIPS),LongBeach,CA,USA,2017,pp.5998–6008. neering(ICSE),Pittsburgh,PA,USA,2022,pp.1906–1918. [12] E.Choi,M.T.Bahadori,J.Sun,J.Kulas,A.Schuetz,andW.Stewart, [31] M.LevandowskyandD.Winter,“Distancebetweensets,”Nature,vol. “Retain:Aninterpretablepredictivemodelforhealthcareusingreverse 234,no.5323,pp.34–35,1971. time attention mechanism,” in Proceedings of Annual Conference on [32] J. 
Liang, B. Bai, Y. Cao, K. Bai, and F. Wang, “Adversarial infidelity Neural Information Processing Systems (NeurIPS), Barcelona, Spain, learning for model interpretation,” in Proceedings of the 26th ACM 2016,pp.3504–3512. SIGKDD International Conference on Knowledge Discovery & Data [13] M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why should I trust you?’ Mining(KDD),VirtualEvent,2020,pp.286–296. Explainingthepredictionsofanyclassifier,”inProceedingsofthe22nd [33] M. B. Zafar, M. Donini, D. Slack, C. Archambeau, S. Das, and ACMSIGKDDInternationalConferenceonKnowledgeDiscoveryand K. Kenthapadi, “On the lack of robust interpretability of neural text DataMining(KDD),SanFrancisco,CA,USA,2016,pp.1135–1144. classifiers,” in Proceedings of the Association for Computational Lin- [14] N. D. Bui, Y. Yu, and L. Jiang, “Autofocus: Interpreting attention- guisticsFindings(ACL/IJCNLP),VirtualEvent,2021,pp.3730–3740. basedneuralnetworksbycodeperturbation,”inProceedingsofthe34th [34] https://codingcompetitions.withgoogle.com/codejam,2022. IEEE/ACMInternationalConferenceonAutomatedSoftwareEngineer- [35] E.Quiring,A.Maier,andK.Rieck,“Misleadingauthorshipattribution |
ing(ASE),SanDiego,CA,USA,2019,pp.38–41. of source code using adversarial learning,” in Proceedings of the 28th [15] D.Zou,Y.Hu,W.Li,Y.Wu,H.Zhao,andH.Jin,“mVulPreter:Amulti- USENIX Security Symposium, Santa Clara, CA, USA. USENIX granularity vulnerability detection system with interpretations,” IEEE Association,2019,pp.479–496. TransactionsonDependableandSecureComputing,pp.1–12,2022. [36] M.Abadi,P.Barham,J.Chen,Z.Chen,A.Davis,J.Dean,M.Devin, [16] D. Zou, Y. Zhu, S. Xu, Z. Li, H. Jin, and H. Ye, “Interpreting S.Ghemawat,G.Irving,M.Isard,M.Kudlur,J.Levenberg,R.Monga, deeplearning-basedvulnerabilitydetectorpredictionsbasedonheuristic S. Moore, D. G. Murray, B. Steiner, P. A. Tucker, V. Vasudevan, searching,”ACMTransactionsonSoftwareEngineeringandMethodol- P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: A system ogy,vol.30,no.2,pp.1–31,2021. for large-scale machine learning,” in Proceedings of the 12th USENIX [17] J.Cito,I.Dillig,V.Murali,andS.Chandra,“Counterfactualexplanations SymposiumonOperatingSystemsDesignandImplementation(OSDI), formodelsofcode,”inProceedingsofthe44thIEEE/ACMInternational Savannah,GA,USA. USENIXAssociation,2016,pp.265–283. ConferenceonSoftwareEngineering:SoftwareEngineeringinPractice [37] Y. Wang, W. Wang, S. Joty, and S. C. Hoi, “CodeT5: Identifier-aware (ICSE-SEIP),Pittsburgh,PA,USA,2022,pp.125–134. unifiedpre-trainedencoder-decodermodelsforcodeunderstandingandgeneration,”inProceedingsofthe2021ConferenceonEmpiricalMeth- odsinNaturalLanguageProcessing(EMNLP),VirtualEvent,2021,pp. 8696–8708. [38] Z.Feng,D.Guo,D.Tang,N.Duan,X.Feng,M.Gong,L.Shou,B.Qin, T. Liu, D. Jiang, and M. Zhou, “CodeBERT: A pre-trained model for programmingandnaturallanguages,”inProceedingsofFindingsofthe 2020ConferenceonEmpiricalMethodsinNaturalLanguageProcessing (EMNLP),VirtualEvent,2020,pp.1536–1547. [39] R. Russell, L. Kim, L. Hamilton, T. Lazovich, J. Harer, O. Ozdemir, P. Ellingwood, and M. McConley, “Automated vulnerability detection in source code using deep representation learning,” in Proceedings of the 17th IEEE International Conference on Machine Learning and Applications(ICMLA),Orlando,FL,USA. IEEE,2018,pp.757–762. [40] U. Alon, M. Zilberstein, O. Levy, and E. Yahav, “code2vec: Learning distributedrepresentationsofcode,”Proc.ACMProgram.Lang.,vol.3, no.POPL,pp.1–29,2019. [41] M.Allamanis,M.Brockschmidt,andM.Khademi,“Learningtorepre- sentprogramswithgraphs,”inProceedingsofthe6thInternationalCon- ference on Learning Representations (ICLR), Vancouver, BC, Canada, 2018. [42] N. Puri, P. Gupta, P. Agarwal, S. Verma, and B. Krishnamurthy, “MAGIX: Model agnostic globally interpretable explanations,” arXiv preprintarXiv:1706.07160,2017. [43] J.Wang,L.Gou,W.Zhang,H.Yang,andH.-W.Shen,“DeepVID:Deep visual interpretation and diagnosis for image classifiers via knowledge distillation,”IEEETransactionsonVisualizationandComputerGraph- ics,vol.25,no.6,pp.2168–2180,2019. [44] K.Simonyan,A.Vedaldi,andA.Zisserman,“Deepinsideconvolutional networks: Visualising image classification models and saliency maps,” in Proceedings of the 2nd International Conference on Learning Rep- resentations(ICLR),Banff,AB,Canada,2014. [45] S. M. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” in Proceedings of Annual Conference on Neural InformationProcessingSystems(NeurIPS),LongBeach,CA,USA,2017, pp.4765–4774. [46] P. Schwab and W. 
Karlen, “CXPlain: Causal explanations for model interpretationunderuncertainty,”inProceedingsofAnnualConference on Neural Information Processing Systems (NeurIPS), Vancouver, BC, Canada,2019,pp.10220–10230. [47] D. Smilkov, N. Thorat, B. Kim, F. Vie´gas, and M. Wattenberg, “SmoothGrad: Removing noise by adding noise,” arXiv preprint arXiv:1706.03825,2017. [48] L. Rieger and L. K. Hansen, “A simple defense against adversarial attacksonheatmapexplanations,”inProceedingsofthe2020Workshop on Human Interpretability in Machine Learning (WHI), Virtual Event, 2020. [49] Z. Wang, H. Wang, S. Ramkumar, P. Mardziel, M. Fredrikson, and A. Datta, “Smoothed geometry for robust attribution,” in Proceed- ings of Annual Conference on Neural Information Processing Systems (NeurIPS),VirtualEvent,2020,pp.13623–13634. [50] X. Zhao, W. Huang, X. Huang, V. Robu, and D. Flynn, “BayLIME: Bayesian local interpretable model-agnostic explanations,” in Proceed- ings of the 37th Conference on Uncertainty in Artificial Intelligence (UAI),VirtualEvent,2021,pp.887–896. [51] Z.Zhou,G.Hooker,andF.Wang,“S-LIME:Stabilized-limeformodel explanation,”inProceedingsofthe27thACMSIGKDDConferenceon KnowledgeDiscovery&DataMining(KDD),VirtualEvent,2021,pp. 2429–2438. |
2309.14677 XGV-BERT: Leveraging Contextualized Language Model and Graph Neural Network for ffi E cient Software Vulnerability Detection VuLeAnhQuana,b,ChauThuanPhata,b,KietVanNguyena,b,PhanTheDuya,b,Van-HauPhama,b aInformationSecurityLaboratory,UniversityofInformationTechnology,HoChiMinhcity,Vietnam bVietnamNationalUniversityHoChiMinhCity,HochiminhCity,Vietnam Abstract Withtheadvancementofdeeplearning(DL)invariousfields,therearemanyattemptstorevealsoftwarevulnerabilitiesbydata- driven approach. Nonetheless, such existing works lack the effective representation that can retain the non-sequential semantic characteristics and contextual relationship of source code attributes. Hence, in this work, we propose XGV-BERT, a framework thatcombinesthepre-trainedCodeBERTmodelandGraphNeuralNetwork(GCN)todetectsoftwarevulnerabilities. Byjointly training the CodeBERT and GCN modules within XGV-BERT, the proposed model leverages the advantages of large-scale pre- training, harnessing vast raw data, and transfer learning by learning representations for training data through graph convolution. The research results demonstrate that the XGV-BERT method significantly improves vulnerability detection accuracy compared to two existing methods such as VulDeePecker and SySeVR. For the VulDeePecker dataset, XGV-BERT achieves an impressive F1-score of 97.5%, significantly outperforming VulDeePecker, which achieved an F1-score of 78.3%. Again, with the SySeVR dataset,XGV-BERTachievesanF1-scoreof95.5%,surpassingtheresultsofSySeVRwithanF1-scoreof83.5%. Keywords: DeepLearning,SoftwareSecurity,VulnerabilityDetection,GraphNeuralNetworks,NLP 1. Introduction ability detection [7], [8], [9], [10]. With the continuous inno- vationanddevelopmentofDLtechnology,significantadvance- Recently, as technology continues its rapid evolution, the mentshavebeenmadeinNaturalLanguageProcessing(NLP). software development landscape has witnessed an exponential ModelssuchasGPT[11]andBERT[12]havepropelledNLP surge. While these software innovations offer unprecedented technologyforward. Sourcecodeisessentiallyatextinaspe- convenience,theyalsobringforthaloomingspecter:aproblem cific format, making it logically feasible to utilize NLP tech- ofsoftwarevulnerabilities.Thesevulnerabilitiesareformidable niquesforcodeanalysis. Infact, modelslikeCodeBERT[13] adversariestotheseamlessfunctioningofsoftwaresystems[1]. havebeenproposedbyseveralresearchers,andsomecode-level Theglobaleconomictoll,bothdirectandindirect,inflictedby tasks have been addressed, yielding promising results. These these vulnerabilities has surpassed billions of dollars, making findingsdemonstratethepotentialofusingNLPtechnologyfor it an issue of paramount concern. It is an undeniable fact that automatedvulnerabilitydetectionresearch. thevastmajorityofsoftwareapplicationsharborvarioustypes However, there are various directions of research that em- of vulnerabilities. Notable among these are buffer overflow ployDLforsecurityvulnerabilitydetection. Accordingtothe vulnerabilities like CVE-2019-8917, library/API function call surveybyZengetal. [14],therearefourmainresearchdirec- vulnerabilities like CVE-2016-10033, array usage vulnerabil- tions. ThefirstinvolvesusingDLmodelstolearnthesemantic ities like CVE-2020-12345, and many more extensively cata- representationsofprograms, asproposedbyWangetal. [15]. 
logedwithintheCommonVulnerabilitiesandExposures(CVE) The second direction focuses on end-to-end solutions for de- database[2].Astimeelapses,thelongeravulnerabilitypersists tectingBufferOverflowvulnerabilities,asexploredbyChoiet unaddressed, the more it becomes an inviting target for mali- al. [16]. The third direction involves extracting vulnerability- cious actors. This, in turn, exposes companies and organiza- containingcodepatternstotrainmodels,asdemonstratedbyLi tionstotheominousspecterofsubstantialdamages[3]. Conse- etal. [17]. Finally,thefourthdirectionaddressesvulnerability quently,thequestfortheautomateddetectionofsoftwarevul- detectionforbinarycode,asstudiedbyLiuetal. [18]. nerabilitieswithinstringenttimeframesstandsatthevanguard Each of these research directions has its own advantages ofadvancedresearchendeavors[4],[5],[6]. andlimitations. Basedonthepreviousresearchoutcomes,ex- Ontheotherhand,DeepLearning(DL)technologycanpro- tracting vulnerability-containing patterns to create a training videthecapabilitytoachievemoreaccurateautomaticvulner- datasethasachievedpromisingresults,displayingrelativelygood effectiveness and potential for further development [19]. No- Emailaddresses:19520233@uit.edu.vn(VuLeAnhQuan), tableexamplesofthisapproachincludetheVulDeePeckerpa- 19520827@gm.uit.edu.vn(ChauThuanPhat),kietnv@uit.edu.vn(Kiet per by Li et al. [17] and SySeVR by Li et al. [20], as well VanNguyen),duypt@uit.edu.vn(PhanTheDuy),haupv@uit.edu.vn (Van-HauPham) as VulDeBERT by Kim et al. [21]. However, both SySeVR PreprintsubmittedtoElsevier September27,2023 3202 peS 62 ]RC.sc[ 1v77641.9032:viXraandVulDeBERTstillexhibitcertainshortcomings;namely,the tional Network (GCN) model for analyzing C and C++ |
processed data consists solely of isolated code statements ex- sourcecode. tractedfromthesourcecode, lackingcontextuallinkage. This • Our experimental results indicate that XGV-BERT out- deficiencyinherentlydiminishestheprecisionofthemodel. performsthestate-of-the-artmethod[17][20]ontheSARD Meanwhile,toretainthenon-sequentialsemanticattributes [30]andNVD[31]datasets. inherentinthesourcecodeoftheprogram,certaingraph-based methodologies have been introduced, as documented by [22], Theremainingsectionsofthisarticleareconstructedasfol- [23], [24], [25], [26], [27], [28], [29]. These studies advo- lows.Section3introducedsomerelatedworksindetectingvul- cate the transformation of contract source code into a graph nerabilitiesinsoftware. InSection2, wegivetheoverviewof representation,followedbytheutilizationofgraphneuralnet- background knowledge used in our work. Next, the proposed works for vulnerability detection. These existing works indi- framework and methodology are discussed in Section 4. Sec- catethatthereisthepotentialforutilizinggraphrepresentations tion 5 describes the experimental settings and result analysis for inspecting the relationship of source code components to ofvulnerabilitydetectiononvariousdatasets. Finally,wecon- reveal software vulnerability. Specifically, the slices extracted cludethepaperinSection6. fromthesourcecodearetransformedintographrepresentations andsubsequentlyincorporatedintothedeeplearningmodelfor 2. Background training. The conversion of slices into graphs aims to capture relationships between words, as well as between words and 2.1. AbstractSyntaxTree-AST slices,enhancingthemodel’scapacitytounderstandcodepat- 2.1.1. Definition ternsandrelationshipsamongvariouscomponents. In terms of software, an Abstract Syntax Tree (AST) [32] Additionally,intheevolvinglandscapeofsoftwaresecurity, embodies a tree-like depiction of the abstract syntax structure Natural Language Processing (NLP) has emerged as a potent inherentinafragmentoftext,commonlyreferredtoassource toolwiththepotentialtorevolutionizethefield. Itsapplication code,authoredinaformallanguage. Eachnodewithinthistree extendstobridgingthesubstantialsemanticgapbetweenNatu- servesasarepresentationofadiscerniblestructurefoundwithin ralLanguage(NL)andprogramminglanguages(PL).Nonethe- the text. More specifics, abstraction in the AST is manifested less,asignificantsemanticdisparityexistsbetweenNaturalLan- bynotrepresentingeverydetailpresentintheactualsyntaxbut guage(NL)andprogramminglanguages(PL).Tomitigatethis focusing on the structure and content-related aspects. For in- divergence and gain a deeper comprehension of the semantic stance,unnecessarysinglequotesinthesyntacticstructureare contentwithinNL,scholarsintroducedCodeBERT[13],alan- notrepresentedasseparatenodesinthetree.Similarly,asyntax guagemodelthatemploysthemaskedlanguagemodelandto- structurelikethe”if”conditionalstatementcanberepresented ken replacement detection approach to pre-train both NL and byasinglenodewiththreebranches. TheASTisavitaltoolin PL.CodeBERThasdemonstratedremarkablegeneralizationabil- parsingprogramminglanguages. Itprovidesanabstractstruc- ities, exhibiting a strong competitive edge in various down- tural representation of the source code, enabling programs to stream tasks related to multiple programming languages. 
The understandandprocesscodemoreeasily.Theabstractioninthe embeddingvectorsderivedfromCodeBERTencapsulateawe- AST allows us to concentrate on essential syntax components alth of information, thereby enhancing the DL model’s ability and overlook irrelevant details. This simplifies the language toyieldoptimalresultspost-training. analysisandprocessing,whileprovidingaconvenientstructure Therefore, to cope with the above challenges, we propose forworkingwithandinteractingwiththesourcecode. theXGV-BERTmodelforbetterobtainingthecontextualrep- resentationofprograms. Specifically,weleverageapre-trained 2.1.2. Design model named CodeBERT for code embedding because it is a ThedesignoftheASTisoftencloselyintertwinedwiththe modelpre-trainedonmultipleprogramminglanguagesthatun- design of the compiler. The core requirements of the design derstand better the source code. Subsequently, the integration includethefollowing: oftheGNNmodelwithgraphsconstructedfromextracteddata helps to enhance the connections between words and slices in • Preservation of Variables: Variables must be retained, source code to disclose the vulnerability in the software pro- alongwiththeirdeclarationpositionsinthesourcecode. gram. Insummary,thecontributionsofourworkareillustrated asfollows: • RepresentationofExecutionOrder: Theorderofexecu- tionstatementsmustberepresentedandexplicitlydeter- • Weleveragethelanguagemodeltoconstructamethodof mined. representing vulnerable source code for security defect • ProperHandlingofBinaryOperators: Theleftandright detection,usingCodeBERTembeddingmodeltoreplace componentsofbinaryoperatorsmustbestoredandaccu- theWord2vecembeddingmethodusedinpreviousstud- ratelydetermined. ies[17][20]. • WeproposeavulnerabilitydetectionsystemcalledXGV- • Storage of Identifiers and Assigned Values: The identi- fiersandtheirassignedvaluesmustbestoredwithinthe BERT that utilizes CodeBERT with a Graph Convolu- assignmentstatements. 22.2. CodeBERT around a central data point using learnable filters and |
CodeBERT[13]isapre-trainedBERTmodelthatcombines weights. Spectral-based networks operate on a similar both natural language (NL) and programming language (PL) principle,aggregatingtheattributesofneighboringnodes encodings to create a comprehensive model suitable for fine- foracentralnode. However,spectral-basedmethodsof- tuning on source code tasks. The model is trained on a large tenhavehighercomputationalcomplexityandhavegrad- dataset sourced from code repositories and programming doc- uallybeenreplacedbyspatial-basedmethods. uments,leadingtoimprovedeffectivenessinsoftwareprogram • Spatial ConvolutionalNetwork: Thisapproach provides trainingandsourcecodeanalysis. a simpler and more efficient way to handle data. It em- During the pre-training stage, the input data is formed by bedsnodesbasedontheirneighboringnodes.Thisspatial- combiningtwosegmentswithspecialseparatortoken: [CLS], based method has become popular due to its simplicity w 1,w 2,..w n,[SEP],c 1,c 2,...,c m,[EOS],with[CLS]isclassifi- andeffectivenessinprocessingdata. cationtoken,[SEP]isseparatortokenand[EOS]is”endofthe sequence”token.Onesegmentrepresentsnaturallanguagetext, 2.4. GraphConvolutionalNetwork-GCN while the other represents code from a specific programming GCN (Graph Convolutional Network) [34] is a powerful language. The [CLS] token is a special token placed before neural network architecture designed for machine learning on the two segments. Following the standard text processing in graphs.Infact,itissopowerfulthatevenarandomlyinitialized Transformer,thenaturallanguagetextistreatedasasequence two-layerGCNcanproducemeaningfulfeaturerepresentations ofwordsanddividedintoWordPieces[33]. Acodesnippetis fornodesinthegraph. regardedasasequenceoftokens. TheoutputofCodeBERTin- Specifically, the GCN model takes the graph data G = (V, cludes(1)contextualizedvectorrepresentationsforeachtoken, E)asinput,where: encompassingbothnaturallanguageandcode,and(2)therep- resentationof[CLS],servingasasummarizedrepresentation. • N ×F0 is the input feature matrix, denoted as X, where N isthenumberofnodes,andF0 isthenumberofinput 2.3. GraphNeuralNetwork-GNN featuresforeachnode. 2.3.1. Overview • N ×N istheadjacencymatrixA,representingthestruc- Agraphisadatastructureincomputersciencecomprising two components: nodes and edges G = (V, E). Each node has turalinformationaboutthegraph. edges(E)connectingittoothernodes(V).Adirectedgraphhas Thus,ahiddenlayerinGCNcanbewrittenasHi= f(Hi−1,A), arrowsonitsedges,indicatingdirectionaldependencies,while whereH0 = X,and f isthepropagationrule.EachlayerHicor- undirectedgraphslacksucharrows. respondstoafeaturematrixN×Fi,whereeachrowrepresents GraphshaveattractedconsiderableattentioninMachineLe- afeaturerepresentationofanode. Ateachlayer,thesefeatures arningduetotheirpowerfulrepresentationalcapabilities. Each areaggregatedtocreatethefeaturesforthenextlayerusingthe nodeisembeddedintoavector,establishingitspositioninthe propagationrule f. Thisway,thefeaturesbecomeincreasingly data space. Graph Neural Networks (GNNs) are specialized abstractateachconsecutivelayer. VariantsofGCNdifferonly neural network architectures that operate on graphs. The pri- inthechoiceofpropagationrule f. mary goal of GNN architecture is to learn an embedding vec- tor containing information about its local neighborhood. This embedding can be used to address various tasks, such as node 3. Relatedwork labeling,nodeandedgeprediction,andmore. 3.1. 
SoftwareVulnerability In essence, GNNs are a subclass of DL techniques specif- ically designed for performing inference on graph-structured 3.1.1. Concept data. They are applied to graphs and have the ability to per- Softwarevulnerabilitiesrepresenterrors,weaknesses,orim- formpredictiontasksatthenode,edge,andgraphlevels. perfections within software or operating systems that are sus- ceptible to the influence of attacks or malevolent actions that 2.3.2. Classification may inflict harm upon the system or the information it pro- GNNsaredividedintothreetypes: cesses. Softwarevulnerabilitiescanbeexploitedbymalicious actorstocarryoutactionssuchasunauthorizedsystemaccess, • Recurrent Graph Neural Network: In this network, the pilfering sensitive information, impeding the normal function- graph is bidirectional, where data flows in both direc- ingofthesystem,orfacilitatingotherformsofattacks. tions. It applies graph message passing over edges to Withtheswiftdevelopmentofnovelattacktechniques,the propagate the output from the initial direction back to severity of software vulnerabilities is continuously escalating. the graph nodes, but adjusts edge weights based on the All systems inherently harbor latent vulnerabilities; however, previouslyappliedgradientsforthatnode. thepertinentquestionremainswhetherthesevulnerabilitiesare exploitedandresultindeleteriousconsequences. • SpectralConvolutionalNetwork: Thistypesharesasim- ilarideawithCNNs. InCNNs,convolutionisperformed by summing up the values of neighboring data points 33.1.2. Thecurrentstate • ExtractionofVulnerability-ContainingCodePatternsfrom An increasing number of cyberattacks originate from soft- SourceCode: Li’smethodology[17]revolvesaroundthe ware vulnerabilities, resulting in user data breaches and tar- extractionofvulnerability-containingcodepatternsfrom nishing the reputation of companies [35]. Despite numerous sourcecodetotrainmodels. researcheffortsproposedtoaidinvulnerabilitydetection,vul- |
Direction 1 - Automating Semantic Representation Learn- nerabilitiescontinuetoposeathreattothesecureoperationof ingforVulnerabilityPrediction IT infrastructure [36]. The number of disclosed vulnerabili- ties in the Common Vulnerabilities and Exposures (CVE) and Wang’s research [15] is a pioneering study that employs NationalVulnerabilityDatabase(NVD)repositorieshassurged Deep Belief Networks (DBNs) to delve into the semantic rep- fromapproximately4,600in2010to8,000in2014beforesky- resentations of programs. The study’s aim is to harness high- rocketing to over 17,000 in 2017 [17], [37]. These vulnera- level semantic representations learned by neural networks as bilitiesmayhaveledtopotentialthreatsconcerningthesecure vulnerability-indicativefeatures. Specifically,itenablestheau- usageofdigitalproductsanddevicesworldwide[38]. tomatic acquisition of features denoting source code vulnera- bilities without relying on manual techniques. This approach 3.1.3. ExploitationMechanismsofVulnerabilities is not only suited for predicting vulnerabilities within a single projectbutalsoforcross-projectvulnerabilityprediction. Ab- Upondiscoveryofasecurityvulnerability,theattackercan stractSyntaxTrees(ASTs)areemployedtorepresentprograms capitalizeonitbycraftingprogramstoinfiltrateandtakecon- asinputforDBNsintrainingdata. Theyproposedadatapre- trolofthetargeteddevice. Oncesuccessfulingainingaccessto processingapproach,comprisingfoursteps: thetarget,attackersmayconductsystemreconnaissancetofa- miliarizethemselveswithitsworkings. Consequently,theycan • Tokenization: The first step involves parsing the source execute diverse actions such as accessing critical files or de- codeintotokens. ployingmaliciouscode. Leveragingsuchcontrol,attackerscan hijackthecomputerandpilferdatafromthevictim’sdevice. • TokenMapping: Thesecondstepmapstokenstointeger Vulnerabilities are sometimes identified either by software identifiers. developers themselves or through user and researcher alerts. However, in certain cases, hackers or espionage organizations • DBN-basedSemanticFeatureGeneration: Thethirdstep mayuncoverintrusiontechniquesbutrefrainfromnotifyingthe employs DBNs to autonomously generate semantic fea- developers, leading to so-called ”zero-day” vulnerabilities, as tures. developers have not had an opportunity to patch them. As a • VulnerabilityPredictionModelEstablishment: Thefinal result, software or hardware remains exposed to threats until steputilizesDBNstoestablishavulnerabilityprediction patchesorfixesaredistributedtousers. model. Software vulnerabilities can lead to grave consequences, grantingattackersunauthorizedaccessandcontroloverdevices. Direction2-End-to-EndBufferOverflowVulnerabilityPre- To obviate such calamities, the detection and remediation of dictionfromRawSourceCodeusingNeuralNetworks vulnerabilities assume utmost significance. Nevertheless, on Choi’sresearch[16]standsastheinauguralworkproviding certain occasions, vulnerabilities remain latent until they are anend-to-endsolutionfordetectingbufferoverflowvulnerabil- maliciouslyleveraged,wreakingconsiderablehavocuponusers. ities. Experimental studies substantiate that neural networks Tomitigaterisks,regularsoftwareandhardwareupdatestoap- possess the capability to directly learn vulnerability-relevant plypatchesandfixesareessential. characteristics from raw source code, obviating the need for code analysis. The proposed neural network is equipped with 3.2. 
Relatedresearchworks integratedmemoryblockstoretainextensive-rangecodedepen- TherearemyriadresearchdirectionsemployingDLforse- dencies.Consequently,adaptingthisnetworkispivotaliniden- curity vulnerability detection. According to the survey con- tifying buffer overflow vulnerabilities. Test outcomes demon- ductedbyPengZengandcolleagues[14],fourprimaryresearch strate the method’s precision in accurately detecting distinct avenuesareobserved: typesofbufferoverflow. However, this approach still harbors limitations, necessi- • UtilizingDLModelsforSemanticProgramRepresenta- tating further enhancements. A primary constraint lies in its tions: This direction involves the automatic acquisition inability to identify buffer overflow incidents occurring within of semantic program representations using DL models, external functions, as input data excludes code from external asproposedbyWang[15]. files. Anotherlimitationistherequirementforeachlinetoen- • BufferOverflowVulnerabilityPredictionfromRawSource compass data assignments for the model to function. Apply- Code:Choi’sapproach[16]entailspredictingbufferover- ing this method directly to source code containing conditional statementsprovesintricate,asattentionscoresarecomputedto flowvulnerabilitiesdirectlyfromrawsourcecode. locatethemostrelevantcodepositions. • VulnerabilityDetectionforBinaryCode: Liu’sapproach Direction 3 - Vulnerability Detection Solution for Binary [18]targetsvulnerabilitydetectionwithinbinarycode. Code 4Liu’sresearch[18]introducesaDL-basedvulnerabilityde- codealone. Conversely,Direction4hasseenconsecutivestud- tection tool for binary code. This tool is developed with the iesthatextractbothsemanticandsyntacticfeatures,asexempli- intentofexpandingthevulnerabilitydetectiondomainbymiti- fiedbySySeVR[20],enhancingthereliabilityofmodeltrain- |
gatingthescarcityofsourcecode. Totrainthedata,binaryseg- ingdata. RegardingDirection2-End-to-EndBufferOverflow ments are fed into a Bidirectional Long Short-Term Memory Vulnerability Prediction from Raw Source Code using Neural network with Attention mechanism (Att-BiLSTM). The data Networks,ithasadrawbackinthatitexclusivelydetectsBuffer processinginvolvesthreesteps: Overflowvulnerabilities. Incontrast,Direction4employsdata extractionmethodsthatcanincreasethenumberofvulnerabil- • Initially, binary segments are collected by applying the itiestoasmanyas126CWEs,dividedintofourcategories,as IDAProtoolontheoriginalbinarycode. we will discuss in Section 5. As for Direction 3 - Vulnera- bility Detection Solution for Binary Code, it holds significant • In the second step, functions are extracted from binary potential as it can detect vulnerabilities in various program- segmentsandlabeledas”vulnerable”or”non-vulnerable”. ming languages by utilizing binary code. However, its accu- • Inthethirdstep,binarysegmentsareusedasbinaryfea- racy remains lower compared to the other research directions. tures before feeding them into the embedding layer of Forthesereasons,ourteamhasdeterminedtoselectDirection Att-BiLSTM. 4asareferenceandproposeXGV-BERTforfurtherimprove- ment. In our proposed XGV-BERT, the fusion of CodeBERT Thegranularityofdetectionliesatthefunctionlevel.Multi- model and Graph Neural Networks (GNN) represents a com- pleexperimentswereconductedonopen-sourceprojectdatasets pellingstrategywithinthedomainofsoftwarevulnerabilityde- to evaluate the proposed method. The results of these experi- tection. CodeBERT,anadvancedtransformer-basedmodel,ex- ments indicate that the proposed approach outperforms other celsinacquiringmeaningfulcoderepresentations,whileGNNs binary code-based vulnerability detection methods. However, exhibit exceptional prowess in encoding semantic connections this method still has limitations. Notably, the detection accu- presentincodegraphs. ThisharmoniouscombinationofCode- racyisrelativelylow,fallingbelow80%ineachdataset. BERTandGNNselevatestheprecisionandefficiencyofsoft- Direction4-ExtractingVulnerableCodePatternsforModel warevulnerabilitydetection,facilitatingthediscoveryofintri- Training catevulnerabilitiesthatconventionalapproachesmightstruggle Li’s VulDeePecker [17] is the pioneering study that em- touncover. ploys the BiLSTM model for vulnerability detection. This re- search direction aligns with our team’s pursuit as well. This 4. Methodology studyemploysBiLSTMtoextractandlearnlong-rangedepen- dencies from code sequences. The training data for the tool 4.1. Theoverviewarchitecture isderivedfromcodegadgetsrepresentingprograms,servingas To detect vulnerabilities using DL, we need to represent inputfortheBiLSTM.Theprocessingofcodegadgetsinvolves programsinawaythatcapturesbothsyntacticandsemanticin- threestages: formation relevant to the vulnerabilities. There are two main • Theinitialstageentailsextractingcorrespondingprogram contributions presented in Li’s research [20]. To start with, slicesoflibrary/APIfunctioncalls. treatingeachfunctionintheprogramasaregionproposal[39] similartoimageprocessing. However, thisapproachisoverly • The second stage revolves around creating and labeling simplisticbecausevulnerabilitydetectiontoolsnotonlyneedto codegadgets. determine the existence of vulnerabilities within functions but also need to pinpoint the locations of the vulnerabilities. 
This • Thefinalstagefocusesontransformingcodegadgetsinto means we require detailed representations of the programs to vectors. effectivelydetectvulnerabilities.Secondly,theytreateachcode line or statement (used interchangeably) as a unit for vulnera- ExperimentaloutcomesdemonstrateVulDeePecker’scapac- bility detection. However, their approach has two drawbacks. ity to address numerous vulnerabilities, and the integration of More specifics, most statements in a program do not contain human expertise can enhance its effectiveness. However, this any vulnerabilities, resulting in a scarcity of vulnerable sam- methodexhibitscertainlimitationsthatrequirefurtherimprove- ples. Inaddition,manystatementshavesemanticrelationships ment. Firstly, VulDeePecker is restricted to processing pro- witheachother,buttheyarenotconsideredasawhole. gramswritteninC/C++.Secondly,itcansolelyaddressvulner- To synthesize the advantages of the two above proposals, abilitieslinkedtolibrary/APIfunctioncalls. Lastly,theevalu- we divide the program into smaller code segments (i.e., a few ation dataset is relatively small-scale, as it only encompasses linesofcode),correspondingtoregionproposals,andrepresent twotypesofvulnerabilities. thesyntacticandsemanticcharacteristicsofthevulnerabilities. ThereasonweoptedfortheDirection4isthatweassessed After studying Li’s method [20], our research group pro- that this research direction offers certain advantages over the posestheXGV-BERTmethodforefficientlydetectingsoftware other three. In the case of Direction 1 - Automating Seman- vulnerability. Weextractfeaturepatternsfromthesourcecode tic Representation Learning for Vulnerability Prediction, this usingtheCodeBERTmodeltoembedthevectors,andthenwe researchislimitedtoextractingsemanticfeaturesfromsource 5Figure1:TheworkflowofXGV-BERTframeworkforvulnerabilitydetection. feedthemintovariousDLmodels,includingRNN,LSTM,Bi- information,eachSeVCundergoestransformationintoa |
symbolic representation.

We feed these embedded vectors into various DL models, including RNN, LSTM, Bi-LSTM, GRU, Bi-GRU, and the proposed GCN model for comparison and evaluation. Figure 1 illustrates the specific architecture of the steps involved in our proposed software vulnerability detection method.

The architecture we propose uses the original source code as input, followed by the extraction of program slices based on syntax features. From the program slices, we further extract lines of code that have semantic relationships with each program slice, creating Semantics-based Vulnerability Candidates (SeVCs) [20] or code gadgets [17], depending on the dataset, and label them accordingly. Then, we tokenize the SeVCs or code gadgets and feed them into the CodeBERT model for vector embedding. Finally, we construct a DL model for training and predicting vulnerable source code using the embedded vectors.

4.2. Embedding Vectors

In this section, we delve into the process of tokenizing and embedding the SeVCs extracted from the source code for training in the DL model. The following steps outline our approach:

• Symbolic Representation of SeVCs: To ensure the independence of SeVCs from user-defined variable and function names while capturing the program's semantics, each SeVC is transformed into a symbolic representation:
  – Removal of non-ASCII characters and comments from the code.
  – Mapping of user-defined variable names to symbolic names (e.g., "V1", "V2") in a one-to-one correspondence.
  – Mapping of user-defined function names to symbolic names (e.g., "F1", "F2") in a one-to-one correspondence.
It is important to note that different SeVCs may share the same symbolic representation, enhancing the generalizability of the approach.

• Tokenize the symbolic representations: To achieve this, Professor Li's team [20] proposed to split the symbolic representation of a SeVC (e.g., "V1=V2-8;") into a sequence of symbols through lexical analysis (e.g., "V1", "=", "V2", "-", "8", and ";"). Each symbol in the resulting sequence is considered a token. This process is performed for each code line in the SeVC, resulting in a list of tokens for each SeVC.

• After obtaining the list of tokens, we use the CodeBERT model to embed the data. The CodeBERT model used in this study has been pretrained on source code data and is retrained with the tokenized sequences as input. The model architecture consists of multiple Transformer layers, which extract features from the dataset and enhance the contextual information of the vectors. The output of the model is the embedded vectors, which we use as inputs for the DL models to classify the dataset.
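The sketch below illustrates the symbolization and lexical tokenization step described above. It is a simplified, regex-based stand-in for the parser-based extraction used by SySeVR/VulDeePecker: the keyword set, the known_functions argument, and the call-detection heuristic are assumptions made for illustration only.

```python
import re

# Very small C-like keyword set; a real implementation would rely on a parser.
KEYWORDS = {"if", "else", "for", "while", "return", "int", "char", "void", "sizeof"}
TOKEN_RE = re.compile(r"[A-Za-z_]\w*|\d+|==|!=|<=|>=|&&|\|\||[^\sA-Za-z_0-9]")

def symbolize(sevc_lines, known_functions):
    """Map user-defined variables to V1, V2, ... and user-defined functions
    to F1, F2, ... in a one-to-one manner, then lex each line into tokens."""
    var_map, func_map = {}, {}
    out = []
    for line in sevc_lines:
        tokens = TOKEN_RE.findall(line)
        renamed = []
        for i, tok in enumerate(tokens):
            if re.fullmatch(r"[A-Za-z_]\w*", tok) and tok not in KEYWORDS:
                is_call = i + 1 < len(tokens) and tokens[i + 1] == "("
                if is_call and tok in known_functions:
                    renamed.append(func_map.setdefault(tok, f"F{len(func_map) + 1}"))
                elif not is_call:
                    renamed.append(var_map.setdefault(tok, f"V{len(var_map) + 1}"))
                else:
                    renamed.append(tok)          # library/API call names are kept
            else:
                renamed.append(tok)
        out.append(renamed)                      # one token list per code line
    return out

# Example: symbolize(["buf = len - 8;"], known_functions=set())
# -> [["V1", "=", "V2", "-", "8", ";"]]
```

The resulting token lists are what would then be passed to the CodeBERT tokenizer and encoder to obtain the embedded vectors.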
4.3. Training DL Models

In the final part, we utilize DL models for training and classifying the dataset. Specifically, we employ a total of 6 models: RNN, LSTM, Bi-LSTM, GRU, Bi-GRU, and GNN. The input to these models is the embedded vectors. Among these models, the most significant one we propose is the GNN, named XGV-BERT.

For models such as RNN and its variants, including LSTM, GRU, Bi-LSTM, and Bi-GRU, we implement these models with inputs being the embedded vectors. The architecture of these models consists of two hidden layers, accompanied by fully connected layers to converge the output for text classification purposes.

For the GNN model, we employ the GCN architecture for training. Our GCN model is designed to take input data in the form of graphs. The embedded vectors obtained in Section 4.2 need to be processed into graph-structured data before being fed into our training model. To accomplish this, we create adjacency matrices for the embedded vectors, and these vectors are transformed into graph nodes. Figure 2 illustrates the architecture using the proposed GCN model for training and predicting results on the dataset.

Figure 2: The proposed GCN model in XGV-BERT framework.

Specifically, we construct a non-homogeneous graph consisting of both word nodes and slice nodes, based on the idea proposed by Yao [40]. In this adjacency matrix, each word or slice is represented as a one-hot vector and used as input for the GCN model. We create edges between nodes based on the occurrence of a word in a slice (slice-word edge) and the co-occurrence of words across the entire dataset (word-word edge). The weight of an edge between two nodes i and j is defined as follows:

A_ij = PPMI(i, j)    if i and j are words and i ≠ j
A_ij = TF-IDF(i, j)  if i is a slice and j is a word
A_ij = 1             if i = j
A_ij = 0             otherwise

The Term Frequency-Inverse Document Frequency (TF-IDF) value of a word in a slice determines the weight of the edge between a slice node and a word node. This value is used to assess the importance of a word in a slice, where a higher value indicates a higher importance of the word for the slice. Specifically:

• TF (Term Frequency) is the number of times the word appears in the slice.
• IDF (Inverse Document Frequency) is calculated as follows:
  IDF = log(N / n_t)    (1)
  where N is the total number of slices in the dataset, and n_t is the number of slices that contain the word. IDF helps evaluate the importance of a word for a slice, assigning lower IDF scores to frequently occurring words.
• The TF-IDF weight is computed by multiplying the Term Frequency (TF) and the Inverse Document Frequency (IDF).

The Positive Pointwise Mutual Information (PPMI) is used to determine the weight of word pairs, where a higher PPMI value indicates a stronger relationship and co-occurrence frequency between two words. The PPMI value of i and j is calculated using the formula:

PPMI(i, j) = max(log(p(i, j) / (p(i) p(j))), 0)    (2)

Where:
• p(i) is the probability of word i appearing in a slice.
• p(j) is the probability of word j appearing in a slice.
• p(i, j) is the joint probability of both i and j appearing in a slice.
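A minimal sketch of this graph construction is given below. It assumes raw term counts for TF and sliding-window co-occurrence statistics for the PPMI estimate (the window-based estimate follows the usual TextGCN convention attributed to Yao [40]; the paper states the probabilities at slice level, so this is only one plausible instantiation). The window parameter and the dictionary-of-edges output format are illustrative choices, not part of the described method.

```python
import math
from collections import Counter
from itertools import combinations

def build_adjacency(slices, window=20):
    """Build the (slice + word) edge-weight dictionary: slice-word edges
    weighted by TF-IDF, word-word edges by PPMI, and 1.0 on the diagonal.
    `slices` is a list of token lists (one list per SeVC/code gadget)."""
    vocab = sorted({w for s in slices for w in s})
    n_slices, n_words = len(slices), len(vocab)
    df = Counter(w for s in slices for w in set(s))          # slices containing w

    adj = {}
    # Slice-word edges: TF-IDF
    for si, s in enumerate(slices):
        tf = Counter(s)
        for w, f in tf.items():
            idf = math.log(n_slices / df[w])
            adj[(si, n_slices + vocab.index(w))] = f * idf

    # Word-word edges: PPMI over sliding windows
    win_count, pair_count, total = Counter(), Counter(), 0
    for s in slices:
        for start in range(max(1, len(s) - window + 1)):
            win = set(s[start:start + window])
            total += 1
            win_count.update(win)
            pair_count.update(combinations(sorted(win), 2))
    for (wi, wj), c in pair_count.items():
        pmi = math.log((c / total) / ((win_count[wi] / total) * (win_count[wj] / total)))
        if pmi > 0:                                          # keep only positive PMI
            i, j = n_slices + vocab.index(wi), n_slices + vocab.index(wj)
            adj[(i, j)] = adj[(j, i)] = pmi

    for k in range(n_slices + n_words):                      # self-loops (A_ii = 1)
        adj[(k, k)] = 1.0
    return adj, vocab
```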
For the nodes in the adjacency matrix, we utilize the output embeddings of the CodeBERT model and use them as input representations for the slice nodes. Once the data, including the adjacency matrix and the nodes, is constructed, we utilize both of them as inputs for the GCN model. Our GCN model consists of two hidden layers, along with fully connected layers, to converge the output for text classification purposes.

5. Experiments and Analysis

In this section, we conduct experiments to compare XGV-BERT's detection accuracy to the state-of-the-art solutions, VulDeePecker [17] and SySeVR [20]. Before discussing the effectiveness of XGV-BERT, we first go over the implementation specifics and datasets used in the trials.

5.1. Dataset and Preprocessing

5.1.1. Benchmark Dataset

For the evaluation of our approach, we use two datasets from two research papers: SySeVR [20] and VulDeePecker [17]. Both datasets were collected from two sources: the National Vulnerability Database (NVD) [31] and the Software Assurance Reference Dataset (SARD) [30].

The NVD dataset provides vulnerable code snippets from various software products, including both vulnerable code and their corresponding patches. On the other hand, the SARD dataset offers a collection of test, synthetic, and academic programs, labeled as "good" (having no vulnerabilities), "bad" (containing vulnerabilities), and "mixed" (having vulnerabilities whose patched versions are also available).

Regarding the VulDeePecker dataset, it offers program pieces that concentrate on two categories of CWE related to Library/API Function Call vulnerabilities, including resource
i i guments from the library/API function calls, – Transform the program slice into SeVCs with the following the order in the corresponding pro- followingsteps: gramslice. * Convert the statements belonging to function – Labelthecodegadgets: Thosewithoutvulnerabil- f andappearinginprogramsliceps asanode i i ities receive the label ”0,” while those containing intoSeVCswhilepreservingtheoriginalorder vulnerabilitiesarelabeled”1.” ofstatementsinfunction f. i * Convertthestatementsbelongingtootherfunc- Table1:NumberofcodegadgetsextractedfromtheVulDeePeckerdataset tions, which are related to function f through i functioncalls,intoSeVCs. Non- Vulnerable Dataset Total code gad- vulnerable • Step 3. Label the SeVCs: To differentiate between vul- code gad- gets nerableandsafecodepatterns,welabeltheSeVCs,and gets their corresponding vectors, accordingly. A SeVC con- CWE-119 39,753 10,440 29,313 taining a known vulnerability is labeled as ”1,” while it CWE-399 21,885 7,285 14,600 islabeledas”0”iftheSeVCissafeanddoesnotcontain AllDataset 61,637 17,725 43,913 anyvulnerabilities. FortheSySeVRdataset,itprovidesC/C++programscon- Table2:NumberofSeVCsextractedfromtheSySeVRdataset taining 126 CWE related to four types of vulnerabilities: Li- brary/API Function Call (FC-kind), Array Usage (AU-kind), Non- PointerUsage(PU-kind),andArithmeticExpression(AE-kind). Vulnerable Dataset Total vulnerable In total, we have extracted 547,347 SeVCs from the dataset, SeVCs SeVCs comprising52,227SeVCscontainingvulnerabilitiesand495,120 FC-kind 141,023 17,005 124,018 SeVCswithoutvulnerabilities.ThedistributionofSeVCsbased |
AU-kind 55,772 7,996 47,776 onvulnerabilitytypesandtheircorrespondingCWEidentifiers PU-kind 340.324 25.377 314.946 ispresentedinTable2. AE-kind 10.234 1.848 8.386 • Step 1. Extract Syntax-based Vulnerability Candidates AllDataset 61,637 17,725 43,913 (SyVCs). By leveraging these datasets, we were able to comprehen- – RepresenteachfunctionasanAbstractSyntaxTree sively evaluate the effectiveness and performance of our pro- (AST).TherootoftheASTcorrespondstothefunc- posedmethodindetectingsoftwarevulnerabilities. tion,theleavesrepresentthetokensofthefunction, andtheintermediatenodescorrespondtothestate- 5.2. PerformanceMetrics mentsofthefunction. 5.2.1. Detectionmetrics – Comparethecodeelements(comprisingoneormore Toaccordinglyevaluatethemodelprediction,wediscussed consecutivetokens,includingidentifiers,operators, and defined ground truth values as follows: true positive (TP) constants, and keywords) in the AST with a set of represents the number of vulnerable samples that are detected syntactic vulnerability patterns. If a code element asvulnerable;truenegative(TN)representsthenumberofsam- matchesanyelementinthisset,itbecomesaSyVC. plesthatarenotvulnerableandaredetectedasnotvulnerable; • Step2. TransformSyVCsintoSemantic-basedVulnera- False positive (FP) represents the number of samples are not bilityCandidates(SeVCs). vulnerable but are detected as vulnerable; False negative (FN) represents the number of vulnerable samples that are detected – CreateCFGsforeachfunctionintheprogram.From asnotvulnerable. CFGs,generatePDGsforeachfunction. Therefore, we use four metrics as follows for our experi- – Based on the PDG, create program slices ps for ments: i eachSyVC. • Accuracyistheratioofcorrectandtotalpredictions. * Interproceduralforwardslicefs’ i isformedby mergingtheforwardslicefs ioffunction f iwith Accuracy= TP+TN (3) theforwardslicesoffunctionscalledby f i. TP+TN+FP+FN 9• Precisionistheratiooftrulyvulnerablesamplesamong 5.4. ExperimentalResults thedetectedvulnerablesamples. 5.4.1. EffectivenessofCodeBERTembeddingmethod Todemonstratetheeffectivenessofourembeddingmethod TP Precision= (4) using the CodeBERT model (see Section 4.2), we conducted TP+FP experiments and evaluated the performance of the Word2vec • Recallistheproportionoftrulyvulnerablesamplesamong andCodeBERTembeddingmodels.Wecomparedtheseresults totheexperimentalfindingsoftworelatedresearchstudies,Sy- thesamplesthatwerepredictedasnotcontainingvulner- SeVR[20]andVulDeePecker[17]. Tables 5and 6summarize abilities. TP theevaluation’sfindings. Recall= (5) TP+FN Table 5: Evaluation results of vector embedding methods on VulDeePecker • F1-score measures the overall effectiveness by calculat- dataset ingtheharmonicmeanofPrecisionandRecall. Trainingmethod Acc Precision Recall F1 F1−score=2· Recall·Precision (6) VulDeePecker(BiLSTM) 90.8 79.1 77.5 78.3 Recall+Precision OurWord2Vec+LSTM 82.6 85.4 82.3 83.8 OurWord2Vec+BiLSTM 83.4 86.9 81.9 84.3 5.3. ExperimentalSettings CodeBERT+LSTM 83.5 80.9 87.7 84.2 We conducted our experiments on a virtual machine envi- CodeBERT+BiLSTM 86.0 89.5 85.3 87.3 ronmentrunningUbuntu20.04witha8coreCPUs,81.5GBof RAM,40GBofGPURAM,andastoragecapacityof100GB. Table 5 demonstrates that our embedding method using TABLE 3 and TABLE 4 show the architecture for the LSTM the CodeBERT model has achieved significantly better eval- model and XGV-BERT, respectively. To perform our experi- uation results overall compared to VulDeePecker [17]. 
The ments,wetrainedthedatasetsusingthefollowingconfiguration F1scoreforourapproachis87.3%, representingasubstantial onbothmodels: AdamOptimizerwithlearning rate = 0.001, improvement over VulDeePecker’s F1 score of 79.3%. Sim- epoch = 50 and batch size = 32 for RNN, LSTM, BiLSTM, ilarly, the Precision and Recall metrics have also shown im- GRUandBiGRUmodelsandthesameconfigurationforXGV- provements compared to VulDeePecker. Although the detec- BERT model, but with epoch = 4. In the settings for both tionaccuracywhenweusedCodeBERT(86.0%)wasnotsupe- VulDeePecker and SySeVR datasets, we choose 80% samples rior to VulDeepecker (90.8%), it showed improvement when forthetrainingsetandtheremaining20%forthetestset. compared with our experimental evaluation using Word2vec (83.4%). Inthesecases,theF1metricismoresuitableforeval- Table3:ThearchitectureofCNN&GRUmodel uatingtheperformanceofvulnerabilitydetectionbecauseofthe minorityofvulnerablecodecomparedtocleancode. Layer(ID) Activation Outputshape Connectedto Input(1) - (768,1) [] Table6:EvaluationresultsofvectorembeddingmethodsonSySeVRdataset LSTM(2) ReLU (768,300) [(1)] Dropout(3) - (768,300) [(2)] Trainingmethod Acc Precision Recall F1 LSTM(4) ReLU (300) [(3)] SySeVR(LSTM) 95.2 85.2 78.3 81.6 Dropout(5) - (300) [(4)] SySeVR(BiLSTM) 96.0 86.2 82.5 84.3 Dense(6) Sigmoid (1) [(5)] OurWord2Vec+LSTM 91.8 89.8 81.1 85.2 |
OurWord2Vec+BiLSTM 92.6 92.1 81.5 86.4 CodeBERT+LSTM 93.3 88.3 99.8 93.7 Table4:ThearchitectureofXGV-BERTmodel CodeBERT+BiLSTM 93.8 89.1 99.7 94.1 Connected Layer(ID) Activation Outputshape Meanwhile,Table 6showstheexperimentalresultsonthe to SySeVR dataset [20]. Again, our embedding method using [Node: (1,768), Input(1) - [] CodeBERToutperformstheembeddingmethodusingSySeVR’s Edge: (1,768)] Word2vec. By combining BiLSTM with CodeBERT embed- GraphConv ReLU Node: (1,200) [(1)] dings, we achieved a significant improvement in the F1-score, (2) increasingitfrom84.3%to94.1%whencomparedtoSySeVR. GraphConv ReLU Node: (1,200) [(2)] (3) 5.4.2. EffectivenessofDLmodels Dropout(4) - Node: (1,200) [(3)] TheTABLE7andTABLE8presentsthedetectionperfor- Dropout(5) - Node: (1,200) [(4)] mancemetricsofsixmodelsontheVulDeePecker[17]andSy- Dense(6) - Edge: (2) [(5)] SeVR [20] datasets, where all models use CodeBERT’s em- bedding vectors as input. Overall, all models performed ex- ceptionally well, exhibiting high scores across all evaluation 10Table7:TestperformanceofvariousmodelsusingCodeBERTembeddingmethodonVulDeePeckerdataset Dataset Metrics RNN LSTM BiLSTM GRU BGRU XGV-BERT Accuracy 69.3 77.4 79.4 78.5 79.1 98.4 CWE Precision 63.6 70.0 73.3 71.5 75.0 97.9 119 Recall 90.3 96.0 92.6 94.6 87.5 98.1 F1-score 74.6 81.0 81.8 81.5 80.8 97.7 Accuracy 80.6 89.0 91.0 89.3 91.0 98.3 CWE Precision 72.3 84.5 92.4 86.5 91.7 97.9 399 Recall 72.3 84.5 92.4 86.5 92.4 98.1 F1-score 82.4 88.5 91.1 89.0 91.2 98.0 Accuracy 76.8 83.5 86.0 82.4 86.0 97.8 All Precision 74.1 80.9 89.5 79.6 81.6 97.3 Dataset Recall 82.4 87.7 85.3 87.2 94.4 97.7 F1-score 78.1 84.2 87.3 83.2 87.6 97.5 Table8:TestperformanceofvariousmodelsusingCodeBERTembeddingmethodonSySeVRdataset Dataset Metrics RNN LSTM BiLSTM GRU BGRU XGV-BERT Accuracy 88.4 91.5 93.0 92.1 92.8 97.2 Library/API Precision 81.8 85.6 87.9 86.5 87.5 94.4 FunctionCall Recall 99.8 99.7 99.7 99.7 99.7 92.5 F1-score 89.5 92.1 93.4 92.7 93.2 93.4 Accuracy 87.3 91.3 91.9 91.1 92.2 98.6 Array Precision 79.9 85.4 86.4 94.9 86.6 96.7 Usage Recall 99.8 99.7 99.6 99.9 99.7 97.9 F1-score 88.7 92.0 92.5 91.8 92.7 97.3 Accuracy 89.4 94.2 94.6 94.0 94.6 99.7 Pointer Precision 82.7 89.8 90.7 89.6 90.6 98.5 Usage Recall 99.6 99.7 99.4 99.6 99.5 97.5 F1-score 90.4 94.5 94.9 94.3 94.9 98.0 Accuracy 78.9 84.5 88.7 85.4 87.6 95.5 Arithmetic Precision 72.1 76.5 81.6 77.5 80.6 90.1 Expression Recall 94.3 99.5 99.7 99.7 98.9 96.3 F1-score 81.7 86.5 89.8 87.2 88.8 92.8 Accuracy 88.4 93.3 93.8 93.0 93.9 97.8 All Precision 81.4 88.3 89.1 87.8 89.4 94.8 Dataset Recall 99.7 99.8 99.7 99.7 99.7 96.2 F1-score 89.6 93.7 94.1 93.4 94.3 95.5 metrics. Notably, for the VulDeePecker dataset, our proposed synergybetweenCodeBERTandGNNsenhancestheaccuracy XGV-BERTmethodachievedthehighestratingonallfourin- and efficacy of software vulnerability detection, enabling the dices,outperformingtheremainingfivemodels. Similarly,for identification of complex vulnerabilities that may remain elu- the SySeVR dataset, XGV-BERT achieved the highest scores sivethroughconventionalmethods. onthreeoutoffourmetrics,includingaccuracy,precisionand F1-score.Basedonthisevaluationresult,wecanseethatXGV- 6. Conclusion BERTgivesthebestclassifierperformanceinthecomparedDL models. 
6. Conclusion

In conclusion, this study introduces a novel method employing contextual embedding and deep learning techniques for the classification of software programs with vulnerabilities. The newly devised framework, termed XGV-BERT, leverages the sophisticated capabilities of contextualized embeddings through CodeBERT to delineate the intricate interconnections among code attributes essential for identifying security flaws. Within the realm of source code analysis, such embeddings are pivotal in discerning the nuanced relationships between tokens or words present in code fragments, and they empower the model to represent each token or variable uniquely, contingent on its positional context.
Furthermore, XGV-BERT integrates CodeBERT with the advanced Graph Convolutional Network (GCN) deep learning paradigm. A salient feature of GCNs is their adeptness at assimilating contextual intelligence from elaborate graph formations. These networks intrinsically evolve context-sensitive attributes, obviating the necessity for labor-intensive feature crafting. Significantly, GCNs excel in identifying multi-layered contextual associations by analyzing not only the immediate context of a given entity but also the surrounding environment of its neighboring entities and their interconnections. This intrinsic property renders GCNs exceptionally equipped for apprehending multifaceted dependencies within graph-centric data, thereby bolstering their utility across diverse applications. Such an amalgamation augments the depth of learning and information extraction from the multifarious segments inherent in source code.

Our framework can help cybersecurity experts detect errors and vulnerabilities in software programs automatically with high accuracy. The experimental results on the two benchmark datasets, including VulDeePecker and SySeVR, demonstrate the effectiveness of the proposed framework in improving the performance and detection accuracy of DL-based vulnerability detection systems. In the future, we aim to enhance the source code extraction framework. Our primary objective is to refine vulnerability detection granularity. At present, our system operates at the slice level, focusing on multiple semantically interrelated lines of code. Additionally, we aspire to expand our vulnerability detection capabilities across diverse programming languages, as the current framework is limited to extracting information solely from C/C++ source code.
Acknowledgment

This research was supported by The VNUHCM-University of Information Technology's Scientific Research Support Fund.

References

[1] Y. Zhu, G. Lin, L. Song, J. Zhang, The application of neural network for software vulnerability detection: a review, Neural Computing and Applications 35(2) (2023) 1279-1301.
[2] Common vulnerabilities and exposures (CVE). URL https://cve.mitre.org/
[3] Y. Shin, A. Meneely, L. Williams, J. A. Osborne, Evaluating complexity, code churn, and developer activity metrics as indicators of software vulnerabilities, IEEE Transactions on Software Engineering, vol. 37, no. 6 (2010) 772-787.
[4] P. Zeng, G. Lin, L. Pan, Y. Tai, J. Zhang, Software vulnerability analysis and discovery using deep learning techniques: A survey, IEEE Access 8 (2020) 197158-197172.
[5] H. Hanif, M. H. N. M. Nasir, M. F. Ab Razak, A. Firdaus, N. B. Anuar, The rise of software vulnerability: Taxonomy of software vulnerabilities detection and machine learning approaches, Journal of Network and Computer Applications 179 (2021) 103009.
[6] T. H. Le, H. Chen, M. A. Babar, A survey on data-driven software vulnerability assessment and prioritization, ACM Computing Surveys 55(5) (2022) 1-39.
[7] T. Marjanov, I. Pashchenko, F. Massacci, Machine learning for source code vulnerability detection: What works and what isn't there yet, IEEE Security & Privacy 20(5) (2022) 60-76.
[8] G. Lin, S. Wen, Q.-L. Han, J. Zhang, Y. Xiang, Software vulnerability detection using deep neural networks: a survey, Proceedings of the IEEE 108(10) (2020) 1825-1848.
[9] R. A. Khan, S. U. Khan, H. U. Khan, M. Ilyas, Systematic mapping study on security approaches in secure software engineering, IEEE Access 9 (2021) 19139-19160.
[10] D. Zou, S. Wang, S. Xu, Z. Li, H. Jin, "muVulDeePecker: A deep learning-based system for multiclass vulnerability detection", IEEE Transactions on Dependable and Secure Computing 18(5) (2021) 2224-2236. doi:10.1109/TDSC.2019.2942930.
[11] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever, Improving language understanding by generative pre-training (2018).
[12] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, arXiv preprint arXiv:1810.04805 (2018).
[13] Z. Feng, D. Guo, D. Tang, N. Duan, X. Feng, M. Gong, L. Shou, B. Qin, T. Liu, D. Jiang, M. Zhou, CodeBERT: A pre-trained model for programming and natural languages, in: Findings of the Association for Computational Linguistics: EMNLP 2020, Association for Computational Linguistics, Online, 2020, pp. 1536-1547. doi:10.18653/v1/2020.findings-emnlp.139. URL https://aclanthology.org/2020.findings-emnlp.139
[14] P. Zeng, G. Lin, L. Pan, Y. Tai, J. Zhang, Software vulnerability analysis and discovery using deep learning techniques: A survey, IEEE Access (2020).
[15] S. Wang, T. Liu, L. Tan, Automatically learning semantic features for defect prediction, Proc. 38th Int. Conf. Softw. Eng. (ICSE) (2016) 297-308.
[16] M.-j. Choi, S. Jeong, H. Oh, J. Choo, End-to-end prediction of buffer overruns from raw source code via neural memory networks, arXiv preprint arXiv:1703.02458 (2017).
[17] Z. Li, D. Zou, S. Xu, X. Ou, H. Jin, S. Wang, Z. Deng, Y. Zhong, VulDeePecker: A deep learning-based system for vulnerability detection, Proc. Netw. Distrib. Syst. Secur. Symp. (2018).
[18] S. Liu, M. Dibaei, Y. Tai, C. Chen, J. Zhang, Y. Xiang, Cyber vulnerability intelligence for internet of things binary, IEEE Trans. Ind. Informat. (2020) 2154-2163.
[19] R. Croft, Y. Xie, M. A. Babar, Data preparation for software vulnerability prediction: A systematic literature review, IEEE Transactions on Software Engineering 49(3) (2022) 1044-1063.
[20] Z. Li, D. Zou, S. Xu, H. Jin, Y. Zhu, Z. Chen, SySeVR: A framework for using deep learning to detect software vulnerabilities, arXiv preprint arXiv:1807.06756 (2021).
[21] S. Kim, J. Choi, M. E. Ahmed, S. Nepal, H. Kim, VulDeBERT: A vulnerability detection system using BERT, 2022 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW) (2022).
[22] H. Wang, G. Ye, Z. Tang, S. H. Tan, S. Huang, D. Fang, Y. Feng, L. Bian, Z. Wang, Combining graph-based learning with automated data collection for code vulnerability detection, IEEE Transactions on Information Forensics and Security 16 (2021) 1943-1958. doi:10.1109/TIFS.2020.3044773.
[23] S. Cao, X. Sun, L. Bo, R. Wu, B. Li, C. Tao, MVD: Memory-related vulnerability detection based on flow-sensitive graph neural networks, in: Proceedings of the 44th International Conference on Software Engineering, ICSE '22, Association for Computing Machinery, New York, NY, USA, 2022, pp. 1456-1468. doi:10.1145/3510003.3510219. URL https://doi.org/10.1145/3510003.3510219
[24] S. Wang, X. Wang, K. Sun, S. Jajodia, H. Wang, Q. Li, GraphSPD: Graph-based security patch detection with enriched code semantics, in: 2023 IEEE Symposium on Security and Privacy (SP), 2023, pp. 2409-2426. doi:10.1109/SP46215.2023.10179479.
[25] Y. Wu, D. Zou, S. Dou, W. Yang, D. Xu, H. Jin, VulCNN: An image-inspired scalable vulnerability detection system, in: Proceedings of the 44th International Conference on Software Engineering, ICSE '22, Association for Computing Machinery, New York, NY, USA, 2022, pp. 2365-2376. doi:10.1145/3510003.3510229. URL https://doi.org/10.1145/3510003.3510229
[26] D. Hin, A. Kan, H. Chen, M. A. Babar, LineVD: Statement-level vulnerability detection using graph neural networks, in: Proceedings of the 19th International Conference on Mining Software Repositories, 2022, pp. 596-607.
[27] V.-A. Nguyen, D. Q. Nguyen, V. Nguyen, T. Le, Q. H. Tran, D. Phung, ReGVD: Revisiting graph neural networks for vulnerability detection, in: Proceedings of the ACM/IEEE 44th International Conference on Software Engineering: Companion Proceedings, ICSE '22, Association for Computing Machinery, New York, NY, USA, 2022, pp. 178-182. doi:10.1145/3510454.3516865. URL https://doi.org/10.1145/3510454.3516865
[28] W. Guo, Y. Fang, C. Huang, H. Ou, C. Lin, Y. Guo, HyVulDect: A hybrid semantic vulnerability mining system based on graph neural network, Computers & Security (2022) 102823.
[29] W. Tang, M. Tang, M. Ban, Z. Zhao, M. Feng, CSGVD: A deep learning approach combining sequence and graph embedding for source code vulnerability detection, Journal of Systems and Software 199 (2023) 111623.
[30] Software assurance reference dataset. URL https://samate.nist.gov/SRD/index.php
[31] National vulnerability database. URL https://nvd.nist.gov/
[32] M. Hicks, J. S. Foster, I. Neamtiu, Understanding source code evolution using abstract syntax tree matching, ACM SIGSOFT Software Engineering Notes (2005).
[33] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, Google's neural machine translation system: Bridging the gap between human and machine translation, Computer Science, Computation and Language (2016).
[34] T. N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, Computer Science, Machine Learning (2016).
[35] G. Lin, S. Wen, Q.-L. Han, J. Zhang, Y. Xiang, Software vulnerability detection using deep neural networks: A survey, Proc. IEEE, vol. 108, issue 10 (2020) 1825-1848.
[36] D. Votipka, R. Stevens, E. Redmiles, J. Hu, M. Mazurek, Hackers vs. testers: A comparison of software vulnerability discovery processes, Proc. IEEE Symp. Secur. Privacy (SP) (2018) 374-391.
[37] Record-breaking number of vulnerabilities disclosed in 2017: Report (2018). URL https://www.securityweek.com/record-breaking-number-vulnerabilities-disclosed-2017-report
[38] R. Coulter, Q.-L. Han, L. Pan, J. Zhang, Y. Xiang, Data-driven cyber security in perspective: intelligent traffic analysis, IEEE Transactions on Cybernetics, vol. 50, issue 7 (2019) 3081-3093.
[39] S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: Towards real-time object detection with region proposal networks, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, issue 6 (2017) 1137-1149.
[40] L. Y. et al., Graph convolutional networks for text classification, AAAI 33rd (2019).
2309.14742 SyzTrust: State-aware Fuzzing on Trusted OS Designed for IoT Devices Qinying Wang∗†, Boyu Chang∗, Shouling Ji∗((cid:0)),, Yuan Tian‡, Xuhong Zhang∗, Binbin Zhao§, Gaoning Pan∗, Chenyang Lyu∗, Mathias Payer†, Wenhai Wang∗, Raheem Beyah§ ∗Zhejiang University, †EPFL, ‡University of California, Los Angelos, §Georgia Institute of Technology E-mails: {wangqinying, bychang, sji}@zju.edu.cn, yuant@ucla.edu, zhangxuhong@zju.edu.cn, binbin.zhao@gatech.edu, {pgn, puppet}@zju.edu.cn, mathias.payer@nebelwelt.net, zdzzlab@zju.edu.cn, rbeyah@ece.gatech.edu Abstract—Trusted Execution Environments (TEEs) embedded Samsung have designed TEEs for low-end Microcontroller in IoT devices provide a deployable solution to secure IoT Units (MCUs) [20], [57], [61] and device manufacturers applications at the hardware level. By design, in TEEs, the have embedded the TEE in IoT devices such as unmanned Trusted Operating System (Trusted OS) is the primary com- aerialvehiclesandsmartlocks,toprotectsensitivedataand ponent. It enables the TEE to use security-based design tech- toprovidekeymanagementservices.ATEEiscomposedof niques, such as data encryption and identity authentication. Client Applications (CAs), Trusted Applications (TAs), and Once a Trusted OS has been exploited, the TEE can no aTrustedOperatingSystem(TrustedOS).Amongthem,the longerensuresecurity.However,TrustedOSesforIoTdevices Trusted OS is the primary component to enable the TEE using security-based design techniques, and its security is have received little security analysis, which is challenging theunderlyingpremiseofareliableTEEwherethecodeand from several perspectives: (1) Trusted OSes are closed-source data are protected in terms of confidentiality and integrity. and have an unfavorable environment for sending test cases Unfortunately,implementationflawsinTrustedOSesviolate and collecting feedback. (2) Trusted OSes have complex data the protection guarantees, bypassing confidentiality and in- structuresandrequireastatefulworkflow,whichlimitsexisting tegrityguarantees.Theseflawsleadtocriticalconsequences, vulnerability detection tools. including sensitive information leakage (CVE-2019-25052) To address the challenges, we present SYZTRUST, the and code execution within the Trusted OS context [13], first state-aware fuzzing framework for vetting the security of [42]. Once attackers gain control of Trusted OSes, they can resource-limitedTrustedOSes.SYZTRUSTadoptsahardware- launch critical attacks, such as creating a backdoor to the assisted framework to enable fuzzing Trusted OSes directly Linux kernel [12], and extracting full disk encryption keys on IoT devices as well as tracking state and code coverage of Android’s KeyMaster service [11]. non-invasively.SYZTRUSTutilizescompositefeedbacktoguide While TEEs are increasingly embedded in IoT devices, the fuzzer to effectively explore more states as well as to the security of Trusted OS for IoT devices remains under increasethecodecoverage.WeevaluateSYZTRUSTonTrusted studied. Considering the emerging amount of diversified OSes from three major vendors: Samsung, Tsinglink Cloud, MCUs and IoT devices, manual analysis, such as reverse and Ali Cloud. These systems run on Cortex M23/33 MCUs, engineering, requires significant expert efforts and is there- which provide the necessary abstraction for embedded TEEs. fore infeasible at scale. Recent academic works use fuzzing We discovered 70 previously unknown vulnerabilities in their to automate TEE testing. 
However, unlike Trusted OSes Trusted OSes, receiving 10 new CVEs so far. Furthermore, for Android devices, Trusted OSes for IoT devices are compared to the baseline, SYZTRUST has demonstrated sig- built on TrustZone-M with low-power and cost-sensitive nificant improvements, including 66% higher code coverage, MCUs, including NuMicro M23. Thus, Trusted OSes for 651%higherstatecoverage,and31%improvedvulnerability- IoT devices are more hardware-dependent and resource- findingcapability.Wereportalldiscoverednewvulnerabilities constrained, complicating the development of scalable and to vendors and open source SYZTRUST. usable testing approaches with different challenges. In the following, we conclude two challenges for fuzzing IoT 1. Introduction Trusted OSes. Challenge I: The inability of instrumentation and re- Trusted Execution Environments (TEEs) are essential stricted environment. Most Trusted OSes are closed- to securing important data and operations in IoT devices. source. Additionally, TEE implementations, especially the GlobalPlatform, the leading technical standards organiza- Trusted OSes are often encrypted by IoT vendors, which tion, has reported a 25-percent increase in the number implies the inability to instrument and monitor the code ex- of TEE-enabled IoT processors being shipped quarterly, ecution in the secure world. Accordingly, classic feedback- year-over-year [29]. Recently, major IoT vendors such as driven fuzzing cannot be directly applied to the scenario of testing TEEs including TAs and Trusted OSes. Existing ShoulingJiisthecorrespondingauthor. works either rely on on-device binary instrumentations [15] 1 3202 peS 62 ]RC.sc[ 1v24741.9032:viXraor require professional domain knowledge and rehosting IoT devices is resource constrained, which makes storing through proprietary firmware emulation [32] to enable test- ETM traces on board difficult and limits the fuzzing speed. ingandcoveragetracking.However,asfortheTrustedOSes Additionally, the TEE internals are complicated and have designed for IoT devices, the situation is more challenging multiple components, which generate noisy trace packets. |
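To make the point about noisy trace packets concrete: because the ETM records everything executed on the MCU, the collected stream has to be narrowed to the Trusted OS before it is useful as coverage. A minimal host-side sketch of such an address-range filter is shown below; the packet representation and the secure-world bounds are placeholder assumptions, not SyzTrust's actual data structures, and the real filtering is configured in hardware via DWT events as described later in Section 4.3.

# Illustrative host-side filter: keep only branch packets whose target
# address lies inside the secure-world code region of the Trusted OS.
# SECURE_START/SECURE_END are hypothetical, MCU-specific bounds.
SECURE_START = 0x10000000
SECURE_END = 0x10080000

def filter_secure_world(packets):
    """Drop trace packets that were not generated by secure-world code."""
    return [p for p in packets
            if p.get("addr") is not None and SECURE_START <= p["addr"] < SECURE_END]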
due to the following two reasons. First, IoT devices are Therefore, we offload heavy-weight tasks to a PC and care- severely resource-limited, while existing binary instrumen- fully scheduled the fuzzing subprocesses in a more parallel tations are heavy-weight for them and considerably limit way.Wealsopresentanevent-andaddress-basedtracefilter their execution speed. Second, as for rehosting, IoT devices to handle the noisy trace packets that are not executed by are mostly hardware-dependent, rendering the reverse en- the Trusted OS. Additionally, we adopt an efficient code gineering and implementation effort for emulated software coverage calculation algorithm directly on the raw packets. andhardwarecomponentsunacceptable.Inaddition,rehost- Second, as for the Challenge II, the vulnerability detec- ing faces the limitation of the inaccuracy of modeling the tion capability of coverage-based fuzzers is limited, and a behavior of hardware components. To our best knowledge, more effective fuzzing strategy is required. Therefore, we the only existing TEE rehosting solution PartEmu [32] is propose a composite feedback mechanism, which enhances not publicly available and does not support the mainstream codecoveragewithstatefeedback.Tobespecific,weutilize TEE based on Cortex-M MCUs designed for IoT devices. state variables that control the execution contexts to present Challenge II: Complex structure and stateful work- thestatesofaTrustedOS.However,suchstatevariablesare flow. Trusted OSes for IoT devices are surprisingly com- usually stored in closed-source and customized data struc- plex. Specifically, Trusted OSes implement multiple cryp- tures within Trusted OSes. Existing state variable inference tographic algorithms, such as AES and MAC, without un- methods either use explicit protocol packet sequences [45] derlying hardware support for these algorithms as would or require source codes of target software [35], [68], which be present on Cortex-A processors. To implement these are unavailable for Trusted OSes. Therefore, to identify the algorithms in a secure way, Trusted OSes maintain several state-related members from those complex data structures, state diagrams to store the execution contexts and guide the SYZTRUST collects some heuristics for Trusted OS and execution workflow. To explore more states of a Trusted utilizes them to perform an active state variable inference OS, a fuzzer needs to feed syscall sequences in several algorithm.Afterthat,SYZTRUSTmonitorsthestatevariable specific orders with different specific state-related argument values during the fuzzing procedure as the state feedback. values. Without considering such statefulness of Trusted Finally, SYZTRUST is the first end-to-end solution ca- OSes,coverage-basedfuzzersareunlikelytoexplorefurther pable of fuzzing Trusted OSes for IoT devices. Moreover, states, causing the executions to miss the vulnerabilities thedesignoftheon-devicefuzzingframeworkandmodular hidden in a deep state. Unfortunately, existing fuzzing tech- implementation make SYZTRUST more extensible. With niques lack state-awareness for Trusted OSes. Specifically, several MCU-specific configurations, SYZTRUST scales to they have trouble understanding which state a Trusted OS Trusted OSes on different MCUs from different vendors. reaches since there are no rich-semantics response codes to Evaluation. We evaluate SYZTRUST on real-world Trusted indicate it. 
In addition, due to the lack of source code and OSes from three leading IoT vendors Samsung, Tsinglink theinabilityofinstrumentation,itishardtoinferandextract Cloud, and Ali Cloud. The evaluation result shows that the state variables by program analysis. SYZTRUST is effective at discovering new vulnerabilities Oursolution.Toaddresstheabovekeychallenges,wepro- andexploringnewstatesandcodes.Asaresult, SYZTRUST poseandimplementSYZTRUST,thefirstfuzzingframework hasdiscovered70newvulnerabilities.Amongthem,vendors targetingTrustedOSesforIoTdevices,supportingstateand confirmed 28, and assigned 10 CVE IDs. The vendors coverage-guided fuzzing. Specifically, we propose an on- are still investigating others. Compared to state-of-the-art device fuzzing framework and leverage a hardware-in-the- approaches, SYZTRUST findsmorevulnerabilities,hits66% loop approach. To support in-depth vulnerability detection, higher code branches, and 651% higher state coverage. weproposeacompositefeedbackmechanismthatguidesthe Summary and contributions. fuzzer to explore more states and increase code coverage. SYZTRUST necessitates diverse contributions. First, to • We propose SYZTRUST, the first fuzzing framework tackle Challenge I, we propose a hardware-assisted fuzzing targetingTrustedOSesforIoTdevices,supportingeffective framework to execute test cases as well as collect code state and code coverage guided fuzzing. With a carefully coverage feedback. Specifically, we decouple the execution designed hardware-assisted fuzzing framework and a com- engine from the rest of the fuzzer to enable directly exe- posite feedback mechanism, SYZTRUST is extensible and cuting test cases in the protective TEE secure world on the configurable to different IoT devices. resource-limited MCU. To support coverage tracking, we • With SYZTRUST, we evaluate three popular Trusted presentaselectivetracecollectionapproachinsteadofcostly OSes on three leading IoT vendors and detect several code instrumentation to enable tracing instructions on a previously unknown bugs. We have responsibly reported targetMCU.Inparticular,weleveragetheARMEmbedded these vulnerabilities to the vendors and got acknowledged Trace Macrocell (ETM) feature to collect raw trace packets from vendors such as Samsung. We release SYZTRUST |
by monitoring instruction and data buses on MCU with as an open-source tool for facilitating further studies at a low-performance impact. However, the Trusted OS for https://github.com/SyzTrust. 22.2. Debug Probe Normal World Secure World Client Applications (CAs) Trusted Applications (TAs) A debug probe is a special hardware device for low- level control of ARM-based MCUs, using DAP (Debug CA CA CA TA TA TA Access Port) provided by the ARM CoreSight Architecture [25]. It bridges the connection between a computer and an TEE Client API TEE Internal API MCU and provides full debugging functionality, including watchpoints, flash memory breakpoints, memory, as well as Rich OS Switch Trusted OS register examination or editing. In addition, a debug probe (e.g., FreeRTOS) Instructions can record data and instruction accesses at runtime through the ARM ETM feature. ETM is a subsystem of ARM Figure 1: Structure of TrustZone-M based TEE. Coresight Architecture and allows for traceability, whose function is similar to Intel PT. The ETM generates trace elements for executed signpost instructions that enable 2. Background reconstruction of all the executed instructions. Utilizing the above features, the debug probe has shown its effectiveness in tracing and debugging malware [49], unpacking Android apps [65], or fuzzing Linux peripheral drivers [37]. 2.1. TEE and Trusted OS 3. Threat Model A TEE is a secure enclave on a device’s main processor Our attacker tries to achieve multiple goals: gaining that is separated from the main OS. It ensures the confiden- control over, extracting confidential information from, and tiality and integrity of code and data loaded inside it [64]. causing crashes in other Trusted Applications (TAs) hosted ForstandardizingtheTEEimplementations,GlobalPlatform on the same Trusted OS or the Trusted OS itself. We (GP)hasdevelopedanumberofspecifications.Forinstance, considertwopracticalattackscenarios.First,anattackercan it specifies the TEE Internal Core API implemented in the exploitourdiscoveredvulnerabilitiesbyprovidingcarefully Trusted OS to enable a TA to perform its security functions crafted data to a TA. They can utilize a malicious Client [30]. However, it is difficult for vendors to implement a Application (CA) to pass the crafted data to a TA. For Trusted OS correctly since there are lots of APIs with instance, in mTower, CVE-2022-38511 (ID 1 in Table 1) complexandstatefulfunctionsdefinedintheGPTEEspec- can be triggered by passing a large key size value from a ifications. For instance, the TEE Internal Core API defines CA to a TA. Second, an attacker can exploit our discovered six types of APIs, including cryptographic operations APIs vulnerabilities by injecting a malicious TA into the secure supportingmorethan20complexcryptographicalgorithms. world. They can do this through rollback attacks or electro- In addition, the TEE Internal Core API also requires that a magnetic fault injections (CVE-2022-47549). Trusted OS shall implement state diagrams to manage the operation.Thetwoaforementionedfactorsmakeitchalleng- 4. Design ing for device vendors to develop a secure Trusted OS. ARMTrustZonehasbecomethede-factohardwaretech- Figure 2 gives an overview of SYZTRUST’s design. nologytoimplementTEEsinmobileenvironments[17]and SYZTRUST includestwomodules:thefuzzingengineonthe has been deployed in servers [33], low-end IoT devices [8], Personal Computer (PC) and the execution engine on the [53], and industrial control systems [27]. For IoT devices, MCU. 
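Viewed end to end, the two-module design just introduced boils down to the loop below: generate or mutate a test case on the PC, run it on the MCU through the debug probe, and feed the combined code- and state-coverage back into the seed corpus. This is an illustrative Python skeleton of that workflow; the manager, probe, and corpus objects and their method names are placeholders for the components in Figure 2, not the released implementation.

def fuzz_loop(manager, probe, corpus, iterations=100_000):
    """High-level sketch of the workflow in Figure 2 (placeholder APIs)."""
    for _ in range(iterations):
        # 1. Schedule: generate from syscall templates or mutate a seed.
        if corpus.empty() or manager.coin_flip():
            test_case = manager.generate_from_templates()
        else:
            test_case = manager.mutate(corpus.select_seed())

        # 2. Execute on the MCU via the debug probe (proxy CA/TA pair).
        probe.reset_mcu()
        probe.write_payload(test_case.serialize())
        trace, state_values = probe.collect_feedback()

        # 3. Composite feedback: branch coverage plus state coverage.
        new_branches = manager.branch_coverage(trace)
        new_states = manager.state_coverage(state_values)
        if new_branches or new_states:
            corpus.preserve(test_case, new_branches, new_states)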
The fuzzing engine generates and sends test cases theCortex-M23/33MCUs,introducedbytheARMCommu- to the MCU via the debug probe. The execution engine nity in 2016, are built on new TrustZone for ARMv8-M as executes the received test case on the target Trusted OS. security foundations for billions of devices [22]. TrustZone At a high level, we propose a hardware-assisted fuzzing forARMv8-MCortex-Mhasbeenoptimizedforfastercon- framework and a composite feedback mechanism to guide textswitchandlow-powerapplicationsandisdesignedfrom the fuzzer. Given the inaccessible environment of Trusted the ground up instead of being reused from Cortex-A [54]. OSes, we design a TA and CA pair as a proxy to the As Figure 1 shows, instead of utilizing a secure monitor TrustedOSandutilizeadebugprobetoaccesstheMCUfor in TrustZone for Cortex-A, the division of the secure and feedback collection. To handle the challenge of limited re- normal world is memory-based, and transitions take place sources,wedecoupletheexecutionenginefrom SYZTRUST automatically in the exception handle mode. Based on the and only run it on the MCU. This allows SYZTRUST, TrustZone-M, IoT vendors provide Trusted OS binaries to with its resource-demanding core components, to run more thedevicemanufacturers andthenthedevicemanufacturers effectively on a PC. To handle the statefulness of Trusted produce devices with device-specific TAs for the end users. OSes, we include state feedback with code coverage in the This paper focuses on the Trusted OSes from different IoT compositefeedback.Statevariablesrepresentinternalstates, vendors and provides security insights for device manufac- and our inference method identifies them in closed-source turers and end users. Trusted OSes. 3State Variable Trusted OS Inference State Variables Fuzzing Engine (on PC) Execution Engine (on MCU) Test cases Initial Seeds Manager Hardware-assisted Controller Test cases Proxy CA & TA Code coverage Trace A Collector composite feedback State coverage State Variables Feedback Trusted OS Monitor Debug Probe Fuzzing Loop Test cases Syscall Feedback Templates Figure 2: Overview of SYZTRUST. SYZTRUST consists of a fuzzing engine running on a PC, an execution engine running on an MCU, and a debug probe to bridge the fuzzing engine and the execution engine. The main workflow is as follows. First, SYZTRUST a target Trusted OS by mutating them. However, there is |
accepts two inputs, including initial seeds and syscall tem- no off-the-shelf seed corpus for testing Trusted OSes, and plates. Second, the manager generates test cases through constructing one is not trivial, as the syscall sequences twofuzzingtasks,includinggeneratinganewtestcasefrom with arguments should follow critical crypto constraints, scratchbasedonsyscalltemplatesorbymutatingaselected and manually constructing valid sequences will need much seed. Then, the generated test cases are delivered to the effort. For example, a valid seed includes a certain order execution engine on MCU through the debug probe. The of syscall sequences to initiate an encryption operation, and executionengineexecutesthesetestcasestotesttheTrusted the key and the encryption mode should be consistent with OS. Meanwhile, the debug probe selectively records the supporting encryption. Fortunately, OP-TEE [39], an open- executed instruction trace, which is processed by our trace sourceTEEimplementationforCortex-A,offersatestsuite, collector as an alternative to code coverage. Additionally, which can be utilized to construct seeds for most Trusted thestatevariablemonitortracksthevaluesofasetoftarget OSes. Specifically, we automatically inject codes in TAs state variables to calculate state coverage via the debug provided by the test suite to log the syscall names and probe. Finally, the code and state coverage is fed to the their arguments. Then we automatically convert those logs manager as composite feedback to guide the fuzzer. The into seeds following the format required by SYZTRUST. In aboveproceduresareiterativelyexecutedandwedenotethe addition, we automatically add data dependencies between iterative workflow as the fuzzing loop. syscalls in the seed corpus. Accordingly, we identify the returnvaluesfromsyscallsthatareinputargumentsofother 4.1. Inputs syscalls and add helper syscalls for the identified values. Althoughthesyscalltemplateandseedcorpusconstruc- SyscalltemplatesandinitialseedsarefedtoSYZTRUST. tion require extra work, it is a one-shot effort and can be Syscall templates. Provided syscall templates, used in the further study of Trusted OS security. SYZTRUST is more likely to generate valid test cases by following the format defined by the templates. Syscall 4.2. A Hardware-assisted Framework templates define the syscalls with their argument types as well as special types that provide the semantics of an Now our hardware-assisted fuzzing framework enables argument. For instance, resources represent the values the generated test cases to be executed in protective and that need to be passed from an output of one syscall resource-constrained trusted environments on IoT devices. to an input of another syscall, which refers to the data To handle the constrained resources in Challenge I, we dependency between syscalls. We manually write syscall decouple the execution engine from the other components templates for the Internal Core API for the GP TEE in of SYZTRUST to ease the execution overhead of the MCU. Syzlang (SYZKALLER’s syscall description language [31]). As shown in Figure 2, the fuzzing engine, which includes However, a value with a complex structure type, which is most core components of SYZTRUST and requires more common in Trusted OSes, cannot be used as a resource computing resources and memory space, runs on a PC, value due to the design limitation of Syzlang. To handle while only the execution engine runs on the MCU. 
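As a concrete picture of the inputs described above (syscall templates whose arguments may be resources produced by earlier calls, plus helper syscalls that expose pointers to complex structures), a test case can be modeled roughly as follows. This is a hedged Python sketch with invented field names and an invented example value set; the real templates are written in Syzlang.

# Illustrative model of a test case: a sequence of syscalls whose arguments
# are constants or resources produced by earlier calls. Field names are ours.
from dataclasses import dataclass, field

@dataclass
class Arg:
    kind: str                  # "const" or "resource" (buffers omitted in this sketch)
    value: int = 0             # literal value for "const" arguments
    resource_id: int = 0       # index of the producing syscall for "resource" arguments

@dataclass
class Syscall:
    ordinal: int               # index into the syscall templates (e.g. TEE_AllocateOperation)
    args: list = field(default_factory=list)
    produces_resource: bool = False   # e.g. a helper syscall returning a handle pointer

@dataclass
class TestCase:
    calls: list = field(default_factory=list)

# Hypothetical example: allocate an operation, then reset it using the
# handle produced by the first call (a resource dependency).
example = TestCase(calls=[
    Syscall(ordinal=1, args=[Arg("const", 0x10000010),   # algorithm (illustrative value)
                             Arg("const", 0x1),          # mode (illustrative value)
                             Arg("const", 256)],         # maxKeySize
            produces_resource=True),
    Syscall(ordinal=2, args=[Arg("resource", resource_id=0)]),
])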
Thus, this limitation, we extend the syscall templates with helper SYZTRUST does the heavy tasks, including seed preserva- syscalls, which accept values of complex structures and tion, seed selection, type-aware generation, and mutation. output their point addresses as resource values. In AsfortheprotectedexecutionenvironmentinChallenge total, we have 37 syscall templates and 8 helper syscalls, I, we design a pair of CA and TA as the execution engine covering all the standard cryptographic APIs. that executes the test cases to test the Trusted OSes. We Initial seeds. Since syscall templates are used to gen- utilize a debug probe to bridge the connection between the erate syntactically valid inputs, SYZTRUST requires initial fuzzing engine and the execution engine. In the following, seeds to collect some valid syscall sequences and argu- weintroducethedesignofdeliveringtestcasesandthepair ments and speed up the fuzzing procedure. Provided initial ofCAandTA.First,asshowninFigure2,thedebugprobe seeds, SYZTRUST continuously generates test cases to test transferstestcasesandfeedbackbetweenthefuzzingengine 4and the execution engine. Specifically, a debug probe can 10 InvokeOneSyscall(); directly access the memory of the target MCU. Thus, the 11 // Data access event is triggered , stop tracing debug probe accepts generated test cases from the manager 12 stop event++; 13 if (AllSyscallsExecuted()){ engine and writes them to a specific memory of the target 14 break; MCU.Asfortestcasetransportation,wedesignaserializer 15 } 16 } while(1); to encode the test cases into minimum binary data denoted 17 } as payloads before sending them to the MCU. A payload 18 includes a sequence of syscalls with their syscall names, Listing 1: A code snippet containing the main execution syscall argument types, and syscall argument values. In ad- logic from our designed TA. dition,SYZTRUSTdenotessyscallnamesasordinalnumbers tominimizethetransportationoverhead.Second,thepairof However, the IoT Trusted OSes are highly resource- CAandTAplaystheroleofaproxytohandlethetestcases. constrained, making locally storing ETM traces infeasi- |
Accordingly, the CA monitors the specific memory in the ble and limiting the speed of local fuzzing. Therefore, MCU. If a payload is written, the CA reads the binary data SYZTRUST utilizes the debug probe (see Section 4.2) to fromthespecificmemoryanddeliversthemtotheTA.Then stream all ETM trace data to the host PC in real time while the TA deserializes the received binary data and executes the target system is running. Moreover, SYZTRUST enables them one by one to test a Trusted OS implementation. parallel execution of test case generation, transmission, and Accordingly, the TA invokes specific syscalls according execution, as well as coverage calculation, thereby boosting to the ordinal numbers and fills them in with arguments the speed of fuzzing. extracted from the payload. To this end, the TA hardcodes For code coverage, we cannot directly use the raw the syscalls declaration codes that are manually prepared. instruction trace packets generated by the ETM as a re- The manual work to prepare the declaration codes is a one- placement for branch coverage due to two issues. First, shot effort that can be easily done by referring to the GP there is a gap between the raw ETM trace packets and TEE Internal Core API specification. the instruction traces generated by the Trusted OS. The TEE internals are complicated and the ETM component 4.3. Selective Instruction Trace Tracking recordsinstructiontracesgeneratedbythesoftwarerunning on the MCU, including the CA, rich OS, the TA, and the Thissectionintroduceshowtoobtainthecodecoverage Trusted OS. Thus, we design a selective instruction trace feedback in SYZTRUST when testing the Trusted OSes. collection strategy to generate fine-grained traces. ARM To handle the inability of instrumentation in Challenge ETM allows enabling/disabling trace collection when cor- I, we present a selective instruction trace track, which is responding events occur. We configure the different events implemented in the trace collector in the fuzzing engine. via the Data Watchpoint and Trace Unit (DWT) hardware Thetracecollectorcontrolsthedebugprobetocollecttraces featureto filterout noisypackets.In SYZTRUST,we aimto synchronouslywhenthetestcasesarebeingexecuted.After calculate the code coverage triggered by every syscall in a completing a test case, it calculates the code coverage and test case and generated by Trusted OS. Thus, as shown in delivers it to the manager as feedback. Listing1,weconfiguretheevent-basedfiltersbyaddingtwo In the selective instruction trace track, we utilize the datawriteaccesseventconditions.Thetwoeventconditions ETM component to enable non-invasive monitoring of the start ETM tracing before invoking a syscall and stop ETM executioncontextoftargetTrustedOSes.Theoverallwork- tracingaftercompletingthesyscall,respectively.Inaddition, flow is as follows. (1) Before each payload is sent to we specify the address range of the secure world shall be the MCU, the hardware-assisted controller resets the target included in the trace stream to filter noisy trace packets MCU via the debug probe. (2) Once the execution engine generated from the normal world. reads a valid payload from the specific memory space and Second,thereisagapbetweentherawETMtracepack- starts to execute it, the hardware-assisted controller starts ets to the quantitive coverage results. To precisely recover the ETM component to record the instruction trace for each thebranchcoverageinformation,wehavetodecodetheraw syscall from the payload. 
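The serializer mentioned above can be pictured as a simple binary packing of each syscall's ordinal, argument count, and argument values, which the proxy TA then decodes and replays in order. The sketch below reuses the TestCase/Syscall/Arg types from the earlier sketch and invents a wire format (a one-byte call count, then per syscall one byte for the ordinal, one for the argument count, and 32-bit little-endian argument words) purely for illustration; the actual payload layout is internal to SyzTrust.

# Illustrative payload serializer. Invented layout:
#   <count:u8> then per syscall <ordinal:u8> <argc:u8> <arg:u32>*argc.
# Resource arguments are encoded as the producing call's index; the TA
# would patch in the runtime handle value. Buffer arguments are omitted.
import struct

def serialize(test_case):
    blob = bytearray([len(test_case.calls)])
    for call in test_case.calls:
        blob += struct.pack("<BB", call.ordinal, len(call.args))
        for arg in call.args:
            raw = arg.resource_id if arg.kind == "resource" else int(arg.value)
            blob += struct.pack("<I", raw & 0xFFFFFFFF)
    return bytes(blob)

# The debug probe would then write serialize(example) into the agreed
# memory region on the MCU, where the CA polls for a fresh payload.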
Specifically, in the meanwhile trace packets and map them to disassembled binary instruc- of executing the syscalls, the generated instruction trace is tionaddress.Afterthat,wecanrecovertheinstructiontraces synchronously recorded and delivered to the fuzzing engine and construct the branch coverage information [69] [65]. via the debug probe. (3) After completing a payload, the However,disassemblingcodeintroducessignificantrun-time hardware-assisted controller computes the coverage based overhead as it incurs high computation cost [18]. Thus, we on the instruction traces for the manager. calculate the branch coverage directly using the raw trace packets [37]. At a high level, this calculation mechanism 1 extern int32 t start event; 2 extern int32 t stop event; utilizes a special basic block generated with Linear Code 3 void TA ProcessEachPayload(){ Sequence and Jump (LCSAJ) [66] to reflect any change in 4 RecvPayloadFromCA(); basic block transitions. LCSAJ basic blocks consist of the 5 DecodePayload(); 6 //Start invoking syscalls one by one basicaddressofarawETMbranchpacketandasequenceof 7 do { branch conditions. The branch conditions indicate whether 8 // Data access event is triggered , start tracing 9 start event++; the instructions followed by the basic address are executed. 5This mechanism performs several hash operations on the Test Cases LCSAJ basic blocks to transform them into random IDs, TEE_AllocateOperation which we utilize as branch coverage feedback. (TEE_OperationHandle *operation, uint32_t algorithm, uint32_t mode, uint32_t maxKeySize) TEE_ResetOperation(…) 4.4. State Variable Inference and Monitoring Test Harness … Output Buffer 0000 0000 00c0 0000 0000 0008 0000 Trusted OS Here we introduce how to identify the internal state State Variable 0000 0000 00a8 2d00 2000 0000 0000 … of the Trusted OS and how to obtain the state coverage Inferer feedback through the state variable monitor component. In particular, the state variable inference provides the state struct __TEE_OperationHandle{ variablemonitorwiththeaddressrangesoftheinferredstate [0:3] algorithm: 0000 0000 00c0 0000 … Hardware-assisted State … variables. Then the state variable monitor tracks the values [40:43] operationState: 0001 Variables Monitor … of these state variables synchronously when the execution } |
engine executes test cases. After completing a test case, Figure 3: State variable inference. the state variable monitor calculates the state coverage and deliversittothemanagerasfeedback.Belowarethedetails about state variable inference and monitoring. implementations. This method is based on the assumption According to the GP TEE Core API specification, that the state variables in the above two handles will have Trusted OSes have to maintain several complex state ma- different values according to different cryptographic oper- chines to achieve the cryptographic algorithm in a highly ation configurations. For instance, in Samsung’s Trusted secure way. To explore all states of Trusted OS, the OS implementation, after executing several syscalls to set fuzzer needs to feed syscall sequences in several specific cryptographic arguments, the value of a state variable from orders with different specific state-related argument val- TEE_OperationHandle changes from 0 to 1, which ues. Coverage-based fuzzers are unlikely to explore further means a cryptographic operation is initialized. states, causing the executions to miss some vulnerabilities Based on the assumption, SYZTRUST uses a test har- hidden in a deep state. For instance, a syscall sequence ness to generate and execute test cases carefully. Then achieves a new DES cryptographic operation configuration SYZTRUST records the buffers of the two handles and ap- by filling different arguments, whose code coverage may plies state variable inference to detect the address ranges of be the same as the syscall sequence to achieve a new statevariablesintherecordedbuffers,asshowninFigure3. AEScryptographicoperationconfiguration.Preservingsuch SYZTRUST firstfiltersrandomlychangeablebytesequences syscall sequences as a seed and further exploring them that store pointers, encryption keys, and cipher text. Specif- will achieve new cryptographic operations and gain new ically, SYZTRUST conducts a 24-hour basic fuzzing proce- code coverages. However, a coverage-based fuzzer may dure on the initial seeds from Section 4.1 and collects the discard such syscall sequences that seem to have no code buffersofthetwohandles.SYZTRUSTthenparsesthebuffer coverage contribution but trigger new internal states. Thus, intofour-bytesequencestorecognizechangeablevalues.Af- SYZTRUST additionally adopts state coverage as feedback terthat,SYZTRUSTcollectsalldifferentvaluesforeachbyte to handle the statefulness of Trusted OSes. sequenceinthefuzzingprocedureandcalculatesthenumber State variable inference. By referring to the GP of times that these different values occur. SYZTRUST con- TEE Internal Core API specification and several open- sidersthebytesequencesthatoccurover80timesasbuffers source TEE implementations from Github, we find that and excludes them from the state variables. For the remain- Trusted OSes maintain two important structures, includ- ing byte sequences, SYZTRUST applies the following infer- ing the struct named TEE_OperationHandle and encetoidentifystatevariables.Inourobservations,thecryp- the struct named TEE_ObjectHandle, which present tographic operation configurations are determined by the the internal states and control the execution context. We operation-related arguments, including the operation mode further find that several vital variables (with names such (encryption, decryption, or verification) and cryptographic as operationState and flags) in the two com- algorithm (AES, DES, or MAC). 
The cryptographic oper- plex structs determine the Trusted OS’ internal state. ation configurations are also determined by the operation- Thus, we utilize the value combinations of state variables related syscall sequences. We identify such arguments and to present the states of Trusted OS and track all state- syscallsbyreferringtoGPTEEInternalCoreAPIspecifica- related variables to collect their values. Then, we con- tion and conclude the operation-related syscalls include the sider a new value of the state variable combination as a syscalls that accept the two handles as an input argument new state. However, the TEE_OperationHandle and and are specified to allocate, init, update, and finish a cryp- TEE_ObjectHandle implementations are customized tographic operation, such as TEE_AllocateOperation andclosesource,makingrecognizingthestatevariablesand and TEE_AllocateTriansientObject. SYZTRUST their addresses challenging. performs mutated seeds that include the operation-related To handle it, we come up with an active infer- syscalls and records the buffers of the two handles. These ence method to recognize the state variables in the bytesequencesthatvarywithcertainsyscallsequenceswith TEE_OperationHandle and TEE_ObjectHandle certain arguments are considered state variables. Finally, 6State Hashes 𝑆𝑡(cid:2869) 𝑆𝑡(cid:2870) . . . 𝑆𝑡(cid:2898) 𝑆𝑑(cid:2869) 𝑆𝑑(cid:2870) Seed Buckets 𝑆𝑑(cid:2871) 2022/12/3 11 . . . preserves and prioritizes seeds that trigger new code or Hit Times 𝐻𝑖𝑡(cid:2869) 𝐻𝑖𝑡(cid:2870) . . . 𝐻𝑖𝑡(cid:2898) states. We design two maps as the seed corpus to preserve HitMap seeds according to code coverage and state feedback. With suchseedcorpus,SYZTRUSTthenperiodicallyselectsseeds SeedMap from the hash table to explore new codes and new states. This section introduces how SYZTRUST fine-tunes the evo- lution through a novel composite feedback mechanism, in- A Seed Node cluding seed preservation and seed selection strategy. •A Syscall Sequence Seed preservation. Given the composite feedback with Arguments •Branch Coverage mechanism, we thus provide two maps as the seed corpus 𝑆𝑑(cid:3014) to store seeds that discover new state values and new code, respectively.AsshowninFigure4,intwomaps,SYZTRUST |
Figure 4: Seed corpus in SYZTRUST. calculates the hash of combination values of state variables as the keys. SYZTRUST maps the state hashes to their hit times in the HitMap and maps the state hashes to seed SYZTRUST outputs the address ranges of these byte se- buckets in the SeedMap. In the SeedMap, for a certain quences and feeds them to the hardware-assisted controller. state hash, the mapping seed bucket contains one or several State variable monitor. The identified state variables seedsthatcanproducethematchingstatevariablevalues.To and their address ranges are then used as configurations construct the SeedMap, SYZTRUST handles the following of the hardware-assisted controller. SYZTRUST utilizes the two situations. First, if a syscall from a test case triggers a debug probe to monitor the state variables. When a new new value combination of state variables, SYZTRUST adds TEE_OperationHandle or TEE_ObjectHandle is a new state hash in the SeedMap. Then, SYZTRUST maps allocated, SYZTRUST records its start address. Given the the new state hash to a new seed bucket. To construct the start address of the two handles and address ranges of state seedbucket,SYZTRUSTutilizesthetestcasewithitsbranch variables, SYZTRUST calculatesthememoryrangesofstate coverage to construct a seed node and appends the seed variablesanddirectlyreadsthememoryviathedebugprobe. bucket with the newly constructed seed node. Second, if Inthefuzzingloop,foreachsyscall,besidethebranchcov- a syscall from a test case triggers a new code coverage, erage, SYZTRUST records the hash of combination values SYZTRUST adds a new seed node in a seed bucket. Specifi- of the state variables as the state coverage. cally, SYZTRUST calculates the hash of combination values ofstatevariablesproducedbythesyscall.Then,SYZTRUST 4.5. Fuzzing Loop looks up the seed bucket that is mapped to the state hash and appends a new seed node that contains the test case After collecting code and state coverage feedback, the and its branch coverage to the seed bucket. Noted, a test fuzzer enters the main fuzzing loop implemented in the case could be stored in multiple seed buckets if it triggers manager. SYZTRUST has a similar basic fuzzing loop to multiple feedbacks at the same time. As for the HitMap, SYZKALLER. It schedules two tasks for test case gener- eachtimeaftercompletingatestcase,SYZTRUSTcalculates ation: generating a new test case from syscall templates all the state hashes it triggers and updates the hit times for or mutating an existing one based on selected seeds. As these state hashes according to their hit times. for the generation task, SYZTRUST faithfully borrows the Seed selection. Given the preserved seed corpus, generation strategy from SYZKALLER. As for the muta- SYZTRUST applies a special seed selection strategy to tion task, SYZTRUST utilizes a composite feedback mech- improve the fuzzing efficiency. Algorithm 1 shows how anism to explore rarely visited states while increasing code SYZTRUST selectsseedsfromthecorpus.First, SYZTRUST coverage. Notably, SYZTRUST does not adopt the Triage chooses a state and then chooses a seed from the map- scheduling tasks from SYZKALLER due to two key reasons. ping seed buckets according to the state. In state selection, First, SYZTRUST fuzzes directly on the MCU, enabling SYZTRUST is more likely to choose a rarely visited state swift MCU resets after each test case, thus mitigating false by a weighted random selection algorithm. The probability positives in branch coverage. 
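The two maps just described can be kept as ordinary dictionaries keyed by a hash of the state-variable value combination: the HitMap counts how often each state has been reached, and the SeedMap buckets the seeds that reach it. The following Python sketch illustrates the preservation rule (a new state opens a new bucket; new branch coverage appends a seed node to the existing bucket); the class and method names are invented for this example.

# Illustrative seed corpus with a HitMap and a SeedMap keyed by state hashes.
import hashlib

class SeedCorpus:
    def __init__(self):
        self.hit_map = {}    # state hash -> hit count
        self.seed_map = {}   # state hash -> list of (test_case, branch_coverage) seed nodes

    @staticmethod
    def state_hash(state_values):
        return hashlib.sha1(repr(sorted(state_values.items())).encode()).hexdigest()

    def preserve(self, test_case, branch_cov, state_values, global_branches):
        key = self.state_hash(state_values)
        new_state = key not in self.seed_map
        new_branches = bool(branch_cov - global_branches)
        if new_state:
            self.seed_map[key] = [(test_case, branch_cov)]       # new state -> new bucket
        elif new_branches:
            self.seed_map[key].append((test_case, branch_cov))   # new coverage -> new seed node
        self.hit_map[key] = self.hit_map.get(key, 0) + 1         # update hit count
        global_branches |= branch_cov                            # track globally seen branches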
Branch coverage validation of choosing a seed is negatively correlated with its hit through triaging is, therefore, unnecessary. Second, exe- times. Thus, we assign each seed a weight value, which cuting test cases for Trusted OSes (averaging 30 syscalls) is the reciprocal of its hit times. Finally, The probability of is time-consuming, incurring a significant overhead for choosing a seed is equal to the proportion of its weight in SYZTRUST to minimize a syscall sequence by removing thesumofallweights.Inseedselection,SYZTRUSTismore syscalls one by one. Appendix A further experimentally likely to choose a seed with high branch coverage. To this demonstrates our intuition for the new scheduling tasks. end,SYZTRUSTchoosesaseedbasedonaweightedrandom selectionalgorithm,andtheprobabilityofchoosingacertain 4.6. Composite Feedback Mechanism seed is equal to the proportion of its branch coverage in all SYZTRUST adopts a novel composite feedback mecha- coverages.Notably,theprobabilitiesofchoosingastateand nism, leveraging both code and state coverage to guide mu- seedaredynamicallyupdatedsincethehittimesandthesum tationtaskswithinthefuzzingloop.Specifically,SYZTRUST of coverage is updated in the fuzzing procedure. 7Algorithm 1: Seed Selection Algorithm structuresintheTrustedOSandtheuseofourstatevariable inference to collect address ranges. These structures can Input: C, SeedMap that maps state hashes to a set be extracted from the documents and header files. We rely of seeds on two heuristics to help extract them. First, state-related Input: H, HitMap that maps state hashes to hit data structures usually have common names, e.g., related to times contextorstate.Second,thestatestructureswillbethein- Output: s, the selected seed putsandoutputsofseveralcryptooperation-relatedsyscalls. 1 Sum W ←0; For example, on Link TEE Air, a pointer named context 2 Map W = initMap() //maps state hashes to their is used among cryptographic syscalls such as tee aes init weights ; and tee aes update, and can be further utilized to infer 3 for each key ∈H do state variables. This information can be obtained from the 4 t← getValue(H,key); 5 Sum W ←Sum W +t−1; crypto.h header file. 6 end 7 for each key ∈H do 5. Implementation |
8 t← getValue(H,key); 9 Map W ←Map W ∪(key,t−1/Sum W); We have implemented a prototype of SYZTRUST on 10 end top of SYZKALLER. We replaced SYZKALLER’s execution 11 St← WeightedRandom StateSelection(Map W); engine with our custom CA and TA pair, integrating our 12 seedSets← getValue(C,St); extendedsyscalltemplates,eliminatingSYZKALLER’striage 13 s← WeightedRandom SeedSelection(seedSets); scheduling task, and implementing our own seed preserva- tion and selection strategy. Sections 4.2, 4.5, and 4.6 detail these adaptations. 4.7. Scope and Scalability of SYZTRUST. Below are details about our implementations. (1) As for theoverallfuzzingframework,weusetheSEGGERJ-Trace SYZTRUST targets Trusted OSes provided by IoT ven- Pro debug probe to control the communication between dors and assumes that (i) a TA can be installed in the the fuzzing engine and the execution engine, as shown in Trusted OS, and (ii) target devices have ETM enabled. Figure 5. The pair of CA and TA is developed following These assumptions align with typical IoT Trusted OS sce- the GP TEE Internal Core API specification and is loaded narios. First, given that IoT device manufacturers often into the MCU following the instructions provided by the needtoimplementdevice-specificTAs,TrustedOSbinaries IoT vendors. To control the debug probe, we developed a supplied by IoT vendors generally allow TA installation. hardware-assisted controller based on the SEGGER J-Link Second, SYZTRUST tests IoT Trusted OSes by deploying SDK. The hardware-assisted controller receives commands them on development boards where ETM is enabled by fromthemanagerandsendsfeedbackcollectedontheMCU default. Moreover, SYZTRUST can directly test the Trusted tothemanagerviasocketcommunications.(2)Fortheselec- OSesfollowingtheGPTEEInternalCoreAPIspecification tive instruction trace track, SYZTRUST integrates the ETM withMCU-specificconfigurations.Ithasbuilt-insupportfor tracingcomponentoftheSEGGERJ-TraceProandrecords testing on alternative Trusted OSes, including proprietary theinstructiontracesfromTrustedOSesnon-invasively.The ones. Appendix B extends discussion with concrete data. raw ETM packets decoder and branch coverage calculation SYZTRUST can support other Trusted OSes. To test a are accomplished in the hardware-assisted controller. (3) newTrustedOSesonadifferentMCU, SYZTRUST requires For the state variable inference and monitoring, SYZTRUST MCU configurations, including the address ranges of spe- follows the testing strategy in Section 4.4 and utilizes an cific memory for storing payloads, the addresses of the data RTT component [2] from the SEGGER J-Trace Pro to events for the event-based filter, and the address ranges of record related state variable values and deliver them to the secure memory for the address-based filter. We developed fuzzing engine. The RTT component accesses memory in tooling in the CA and TA to automatically help the analyst the background with high-speed transmission. obtain all required addresses. In addition, by following the Several tools help us analyze the root cause of detected development documents from IoT TEE vendors, the CA crashes. We utilize CmBacktrace, a customized backtrace and TA may require slight adjustments to meet the format tool [70], to track and locate error codes automatically. Ad- required by the new Trusted OSes and loaded into the Rich ditionally, we develop TEEKASAN based on KASAN [34] OS and Trusted OS. 
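For reference, Algorithm 1 above translates almost directly into the short Python sketch below: a state is drawn with probability inversely proportional to its hit count, and a seed inside the chosen bucket is drawn with probability proportional to its branch coverage. The data layout follows the SeedCorpus sketch shown earlier; this is an illustration, not the released code.

# Illustrative weighted random seed selection (cf. Algorithm 1).
import random

def select_seed(corpus):
    states = list(corpus.hit_map.keys())
    # Rarely visited states get a higher weight (reciprocal of hit count).
    state_weights = [1.0 / corpus.hit_map[s] for s in states]
    chosen_state = random.choices(states, weights=state_weights, k=1)[0]

    bucket = corpus.seed_map[chosen_state]
    # Seeds with larger branch coverage are more likely to be chosen for mutation.
    seed_weights = [max(len(branch_cov), 1) for _, branch_cov in bucket]
    test_case, _ = random.choices(bucket, weights=seed_weights, k=1)[0]
    return test_case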
andMCUASAN[59]tohelpidentifyout-of-boundanduse- To extend SYZTRUST to proprietary Trusted OSes, we after-free vulnerabilities. We integrate TEEKASAN with augment the syscall templates and the API declarations in lightweight compiler instrumentation and develop shadow our designed TA and test harness with the new version memory checking for bug triaging on the open source of these customized APIs. This can be done by referring Trusted OSes. Since TEEKASAN only analyzed a small to the API documents provided by IoT vendors, which is numberofvulnerabilitiesduetothelimitationoftheinstru- simple and requires minimum effort. To enable the state- mentationtoolandmostTrustedOSesareclosed-source,we aware feature, we need expert analysis of the state-related manually triage the remaining vulnerabilities. 8 Power interface USB interface MCU (connected to computer) SWD/ETM interface Debug probe 5 H V H W 7 U D Q V I H U ( [ H F X W L R Q & |