employs multiple different deduplication settings. Hence, given that we have access to the original dataset, we can reproduce the dataset that the model was trained on by following the same procedures.
• SantaCoder is trained for code completion, using the FIM objective. As such, it natively supports the FIM task that is used for TraWiC.

HuggingFace has released multiple versions of SantaCoder. Each model is trained on data that were constructed using different data-cleaning criteria. For our experiments, we chose the comment_to_code model¹ because the comment-to-code ratio for this model is calculated by "using the ast and tokenize modules to extract docstrings and comments from Python files" [9], which allows us to accurately reproduce the data. Following the data cleaning steps outlined in [9], the training dataset for this version was constructed by filtering any script in The Stack that has a comment-to-code ratio between 1% and 80% at the character level after the standard data cleaning steps (removing files from opt-out requests, near-deduplication, PII reduction, etc.).

4.1.2 Llama-2. Llama-2 is a family of LLMs developed by Meta [71]. Llama-2 was pre-trained on 2 trillion tokens of data, has a context length of 4096 tokens, and uses Grouped-Query Attention (GQA) [8]. This family of LLMs comes in different sizes (7B, 13B, 34B, and 70B), and both the pre-trained and instruct models have been released on HuggingFace². The pre-trained base versions provide a foundation for further fine-tuning according to specific downstream tasks, while the fine-tuned versions allow for enhanced performance in conversational applications. For our experiments, we use the pre-trained version with 7 billion parameters and fine-tune it on a dataset that we have constructed.

4.1.3 Mistral 7B. Mistral 7B [38] is a 7-billion-parameter LLM developed by the Mistral research group, trained for language modeling tasks such as chat, code completion, and complex reasoning. This model uses GQA [8] similar

¹ https://huggingface.co/bigcode/santacoder
² https://huggingface.co/collections/meta-llama/llama-2-family-661da1f90a9d678b6f55773b

Manuscript submitted to ACM
Majdinasab et al.
to Llama-2 and additionally uses Sliding Window Attention (SWA) [13], which allows for faster inference speed and reduced computational costs. Alongside its fast inference, Mistral also outperforms Llama-2 13B on all evaluation benchmarks [38]. Similar to Llama-2, Mistral 7B has been released in two versions: a pre-trained base and a fine-tuned version. For our experiments, we chose the pre-trained version³ for its flexibility and potential for fine-tuning.

Both Mistral 7B and Llama-2 were selected for this study due to their widespread use as foundational models for fine-tuning [50]. Unlike models already optimized for specific downstream tasks, these pre-trained base models allowed us to demonstrate the effectiveness of our approach on versatile, general-purpose language models. Their selection also provided a balanced representation of different model architectures and training processes [50]. Even though the developers of both Llama-2 and Mistral provide some general descriptions of the datasets that were used to train the models, unlike SantaCoder, these datasets are not publicly available. Moreover, unlike SantaCoder, neither Mistral nor Llama-2 supports the FIM objective natively. Therefore, for our experiments, we first construct a dataset and fine-tune both models on the constructed dataset. We explain our dataset construction in Section 4.2 and our fine-tuning process in Section 4.3.

4.2 Dataset Construction

In order to assess our approach's correctness, we need to have access to the dataset that an LLM was trained on. By knowing which scripts/projects were used for training the model, we can construct a ground truth for training and validating the classifier for dataset inclusion detection. To do so, we use two datasets as follows.

4.2.1 The Stack.
The Stack⁴ is a large dataset (3 terabytes) of code collected from GitHub by HuggingFace⁵. This dataset contains code for 30 different programming languages and 137.36 million repositories [40]. We focus on the Python repositories in this dataset, as the data cleaning process used for SantaCoder, which we outline in Section 4.1, is accurately replicable for Python. Given the large number of Python scripts and the data cleaning approach used for producing the training data for the LLM, some scripts within a project might be included in the LLM's dataset while others might be excluded. Therefore, we first filter the Python projects by considering a minimum of 10 and a maximum of 50 scripts per project. We focus on projects of such scale for the following reasons:
• Our aim is to increase the likelihood that some scripts within a project might be included in the training dataset of the LLM while some from the same project might not. This will allow us to have a more diverse dataset for both script-level and project-level dataset inclusion detection, as explained in Section 4.5.
• As our primary objective is to detect a project's inclusion in the training dataset of an LLM, by not including overly large projects that contain many scripts, we will have a larger number of projects in our dataset (e.g., instead of having one project with 200 scripts, we can have 5 projects with 40 scripts each).
• As we outline in Section 4.1, we have chosen a data cleaning criterion based on the documentation-to-code ratio.
Therefore, to construct a balanced dataset in which projects are neither under-documented nor over-documented, we do not include overly small projects in our dataset. As reported by Mamun et al. [47], there exists a correlation between the size of a software project and the amount of documentation written for it. Therefore, by not including overly small projects, we increase the likelihood of having high-quality, well-documented scripts in our dataset.

³ https://huggingface.co/mistralai/Mistral-7B-v0.1
⁴ https://huggingface.co/datasets/bigcode/the-stack
⁵ https://huggingface.co/huggingface

It should be noted that SantaCoder was trained on all of the Python, Java, and JavaScript code in The Stack after applying the data cleaning criterion. We then randomly sample from the remaining projects to construct the dataset for inclusion detection for TraWiC, NiCad, and JPlag (as explained in Sections 3.4 and 4.6).

4.2.2 GitHub repositories. As mentioned in Section 4.1, unlike SantaCoder, the datasets that were used to train Mistral and Llama-2 are not publicly available, and the developers of both models do not explicitly declare what data was used to train them. As such, in order to evaluate our approach on much larger and more capable models, we construct a dataset from GitHub repositories that were published after the release of both Mistral and Llama-2 (September 2023). Similar to the dataset used for SantaCoder, in order to have a fixed approach and compare the models methodologically, we focus on repositories that contain Python code. As these repositories were published after the models' release, we can be certain that they were not used as training data for the models under study. After collecting these repositories, we fine-tune the models on the dataset. As such, we will have a ground truth for both Mistral and Llama-2 which contains code that the models have seen during their training and code that was not included in their training dataset.
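As a reference point for the data cleaning criterion described in Section 4.1, the character-level comment-to-code ratio can be sketched with Python's standard ast and tokenize modules. This is a minimal illustration under our reading of [9]; the exact implementation used for SantaCoder's data cleaning may differ.

```python
import ast
import io
import tokenize

def comment_to_code_ratio(source: str) -> float:
    """Character-level comment-to-code ratio (a sketch of the criterion
    from [9]; the original implementation may count characters differently)."""
    # Collect docstring characters with the ast module.
    doc_chars = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Module, ast.FunctionDef,
                             ast.AsyncFunctionDef, ast.ClassDef)):
            doc = ast.get_docstring(node)
            if doc:
                doc_chars += len(doc)
    # Collect comment characters with the tokenize module.
    comment_chars = sum(
        len(tok.string)
        for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type == tokenize.COMMENT
    )
    return (doc_chars + comment_chars) / max(len(source), 1)

script = '"""Module docstring."""\nx = 1  # a comment\n'
ratio = comment_to_code_ratio(script)
# A script is kept for training only if the ratio is between 1% and 80%.
keep = 0.01 <= ratio <= 0.80
```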
The resulting dataset contains over 1,355 Python scripts from 51 repositories, out of which 20,000 data points (which we further explain in Section 4.3) were constructed. 10,000 data points were used for fine-tuning the models and the remaining 10,000 were used for the evaluation of TraWiC itself (and were therefore not included in the models' fine-tuning dataset). The list of repositories that were used to fine-tune the models is available in our replication package [72].

4.3 Fine-tuning

As described in Section 4.1, in contrast to SantaCoder, Mistral and Llama-2 are not trained to support the FIM objective natively. So, we use the dataset that we constructed from GitHub repositories, as described in Section 4.2, to fine-tune the models for the FIM objective. Doing so has two benefits:
• Constructing our own dataset allows for having a ground truth for determining whether a script/project was included in a model's training dataset. Therefore, we can evaluate TraWiC on more capable models without loss of generality.
• As the pre-trained versions of Mistral and Llama-2 do not support the FIM objective, using our own constructed dataset for fine-tuning allows these models to support the FIM task.

In order to fine-tune the models for the FIM objective, we use the same prompt format as the one which was used to train SantaCoder. Listing 5 displays the prompt that was used to fine-tune the models, with prefix and suffix being constructed as explained in Section 3.1 and infill being the infilling objective which the model is tasked to predict.

<fim-prefix>{prefix}<fim-suffix>{suffix}<fim-middle>{infill}<|endoftext|>

Listing 5. The prompt format used for fine-tuning the models under study

As an example, consider the script that was presented in Listing 1. Listing 6 presents a single data point extracted from this script after extracting the syntactic and semantic identifiers as displayed in Listing 2.
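As a minimal sketch, assembling a fine-tuning example in the Listing 5 format can be written as follows. The function name is illustrative; the sentinel tokens are those shown in Listing 5.

```python
def build_fim_example(prefix: str, infill: str, suffix: str) -> str:
    """Assemble one FIM fine-tuning example in the SantaCoder prompt
    format of Listing 5 (a sketch; the helper name is illustrative)."""
    # The model sees both the prefix and suffix before predicting the infill.
    return (
        f"<fim-prefix>{prefix}"
        f"<fim-suffix>{suffix}"
        f"<fim-middle>{infill}"
        "<|endoftext|>"
    )

example = build_fim_example(
    prefix="def print_input_string(input_string):\n    ",
    infill="dummy_variable",
    suffix=" = input_string\n",
)
```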
Following this approach, we go over the collected dataset and generate the fine-tuning dataset. The models are only trained on Python scripts; if code from other languages exists in the dataset, it is not included in the fine-tuning dataset. In order to fine-tune the models on our dataset for the FIM objective, we use the Quantized Low-Rank Adaptation (QLoRA) [24] approach, which is a parameter-efficient fine-tuning (PEFT) method [26]. Furthermore, in order to have comparable results for both Mistral and Llama, both models were fine-tuned on the same dataset. The code and the data that were used for training these models are available in our replication package [72], allowing for replication of our fine-tuned models and TraWiC's results. We present the details of fine-tuning both models in Section 11.

"prefix": "def print_input_string(input_string): """ ..."
"infill": "dummy_variable"
"suffix": " = input_string \n # print the input ...."

Listing 6. An example data point extracted from the script in Listing 1

4.4 Classifier for Dataset Inclusion Detection

We have chosen to use random forests as the classifier for our experiments for the following reasons:
• We emphasize the need for our approach to maintain a high degree of interpretability. By using multiple decision trees (which are inherently interpretable) that comprise the random forest, this type of classifier provides interpretability alongside high classification performance.
• Ensemble methods such as random forests are known for their superior performance over single predictors due
to their ability to reduce overfitting and improve generalization. They are also more computationally efficient compared to the intensive resource demands of training and inference of DNN models [41].

To ensure a thorough comparative analysis of classifiers, we have also experimented with XGBoost (XGB) and Support Vector Machine (SVM) classifiers. For optimal classifier selection, we employed the grid search methodology, which involves an exhaustive exploration of hyperparameters [41], where for each model we evaluated various combinations of settings. We lay out the different sets of configurations used for grid search for each model in the Appendix (Section 11). The best models were selected based on the achieved F-score on the test dataset.

4.5 Detecting Dataset Inclusion

Depending on the data cleaning criteria used for filtering the scripts for training the LLM, as mentioned in Section 4.1, some scripts in a project may not be included in the model's training dataset; e.g., out of 20 scripts in a project, some may not pass the data filtering criteria. Therefore, we outline two different levels of granularity for predicting whether a project was present in a model's training dataset:
• File-level granularity: We determine whether a single script has been in the LLM's training dataset.
• Repository-level granularity: We analyze all the scripts from a repository and consider that the repository has been included in the LLM's training dataset if the classifier predicts a certain number of scripts, out of the total number of scripts in the repository, as being in the training dataset.
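The repository-level criterion can be sketched as follows. This is a minimal illustration (the function name is ours); the inclusion thresholds used later in the experiments, such as 0.4 and 0.6, are passed as a parameter, and a threshold of None stands for the "single positive" criterion.

```python
def repository_included(file_predictions, threshold=None):
    """Aggregate file-level classifier outputs (1 = script predicted as
    included in the training dataset) into a repository-level decision.

    threshold=None: flag the repository on a single positive prediction.
    threshold=0.4 or 0.6: flag the repository when at least that fraction
    of its scripts is predicted as included."""
    if threshold is None:
        return any(file_predictions)
    return sum(file_predictions) / len(file_predictions) >= threshold

preds = [1, 0, 1, 0, 0]  # predictions for the 5 scripts of a repository
repository_included(preds)        # single positive
repository_included(preds, 0.4)   # 2/5 = 0.4, meets the 0.4 threshold
repository_included(preds, 0.6)   # 2/5 < 0.6, below the 0.6 threshold
```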
To analyze the impact of edit distance on the classifier's performance, we define multiple levels of edit distance for a thorough sensitivity analysis. Our analysis for SantaCoder was conducted on 263 repositories extracted from The Stack, totaling 9,409 scripts, with 1,866 scripts not included in SantaCoder's training dataset. For both Llama-2 and Mistral, we used the same dataset that we fine-tuned the models on, designating half as the ground truth for scripts included in the training dataset and the other half as scripts not included. This means the classifier was trained on 1,355 scripts from 51 repositories, out of which 25 repositories and their corresponding scripts were used for fine-tuning the models (and were thus seen by the model during training), while 26 repositories and their corresponding scripts were used as negative instances (not seen by the model during training).

4.6 Comparison With Other Clone Detection Approaches

In software auditing, one of the approaches for detecting copyright infringement is using CCDs to check the code against a repository of copyrighted code [56]. CCD is an active area of research, and different approaches have been proposed for the detection of clones, especially those of Type 3 and Type 4 [7]. This process is carried out through a pair-wise comparison of the code in a project against another dataset that contains copyrighted code. As detailed in Section 2.2, many different approaches are proposed and available for CCD [7]. Out of the proposed approaches, we choose NiCad [59] and JPlag [53] for comparison with TraWiC on the dataset inclusion task, for the following reasons:
• Python support: Out of the proposed, available approaches, NiCad and JPlag are the only ones that support clone detection for Python scripts.
• Availability: Only a few of the proposed approaches are available for use. Furthermore, most of them require expensive re-training on new datasets. However, both NiCad and JPlag are open-source and do not require re-training (as opposed to the proposed ML-based solutions).
• Syntactic errors in generated code: Most of the proposed DNN-based approaches for CCD extract information such as ASTs from the code, which can only be done if the generated code is syntactically valid. Even though LLMs have shown a high capability in generating correct code, limiting CCD to syntactically valid code would severely reduce the size of the dataset that can be used for this study.

NiCad employs a parser-based, language-specific approach using the TXL transformation system to extract and normalize potential code clones. It is capable of detecting code clones up to Type 3 [59]. JPlag [53] compares the similarity of programs in two phases: converting programs into token strings and comparing these strings using the "Greedy String Tiling" [75] algorithm. This algorithm identifies the longest common substrings between two token strings and marks them as tiles. The similarity value is determined by the percentage of tokens covered by these tiles. JPlag's tokenization is language-dependent, and it prioritizes tokens that reflect a program's structure while ignoring superficial aspects like whitespace and comments. For the purpose of this study, we use the latest version of JPlag⁶.

As NiCad and JPlag are designed to detect clones between scripts, they operate differently compared to our approach, where we inspect the similarity between one or multiple generated tokens. Therefore, to determine whether a piece of code was in a model's training dataset with NiCad/JPlag, we query the model to generate code snippets and compare the generated code against each snippet that we know exists in the model's training dataset. Algorithm 5 shows the steps for dataset inclusion detection using NiCad and JPlag. We generate code snippets instead of tokens since comparing the full scripts to each other using NiCad and JPlag would result in positive clone matches, as a large portion of both scripts would be identical, as shown in Figure 2. Our detailed process is as follows:
4.6.1 Pre-processing. We extract the functions and classes that are defined in a script by parsing its code. For example, the script in Listing 1 has 2 functions. After doing so, we break each function/class into two parts based on its number of lines. Figure 2 displays an example of how this process is carried out. This process is repeated for every function and class that is extracted from the script. We consider the first half as the prefix and the second half as the suffix.

4.6.2 Inference. After generating prefix and suffix pairs, we query the model with the prefix and have the model generate the suffix. Note that here, we do not restrict the model to predicting the masked element by giving it the suffix. Instead, we allow the model to generate what comes after the prefix until the model itself generates an "end-of-sequence" token.

⁶ https://github.com/jplag/JPlag/releases/tag/v5.0.0

4.6.3 Comparison. The Stack dataset contains 106.91 million Python scripts [40]. Therefore, comparing each script generated by the model under study in a pair-wise manner against all the other scripts present in the training dataset is computationally infeasible. As such, we compare each generated script with those of multiple randomly sampled repositories from the model's training dataset to detect code clones across projects. Here, we aim to investigate whether the model generates code from its training dataset when it is given a large part of a script as input. If NiCad/JPlag detects a clone between the generated script and another script from the sampled projects, we consider it a hit for detecting that the project was included in the model's training dataset. As we are sampling randomly, there exists the possibility that the model generates code from its training dataset but the generated code is not compared against the project that it was generated from. Therefore, to decrease the likelihood of such cases, we do the following:
• We follow the same project selection criteria as described in Section 4.2 both for selecting the scripts to generate code from and for selecting the projects to compare the generated code against. By only sampling projects that contain a minimum of 10 and a maximum of 50 scripts, we aim to decrease the number of missed hits.
• As we expand upon in Section 5.1.2, we experiment with multiple random sampling ratios in order to have a comprehensive sensitivity analysis.

Algorithm 5: Dataset Inclusion Detection Using NiCad/JPlag
Input: SCRIPT; /* SCRIPT is an entire script from a project. A project can contain many scripts. */
Output: ClassificationResults; /* A binary variable with 1 indicating inclusion in the dataset and 0 otherwise. */
// Break the script into a prefix and suffix tuple based on the number of lines.
prefix, suffix ← ExtractPrefixAndSuffix(SCRIPT);
// Generate the suffix by giving the model the prefix as input.
GeneratedSuffix ← Predict(prefix);
// Compare the generated suffix in a pair-wise manner with every script in the dataset.
foreach s ∈ dataset do
    r ← NiCad/JPlag(s, GeneratedSuffix);
    Store r in ClassificationResults;
end
return ClassificationResults;

Fig. 2. An example of how the prefix and suffix are generated for NiCad and JPlag

5 RESULTS AND ANALYSIS

In this section, we present our experimental results. We begin by laying out our experiment details, continue with presenting the results of our approach compared against a baseline, and conclude this section by conducting an error, feature importance, and sensitivity analysis. Our replication package is publicly accessible online [72].
5.1 RQ1 [Effectiveness]

The results detailed in Tables 2, 3, and 4 showcase the performance of TraWiC's dataset inclusion detection when applied with varying edit distance thresholds. The "Considering only a single positive" criterion showcases the results at file-level granularity. We consider multiple inclusion criteria for predicting whether a repository (i.e., project) was included in the model's training dataset, as outlined in "Considering a threshold of 0.4/0.6". "Precision" measures the percentage of correctly predicted positive observations out of all predicted positives, reflecting the classifier's accuracy in making positive predictions. "Accuracy" gives an overall assessment, showing the ratio of correctly predicted observations to the total predictions made. The "F-score" balances precision and sensitivity (recall), offering a comprehensive metric, especially for uneven datasets. "Sensitivity" assesses the model's capability to correctly distinguish actual positive cases, while "Specificity" measures its performance in distinguishing true negatives. We focus on the F-score as the main performance metric. After the F-score, we focus on sensitivity, as our goal is to detect dataset inclusion; therefore, the higher the classifier's ability to detect true positives, the more suited to our purpose it is.

5.1.1 RQ1a: What is TraWiC's performance on the dataset inclusion task? We observe that the performance of our models varies based on the repository inclusion criterion and the edit distance threshold. Specifically, for SantaCoder, the best performance is achieved with an edit distance of 60 and a repository inclusion threshold of 0.4. In contrast, both fine-tuned versions of Mistral and Llama-2 perform best with an edit distance threshold of 20. However, there are differences in their best-performing conditions: Llama-2 achieves a higher F-score with an inclusion threshold of 0.4, while Mistral performs better in detecting the inclusion of single scripts. Nonetheless, the performance gap between
Llama-2 and Mistral in terms of dataset inclusion detection is not significant when using an edit distance threshold of 20. For all 3 models under study, we can observe that precision, accuracy, and F-score generally decrease as the edit distance threshold increases. This shows that a higher threshold for edit distance may lead to less strict matching criteria, which in turn could result in more false positives and reduce TraWiC's performance. Additionally, we observe that sensitivity marginally increases with the edit distance threshold for SantaCoder but marginally decreases for Mistral and Llama-2 when detecting single-script inclusion. This suggests that while SantaCoder's classifier becomes less precise, it captures more true positives at higher thresholds. However, the performance of Mistral and Llama-2 degrades as the threshold increases.

Examining Table 2, we observe diverse results based on different edit distance thresholds and repository inclusion criteria. The highest F-score, 88.57%, is achieved at an edit distance threshold of 20 when considering a single script's inclusion. Specificity is highest, 71.6%, with the same criteria, indicating a decent balance in predicting when a script was not included in the model's training dataset. For project inclusion detection, we report a sensitivity of 99.19% and a specificity of 2.92% when considering an edit distance threshold of 60. This means that TraWiC has a high capability in detecting whether a project was included in the training dataset. As mentioned in Section 4.5, after following the data cleaning criteria as laid out in [9], some scripts from a project may not be included in the training dataset of the given model. For example, out of the 20 scripts in a project, 10 may not have a comment-to-code ratio between 1% and 80% and therefore will not be used for training the model. Therefore, even though the project itself was included in the model's

Table 2.
Results of TraWiC's code inclusion detection with different edit distance thresholds on SantaCoder

Edit Distance Threshold | Repository Inclusion Criterion | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%)
20 | Considering only a single positive | 88.07 | 83.87 | 88.57 | 89.08 | 71.6
20 | Considering a threshold of 0.4 | 76.52 | 75.1 | 84.82 | 95.13 | 20.63
20 | Considering a threshold of 0.6 | 80.46 | 71.26 | 80.31 | 80.15 | 47.08
40 | Considering only a single positive | 79.57 | 79.11 | 83.39 | 94.49 | 42.91
40 | Considering a threshold of 0.4 | 72.96 | 72.4 | 83.8 | 98.43 | 3.62
40 | Considering a threshold of 0.6 | 75.5 | 73.96 | 84.1 | 94.9 | 18.65
60 | Considering only a single positive | 78.09 | 77.46 | 85.46 | 94.36 | 37.7
60 | Considering a threshold of 0.4 | 71.28 | 71.23 | 82.95 | 99.19 | 2.92
60 | Considering a threshold of 0.6 | 73.3 | 72.11 | 82.89 | 95.38 | 15.6
70 | Considering only a single positive | 78.47 | 78.08 | 85.85 | 94.77 | 38.81
70 | Considering a threshold of 0.4 | 73.3 | 72.83 | 84.17 | 98.83 | 2.11
70 | Considering a threshold of 0.6 | 75.46 | 74.11 | 84.39 | 95.71 | 15.34
80 | Considering only a single positive | 77.79 | 77.47 | 85.55 | 95.02 | 36.17
80 | Considering a threshold of 0.4 | 71.22 | 70.83 | 82.84 | 99 | 1.47
80 | Considering a threshold of 0.6 | 72.42 | 70.98 | 82.41 | 95.6 | 10.3

training dataset, a specific script from it may not be. As such, it should be noted that most of the projects in our dataset (in contrast to scripts) were included in SantaCoder's training dataset, and a low specificity when it comes to project inclusion detection (as opposed to script inclusion detection) is not alarming, as other scripts from the project could have been used in the model's training.
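For reference, the metrics reported in Tables 2, 3, and 4 can be computed from confusion-matrix counts as follows. This is a generic sketch (function name ours), not TraWiC's actual evaluation code.

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute precision, accuracy, F-score, sensitivity, and specificity
    from confusion-matrix counts (assumes no division by zero)."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)  # recall: share of actual positives found
    specificity = tn / (tn + fp)  # share of actual negatives found
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "precision": precision,
        "accuracy": accuracy,
        "f_score": f_score,
        "sensitivity": sensitivity,
        "specificity": specificity,
    }
```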
Examining Tables 3 and 4, we observe various performance metrics for TraWiC on the fine-tuned versions of Llama-2 and Mistral across different edit distance thresholds and repository inclusion criteria. For Llama-2 (Table 3), the highest F-score of 82.05% is achieved at an edit distance threshold of 20 when considering a repository inclusion criterion of 0.4. The highest specificity, 87.5%, is also achieved at an edit distance threshold of 20 with a repository inclusion criterion of 0.4. This indicates a good balance in predicting an entire project's inclusion in the model's training dataset. Sensitivity is highest at 84.23% with an edit distance threshold of 20 when considering only a single positive. This shows that at this edit distance threshold, TraWiC can detect both a single script's and a project's inclusion in a model's training dataset with high performance. For Mistral (Table 4), the highest F-score, 85.37%, is achieved at an edit distance threshold of 20 when considering a single script's inclusion. Similarly, the highest sensitivity of 88.38% is observed under the same conditions. The highest specificity, 85.71%, is achieved with an edit distance threshold of 20 and a repository inclusion criterion of 0.4, indicating strong performance in predicting the absence of scripts in the training dataset. Therefore, we can observe that TraWiC shows high performance in code inclusion detection on LLMs trained on code. It should also be noted that the results presented in Tables 3 and 4 were obtained on models that were fine-tuned on the dataset for 3 epochs. However, for both Llama-2 and Mistral, we can see that TraWiC's performance starts to degrade as we increase the edit distance threshold. We discuss the effects of fine-tuning and the reasons behind TraWiC's performance degradation on these models in RQ3.
Findings 1: When analyzing a single script's inclusion, TraWiC shows high performance in detecting both true positives and true negatives. The best balance is achieved when we employ lower edit distance thresholds for considering semantic identifiers. This means that looking for similarity in the natural language elements of code can be used as a strong signal for detecting dataset inclusion.

Table 3. Results of TraWiC's code inclusion detection with different edit distance thresholds on Llama-2

Edit Distance Threshold | Repository Inclusion Criterion | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%)
20 | Considering only a single positive | 78.68 | 82.74 | 81.36 | 84.23 | 81.54
20 | Considering a threshold of 0.4 | 80.0 | 86.27 | 82.05 | 84.21 | 87.5
20 | Considering a threshold of 0.6 | 75.0 | 72.54 | 68.18 | 62.5 | 81.48
40 | Considering only a single positive | 75.98 | 80.93 | 79.09 | 82.47 | 79.73
40 | Considering a threshold of 0.4 | 75.0 | 75.6 | 75.0 | 75.0 | 76.19
40 | Considering a threshold of 0.6 | 66.66 | 70.0 | 70.0 | 73.68 | 66.66
60 | Considering only a single positive | 73.49 | 79.06 | 76.56 | 79.91 | 78.43
60 | Considering a threshold of 0.4 | 70.0 | 73.17 | 71.79 | 73.68 | 72.72
60 | Considering a threshold of 0.6 | 70.0 | 70.73 | 70.0 | 70.0 | 71.42
70 | Considering only a single positive | 72.58 | 78.50 | 75.78 | 79.29 | 77.92
70 | Considering a threshold of 0.4 | 70.0 | 73.17 | 71.79 | 73.68 | 72.72
70 | Considering a threshold of 0.6 | 65.0 | 70.73 | 68.42 | 72.22 | 69.56
80 | Considering only a single positive | 70.56 | 77.57 | 74.46 | 78.82 | 76.67
80 | Considering a threshold of 0.4 | 75.0 | 68.29 | 69.76 | 65.21 | 72.22
80 | Considering a threshold of 0.6 | 70.0 | 63.41 | 65.11 | 60.89 | 66.66

Table 4.
Results of TraWiC's code inclusion detection with different edit distance thresholds on Mistral

Edit Distance Threshold | Repository Inclusion Criterion | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%)
20 | Considering only a single positive | 82.55 | 86.45 | 85.37 | 88.38 | 84.89
20 | Considering a threshold of 0.4 | 85.0 | 85.36 | 85.0 | 85.0 | 85.71
20 | Considering a threshold of 0.6 | 84.21 | 82.5 | 82.05 | 80.0 | 85.0
40 | Considering only a single positive | 80.62 | 85.52 | 84.21 | 88.13 | 83.49
40 | Considering a threshold of 0.4 | 85.0 | 85.36 | 85.0 | 85.0 | 85.71
40 | Considering a threshold of 0.6 | 84.21 | 82.05 | 82.05 | 80.0 | 84.21
60 | Considering only a single positive | 77.51 | 83.48 | 81.79 | 86.58 | 81.16
60 | Considering a threshold of 0.4 | 80.0 | 80.48 | 80.0 | 80.0 | 80.95
60 | Considering a threshold of 0.6 | 80.0 | 78.04 | 78.04 | 76.19 | 80.0
70 | Considering only a single positive | 77.51 | 82.56 | 80.97 | 84.74 | 80.85
70 | Considering a threshold of 0.4 | 75.0 | 80.48 | 78.94 | 83.33 | 78.26
70 | Considering a threshold of 0.6 | 75.0 | 75.60 | 75.0 | 75.0 | 76.19
80 | Considering only a single positive | 74.74 | 83.11 | 81.31 | 86.46 | 80.64
80 | Considering a threshold of 0.4 | 65.0 | 70.73 | 68.42 | 72.22 | 69.56
80 | Considering a threshold of 0.6 | 65.0 | 63.41 | 63.41 | 61.90 | 65.0

Given the differences between TraWiC's specificity and sensitivity at file-level granularity in comparison to repository-level granularity, we can infer two scenarios for using TraWiC. First, if a repository contains a large number of scripts and inference on the given model is resource-intensive, using TraWiC on less than half of the repository's scripts can effectively indicate dataset inclusion. Second, in scenarios where reducing false negatives is more important (e.g., identifying every instance of dataset inclusion), applying TraWiC at file-level granularity can be an effective method to check for dataset inclusion.

5.1.2 RQ1b: How does TraWiC compare against traditional CCD approaches for dataset inclusion detection?
Table 5 presents the results obtained when using NiCad and JPlag for clone detection. We used NiCad and JPlag to detect code clones between code snippets from the scripts in the LLM's training dataset and code snippets generated by the LLM, as outlined in Section 4.6. Given that our task is a binary classification, NiCad is unable to achieve the accuracy of a random classifier (50%), and JPlag achieves an accuracy of 55%, which is barely higher than a random classifier, indicating that code clone detection approaches are not suitable for detecting dataset inclusion. This observation is consistent with the results reported in [22]. The highest accuracy, F-score, and sensitivity are achieved by NiCad for clone detection when sampling 9% of the LLM's dataset. The highest accuracy, F-score, and sensitivity for JPlag are achieved when sampling 10% of the dataset, while the highest precision and specificity are achieved when sampling 5% of the dataset. As the results show, our approach outperforms both NiCad and JPlag by a large margin in detecting whether a project was included in the training dataset of SantaCoder, regardless of the edit distance threshold. Moreover, using clone detection techniques for dataset inclusion detection is expensive. In fact, before using any clone detection technique, we need to generate code snippets using the LLM and, once a sufficient number of scripts are generated, compare them in a pair-wise manner with the original scripts that were used in the LLM's training dataset. TraWiC is much less computationally expensive compared to generating multiple snippets of code from different repositories and using NiCad/JPlag to detect similarities between them. For detecting dataset inclusion, TraWiC's main performance
constraint is inference on the LLM to generate the necessary tokens. In contrast, both NiCad and JPlag need to compare an average of 3,000 scripts per project in a pair-wise manner for inclusion detection when considering a 10% sample of the filtered dataset, on top of generating the necessary code from the LLM. As presented in Table 2, the best results are achieved when we focus on identifying the inclusion of single scripts with an edit distance threshold of 20. Our analysis results presented in Table 5 show that increasing the number of sampled repositories has only a marginal improvement on NiCad's and JPlag's performance.

Table 5. NiCad/JPlag's performance in detecting clones across sampled repositories

Tool | Percentage of Dataset Sampled | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%)
NiCad | Sampling 5% (1726 projects / ≈34,600 scripts) | 84.81 | 43.13 | 53.60 | 38.18 | 63.63
NiCad | Sampling 9% (3108 projects / ≈62,280 scripts) | 86.74 | 47.64 | 56.47 | 41.86 | 72.5
NiCad | Sampling 10% (3452 projects / ≈69,200 scripts) | 80.24 | 41.22 | 49.24 | 35.51 | 64.44
JPlag | Sampling 5% (1726 projects / ≈34,600 scripts) | 83.53 | 54.49 | 58.92 | 45.51 | 77.26
JPlag | Sampling 9% (3108 projects / ≈62,280 scripts) | 82.63 | 53.83 | 58.32 | 45.06 | 76.02
JPlag | Sampling 10% (3452 projects / ≈69,200 scripts) | 81.00 | 55.00 | 59.94 | 47.57 | 73.00

Findings 2: The high cost of pair-wise clone detection, alongside LLMs' capability to generate code with different syntax, makes code clone detection tools incapable of detecting dataset inclusion. This is indicated by their poor performance, observed even when increasing the amount of code to compare.

5.1.3 RQ1c: What is the effect of using different classification methods on TraWiC's performance? Table 6 shows the F-scores of different classification methods for SantaCoder. As the results show, random forests outperform both XGB and SVM across all edit distance thresholds and repository inclusion criteria. We include a more detailed analysis of both XGB and SVM across different performance metrics in Section 11.

5.2 RQ2 [Sensitivity Analysis]

In this RQ, we assess TraWiC's robustness to data obfuscation. Afterward, we analyze the correlation between the extracted features in our dataset and discuss the feature importance of the trained classifiers.
Table 6. Performance comparison of different classifiers on the generated dataset

Edit Distance Threshold | Repository Inclusion Criterion | F-score XGB (%) | F-score SVM (%) | F-score RF (%)
20 | Considering only a single positive | 86.04 | 85.53 | 88.57
20 | Considering a threshold of 0.4 | 83.53 | 82.30 | 84.82
20 | Considering a threshold of 0.6 | 80.53 | 77.95 | 80.31
40 | Considering only a single positive | 84.01 | 83.20 | 83.39
40 | Considering a threshold of 0.4 | 82.16 | 81.89 | 83.80
40 | Considering a threshold of 0.6 | 82.48 | 80.60 | 84.10
60 | Considering only a single positive | 82.49 | 82.43 | 85.46
60 | Considering a threshold of 0.4 | 82.18 | 82.20 | 82.95
60 | Considering a threshold of 0.6 | 82.02 | 82.37 | 82.89
70 | Considering only a single positive | 84.32 | 84.49 | 85.85
70 | Considering a threshold of 0.4 | 83.56 | 83.78 | 84.17
70 | Considering a threshold of 0.6 | 81.85 | 82.82 | 84.39
80 | Considering only a single positive | 82.78 | 82.22 | 85.55
80 | Considering a threshold of 0.4 | 82.39 | 81.91 | 82.84
80 | Considering a threshold of 0.6 | 81.23 | 81.24 | 82.41

5.2.1 RQ2a: How robust is TraWiC against data obfuscation techniques? In this section, we analyze TraWiC's robustness against models trained on obfuscated datasets. Data obfuscation can be used as a defense strategy against MIAs [34]. For example, knowledge distillation [67, 86] or differential privacy [57, 76] can be used to train models that do not generate rare elements observed during their training. There also exists a myriad of code obfuscation techniques proposed for software copyright and vulnerability protection, which make reverse-engineering the software more difficult.
For example, Mixed Boolean Arithmetic methods [23] obfuscate the code by replacing arithmetic operations with overly complex statements. Opaque Predicate approaches [77] obfuscate the code by setting only one direction of the code to be executed and inserting dummy code in the part that never gets executed, therefore making the code more complex without introducing any change to the code's execution. Finally, other approaches alter identifiers, variables, and string literals or remove code indentation to change the code's appearance [65]. Approaches that use a combination of these methods [39] have been proposed for thorough code obfuscation as well.

As such, in this section, we examine the robustness of our approach when confronted with models trained on obfuscated datasets. Such scenarios are also similar to detecting Type-2/Type-3 code clones. It should be noted that for our sensitivity analysis, we assume that the data for training the given model has already been obfuscated before training, therefore making the model more robust against producing textual elements similar to the original unobfuscated code. To simulate these scenarios, we introduce noise during the hit-count process for syntactic identifiers. This noise emulates situations where the LLM produces a syntactic identifier that deviates from the original, resulting in a miss. Moreover, we test the robustness of our approach in scenarios where noise is introduced during the semantic identifiers' hit count. Finally, we simulate scenarios where both syntactic and semantic identifiers have been changed in the training
dataset by data obfuscation techniques. Therefore, we will be able to test TraWiC's robustness in recognizing dataset inclusion despite deliberate obfuscation of the training dataset.

Tables 7, 8, and 9 show the results of our sensitivity analysis, conducted by varying the levels of noise (with noise ratios of 0.1, 0.5, and 0.9) injected during the syntactic, semantic, and combined hit-count processes as mentioned above, for SantaCoder, Llama-2, and Mistral, respectively. Here, "Target" refers to the level that the noise is being applied to; a "Noise Ratio" of 0.1 means that during the hit-count process, a hit is ignored with a probability of 10%. Similar to the analyses done in RQ1, we consider different edit distance thresholds (20, 60, and 80) for detecting semantic hits. It should be noted that we conduct our analysis on single scripts instead of projects. We also conduct analyses with different "Repository Inclusion Criteria" (i.e., considering a project to be included in the model's training dataset if a certain amount of its corresponding scripts are detected as included), similar to the analysis done in Tables 2, 3, and 4. We include these results in Section 11.

As expected, increasing the noise ratio results in a decline in precision, accuracy, F-score, and sensitivity across all edit distance thresholds, for all the models under study. This is aligned with the expectation that the introduction of noise impairs the classifier's ability to correctly identify true positives. However, the random forest's inherent robustness to overfitting and its ensemble nature appear to mitigate this effect to some extent, particularly at moderate levels of noise, as also reported in [30]. Finally, we observe that TraWiC's performance degrades close to that of a random classifier when there is a lot of noise in the dataset for SantaCoder, and even further for Llama-2 and Mistral. This indicates that our approach is not robust against thorough data obfuscation techniques where nearly all semantic and syntactic identifiers are changed. We will discuss possible approaches for improving our approach in Section 9.
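The noise-injection procedure described above (ignoring a recorded hit with probability equal to the noise ratio) can be sketched as follows. The function name and the fixed seed are illustrative only, not TraWiC's actual implementation.

```python
import random


def apply_hit_noise(hits, noise_ratio, seed=42):
    """Simulate obfuscation: drop each recorded hit with probability
    `noise_ratio` (e.g., 0.1 turns a true hit into a miss 10% of the time)."""
    rng = random.Random(seed)
    return [hit and rng.random() >= noise_ratio for hit in hits]


# With a noise ratio of 0.1, roughly 10% of 1000 true hits become misses.
noisy = apply_hit_noise([True] * 1000, noise_ratio=0.1)
```

Running the same classifier on these degraded hit counts emulates a model that was trained on obfuscated code and therefore fails to reproduce some identifiers verbatim.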
Cross-analyzing the results of Tables 7, 8, and 9 with Tables 2, 3, and 4, which contain the classifier's performance on clean data, provides the following observations:

• Semantic identifiers are important in dataset inclusion detection: We observe sharp degradation in the classifier's performance at higher edit distance thresholds. As shown in Figures 5 and 3, which display the feature importance of our trained classifiers, at lower edit distance thresholds the classifier gives more weight to semantic identifiers compared to syntactic identifiers for all of the 3 models under study. Therefore, at lower edit distance thresholds, even though we observe performance degradation, the classifier retains a high performance in detecting true positives even as the amount of noise is increased. As mentioned in Section 2.2, Type-2 code clones are similar to each other in syntax but have different variable/function/class names. Our results on simulating data obfuscation show that TraWiC can successfully detect dataset inclusion despite changes to the syntactic identifiers.

• Considering a repository-level granularity for dataset inclusion detection can help against data obfuscation: Comparing Tables 2, 3, and 4 with Tables 15, 18, and 21, we observe degradation in detecting dataset inclusion at file-level granularity at higher edit distance thresholds. This decrease in performance is indicated by the decrease in sensitivity (correctly classified true positives) and the increase in specificity (correctly classified true negatives). As mentioned above, since a large number of the scripts in our constructed dataset were included in the training dataset of the model under study, we cannot attribute the rise in specificity to better performance by TraWiC for SantaCoder, and as we can observe from Tables 8 and 9, for both Mistral and Llama the performance of the classifier degrades as the amount of noise is increased. However, even though we observe degradation in performance at repository-level granularity as well, analyzing all the scripts in a project can help us detect dataset inclusion even in the presence of high levels of noise for all of the 3 models under study. Depending on the approach used for code obfuscation,
obfuscating the entire project is either too costly or adds unnecessary performance overhead to the project's operation [16, 65, 87], and many of the proposed approaches focus on obfuscating blocks of code inside the script as opposed to its entirety [39, 84]. As such, even though the identifiers in one script in the project may be changed, other scripts in the same project may remain unchanged or undergo lower amounts of obfuscation, therefore aiding in dataset inclusion detection.

Findings 3: Data obfuscation techniques can be used to prevent MIAs from detecting dataset inclusion. Our results show that TraWiC is capable of detecting dataset inclusion when a moderate level of noise is applied to the training data. Moreover, we show that by analyzing all the scripts in a project, we can detect dataset inclusion with an F-score of up to 83.15%, even when half of the semantic and syntactic identifiers in code are obfuscated.

Table 7. Results of TraWiC's dataset inclusion detection with noise injection across different levels - SantaCoder

| Target    | Edit Distance Threshold | Noise Ratio | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%) |
|-----------|----|-----|-------|-------|-------|-------|-------|
| Syntactic | 20 | 0.1 | 88    | 83.09 | 88.13 | 88.27 | 70.3  |
| Syntactic | 20 | 0.5 | 86.41 | 79.26 | 85.23 | 84.08 | 67.34 |
| Syntactic | 20 | 0.9 | 87.8  | 69.04 | 75.11 | 65.62 | 77.49 |
| Syntactic | 60 | 0.1 | 79.45 | 78.51 | 86.18 | 94.17 | 39.85 |
| Syntactic | 60 | 0.5 | 79.30 | 74.89 | 83.24 | 87.59 | 43.54 |
| Syntactic | 60 | 0.9 | 81.82 | 59.73 | 66.37 | 55.83 | 69.37 |
| Syntactic | 80 | 0.1 | 78.92 | 78.03 | 85.94 | 94.32 | 37.82 |
| Syntactic | 80 | 0.5 | 78.43 | 74.95 | 83.55 | 89.39 | 39.3  |
| Syntactic | 80 | 0.9 | 79.11 | 65.69 | 74.38 | 70.18 | 54.24 |
| Semantic  | 20 | 0.1 | 87.19 | 82.29 | 87.62 | 88.04 | 68.08 |
| Semantic  | 20 | 0.5 | 83.83 | 74.89 | 81.97 | 80.19 | 61.81 |
| Semantic  | 20 | 0.9 | 81.7  | 51.86 | 55.22 | 41.7  | 76.94 |
| Semantic  | 60 | 0.1 | 79.63 | 78.67 | 86.26 | 94.1  | 40.59 |
| Semantic  | 60 | 0.5 | 78.83 | 76.33 | 84.59 | 91.26 | 39.48 |
| Semantic  | 60 | 0.9 | 78.79 | 74.31 | 82.89 | 87.44 | 41.88 |
| Semantic  | 80 | 0.1 | 79.31 | 78.88 | 86.51 | 95.14 | 38.75 |
| Semantic  | 80 | 0.5 | 78.03 | 76.76 | 85.16 | 93.72 | 34.87 |
| Semantic  | 80 | 0.9 | 78.13 | 75.21 | 83.86 | 90.51 | 37.45 |
| Combined  | 20 | 0.1 | 87.16 | 82.07 | 87.45 | 87.74 | 68.08 |
| Combined  | 20 | 0.5 | 83.37 | 73.03 | 80.37 | 77.58 | 61.81 |
| Combined  | 20 | 0.9 | 85.25 | 50.96 | 52.18 | 37.59 | 83.95 |
| Combined  | 60 | 0.1 | 79.1  | 77.87 | 85.76 | 93.65 | 38.93 |
| Combined  | 60 | 0.5 | 78.93 | 73.51 | 82.15 | 85.65 | 43.54 |
| Combined  | 60 | 0.9 | 81.97 | 54.1  | 58.53 | 45.52 | 75.28 |
| Combined  | 80 | 0.1 | 78.34 | 77.39 | 85.59 | 94.32 | 35.61 |
| Combined  | 80 | 0.5 | 78.12 | 74.36 | 83.15 | 88.86 | 38.56 |
| Combined  | 80 | 0.9 | 82.1  | 57.45 | 63.24 | 51.42 | 72.32 |

5.2.2 RQ2b: What is the importance of each feature in detecting dataset inclusion? We first analyze the constructed dataset for training the classification models using Spearman's rank correlation coefficient [80] to assess the relationships between the constructed features of the dataset. Then, we analyze the classification models' feature importance using the gini importance criterion [14].

Table 8.
ResultsofTraWiC’sdatasetinclusiondetectionwithnoiseinjectionacrossdifferentlevels-Llama-2 Target EditDistanceThreshold NoiseRatio Precision(%) Accuracy(%) F-score(%) Sensitivity(%) Specificity(%) 0.1 76.68 82.75 81.36 84.23 81.54 20 0.5 68.60 71.80 69.96 71.37 72.16 0.9 57.75 61.41 58.89 60.08 62.54 0.1 72.29 76.26 73.92 75.63 76.77 Syntactic 60 0.5 62.50 65.92 63.01 63.52 67.93 0.9 56.22 59.25 56.22 56.22 61.99 0.1 69.53 74.02 70.36 74.66 7357 80 0.5 60.48 66.54 62.63 64.96 67.76 0.9 51.61 55.14 51.61 51.61 58.19 0.1 70.93 74.40 72.62 74.39 74.40 20 0.5 63.57 67.16 64.95 66.40 67.81 0.9 53.10 56.40 53.83 54.58 57.99 0.1 67.07 72.34 69.29 71.67 72.85 Semantic 60 0.5 57.43 62.43 58.73 60.08 64.31 0.9 55.42 57.57 54.87 54.33 60.50 0.1 66.13 73.46 69.79 73.87 73.16 80 0.5 56.45 62.43 58.21 60.09 64.24 0.9 49.60 54.39 50.20 50.83 57.43 0.1 68.60 71.43 69.69 70.80 71.97 20 0.5 59.30 63.27 60.71 62.20 64.16 0.9 53.10 55.66 53.41 53.73 57.39 0.1 66.27 69.35 66.80 67.35 71.03 Combined 60 0.5 59.04 64.49 60.74 62.55 66.00 0.9 48.59 50.28 47.64 46.72 53.62 0.1 62.50 68.60 64.85 67.39 69.51 80 0.5 55.65 60.19 56.44 57.26 62.59 0.9 47.18 54.02 48.75 50.43 56.77 Figure4displaysthecorrelationanalysisoftheextractedfeatureswithtrained_onbeingthedependentvariable indicatingascript’sinclusionintheLLM’strainingdatasetforeachofthemodelsunderstudy.Weobserveweaktono correlationbetweendifferentfeatureswhichindicateslowdependencyamongfeaturesandthateachfeatureprovides uniqueinformationfortheclassificationtask.AsdisplayedinTables2,3,and4,weexperimentwithmultipleeditdistance thresholdsforcountingsemantichits.Eventhoughtheperformancesofmodelsfordifferenteditdistancethresholds aresimilar,therearevariationsintheclassifiers’featureimportance.Figure3displaysdifferentfeatureimportance distributionsfordetectingdatasetinclusionforSantaCoderforeditdistancesof20,40,60,and80,respectively.As displayedinthisfigure,thelowertheeditdistancethresholdis,themoretheimportanceofdocstringandcomment 
features for deciding code inclusion increases. Comments and docstrings are specifically designed to explain the code's functionality, as they are written to explain pieces of code in natural language, which makes them inherently unique. Therefore, with lower edit distances, which lower the matching criterion between the original tokens and what the model generates, the number of hits for semantic elements increases, and as a result, the importance of these features increases.

Figure 3c displays the feature importance of the best-performing classifier (i.e., the random forest with an edit distance of 60 and a repository inclusion criterion of 0.4) for SantaCoder. The number of variable hits is the most significant feature, followed by the number of comment hits and function hits. The number of docstring hits has the least importance, which suggests that the information within the docstrings of the code has a minimal impact on the classifier's decisions. It should be noted that compared to other semantic identifiers, docstrings are longer (as they explain the functionality of a class alongside its inputs and outputs) and less prevalent (one per function/class). Therefore, there exists a smaller number of hits for docstrings compared to the other semantic identifiers.

Table 9. Results of TraWiC's dataset inclusion detection with noise injection across different levels - Mistral

| Target    | Edit Distance Threshold | Noise Ratio | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%) |
|-----------|----|-----|-------|-------|-------|-------|-------|
| Syntactic | 20 | 0.1 | 82.56 | 86.48 | 85.37 | 88.38 | 84.95 |
| Syntactic | 20 | 0.5 | 71.32 | 73.15 | 71.73 | 72.16 | 74.04 |
| Syntactic | 20 | 0.9 | 66.15 | 66.85 | 65.24 | 65.37 | 68.20 |
| Syntactic | 60 | 0.1 | 77.52 | 83.49 | 81.80 | 86.58 | 81.17 |
| Syntactic | 60 | 0.5 | 67.83 | 71.61 | 69.58 | 71.43 | 71.77 |
| Syntactic | 60 | 0.9 | 55.43 | 61.22 | 57.78 | 60.34 | 61.92 |
| Syntactic | 80 | 0.1 | 73.26 | 78.66 | 76.67 | 80.43 | 77.30 |
| Syntactic | 80 | 0.5 | 64.34 | 71.24 | 68.17 | 72.49 | 70.32 |
| Syntactic | 80 | 0.9 | 55.42 | 60.75 | 57.02 | 58.72 | 62.37 |
| Semantic  | 20 | 0.1 | 72.48 | 76.44 | 74.65 | 76.95 | 76.01 |
| Semantic  | 20 | 0.5 | 67.05 | 69.02 | 67.45 | 67.84 | 70.07 |
| Semantic  | 20 | 0.9 | 56.98 | 60.11 | 57.76 | 58.57 | 61.46 |
| Semantic  | 60 | 0.1 | 68.99 | 74.58 | 72.21 | 75.74 | 73.68 |
| Semantic  | 60 | 0.5 | 61.63 | 66.42 | 63.73 | 65.98 | 66.78 |
| Semantic  | 60 | 0.9 | 56.59 | 60.30 | 57.71 | 58.87 | 61.51 |
| Semantic  | 80 | 0.1 | 67.44 | 74.03 | 71.31 | 75.65 | 72.82 |
| Semantic  | 80 | 0.5 | 59.69 | 67.16 | 63.51 | 67.84 | 66.67 |
| Semantic  | 80 | 0.9 | 53.49 | 59.55 | 55.87 | 58.47 | 60.40 |
| Combined  | 20 | 0.1 | 72.48 | 75.88 | 74.21 | 76.02 | 75.77 |
| Combined  | 20 | 0.5 | 65.89 | 69.02 | 67.06 | 68.27 | 69.66 |
| Combined  | 20 | 0.9 | 58.91 | 61.78 | 59.61 | 69.32 | 63.07 |
| Combined  | 60 | 0.1 | 69.77 | 72.36 | 70.73 | 71.71 | 72.92 |
| Combined  | 60 | 0.5 | 60.85 | 64.38 | 62.06 | 63.31 | 65.29 |
| Combined  | 60 | 0.9 | 53.88 | 54.92 | 53.36 | 52.85 | 56.88 |
| Combined  | 80 | 0.1 | 67.05 | 72.73 | 70.18 | 73.62 | 72.04 |
| Combined  | 80 | 0.5 | 61.24 | 64.75 | 62.45 | 63.71 | 65.64 |
| Combined  | 80 | 0.9 | 52.71 | 57.88 | 54.51 | 56.43 | 59.06 |
We observe different feature importance and correlations of features for the fine-tuned models of Mistral and Llama compared to SantaCoder, as displayed in Figure 5. Figure 5a shows that for an edit distance of 20 for Llama-2, the number of string hits is the most significant feature, followed by variable hits and function hits. In Figure 5c, which presents the feature importance for an edit distance of 60 for Llama-2, the number of variable hits becomes the most significant feature, followed by comment hits and string hits. It should be noted that the feature importance of comment, string, and function hits is relatively the same at this edit distance. However, at both edit distances, the number of class hits has the least importance (excluding the docstring hits for an edit distance of 60). This is due to the relatively fewer occurrences of classes in scripts within the analyzed dataset, which is similar to the analysis carried out for SantaCoder. Figures 5b and 5d display the feature importance of the classifier at edit distances of 20 and 60 for Mistral, respectively. At an edit distance of 20, the number of variable hits is the most significant feature, followed by function hits and comment hits. The number of class hits is the least important, suggesting that the information within class definitions has minimal impact on the classifier's decisions, similar to Llama-2 and SantaCoder. At an edit distance of 60, the number of variable hits remains the most significant feature, followed by string hits and comment hits. The consistency across different semantic thresholds for all three models shows that these features can be used as strong indicators for dataset inclusion in LLMs trained on code. An important observation across all three models is the relatively low importance of docstring hits in the classifier's ability to detect dataset inclusion. We observe that as we increase the edit distance threshold, the number of docstring hits decreases, indicating that this low contribution results from the edit distance scores associated with docstrings. We discuss the reasons behind this in RQ3.

Fig. 3. Feature importance of the final dataset with different edit distance thresholds - SantaCoder: (a) edit distance of 20; (b) edit distance of 40; (c) edit distance of 60; (d) edit distance of 80.

Fig. 4. Correlation of different features in the generated dataset: (a) SantaCoder; (b) Llama-2; (c) Mistral.

Fig. 5. Feature importance of Llama-2 and Mistral with different edit distance thresholds: (a) edit distance of 20 - Llama; (b) edit distance of 20 - Mistral; (c) edit distance of 60 - Llama; (d) edit distance of 60 - Mistral.

5.3 RQ3 [Error Analysis]
In this RQ, we investigate how the models under study make mistakes in predicting the masked element, and how these mistakes can be used for the task of dataset inclusion detection. Here, we define a "mistake" as the model's inability to generate the exact same token as what was present in the script. There is a difference between the mistakes made by SantaCoder and those made by our fine-tuned models; as such, we explain each as follows.
product_to_encode = pickle.load(open('./data/product_1.pickle', 'rb'))
mock = OpenDataCubeRecordsProviderMock()
encoded_product = mock._encode_dataset_type_properties(product_to_encode)
assert 'id' in encoded_product.keys()
id = encoded_product.get('id')
assert id is not None
assert id == product_to_encode.name

def test__encode_dataset_type_properties():
    product_to_encode
Listing 7. A case of the model's predictions for scripts not in its training data

Table 10. Analysis of the model's outputs by token type for scripts that are included in its training dataset

| Filtration Threshold | Token Type     | Output Contains Token (Script in Dataset) (%) | Output Contains Token (Script not in Dataset) (%) |
|-----|----------------|-------|-------|
| 0.5 | Variable Names | 60.71 | 61    |
| 0.5 | Function Names | 14.08 | 11.77 |
| 0.5 | Class Names    | 11.18 | 5.02  |
| 0.3 | Variable Names | 60.89 | 61.38 |
| 0.3 | Function Names | 15.37 | 12.9  |
| 0.3 | Class Names    | 11.3  | 5.08  |
| 0.1 | Variable Names | 60.89 | 60    |
| 0.1 | Function Names | 15.37 | 11.99 |
| 0.1 | Class Names    | 11.3  | 5     |

5.3.1 SantaCoder. In Python, variable/function/class names cannot contain any whitespace and are considered a single token; therefore, we focus on these elements, as the model's task is predicting a single token. We observe that in most cases when the model makes a mistake, it generates the correct variable/function/class name but also generates some extra tokens. Listing 7 shows an example of such a case for the variable name product_to_encode. When the model makes a mistake on syntactic identifiers, it generates an average of 23 tokens, regardless of whether the underlying script was in its training dataset or not. Therefore, to understand these mistakes better, we first remove the tokens that are either a part of the programming language's syntax or too frequent. To do so, we count the repetition frequency of each token and normalize it by dividing by the total number of observed tokens. Afterward, we remove the tokens that are repeated too many times across the scripts in the dataset from the generated mistakes.

The results of this analysis are presented in Table 10, with the "Filtration Threshold" column outlining the threshold for keeping a token. A filtration threshold of 0.5 means that only tokens appearing with a frequency less than or equal to 50% of all the observed tokens are retained, while tokens with a higher frequency are filtered out during the analysis.
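The frequency-based filtration described above can be sketched as follows. `filter_frequent_tokens` is a hypothetical name; the sketch assumes, as the paper describes, that frequencies are normalized by the total number of observed tokens.

```python
from collections import Counter


def filter_frequent_tokens(generated_tokens, threshold=0.5):
    """Keep only tokens whose normalized frequency is at or below `threshold`,
    discarding language keywords and other highly repetitive tokens."""
    counts = Counter(generated_tokens)
    total = sum(counts.values())
    return {tok for tok, c in counts.items() if c / total <= threshold}
```

For example, a keyword like `def` that dominates the generated output is dropped at a threshold of 0.5, while rarer, script-specific identifiers such as `product_to_encode` survive the filtration.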
A notable variance is observed in the class names generated by the model, with an approximate difference of around 6%, depending on whether a script was part of its training dataset or not. It is important to note that classes are less recurrent than functions and variables within code. This difference (regardless of filtration criterion) shows that MIAs, particularly when directed at rare textual elements, can serve as a means to discern the data used in a model's training.

Findings 4: Even in cases where the model's output does not adhere to our exact matching criteria, in roughly 60% of cases its outputs contain elements from the code that it was trained on. These results complement Finding 1 and show that MIAs directed at parts of code that are not grammatical (the syntax of the programming language itself) can help identify dataset inclusion.

5.3.2 Llama-2 and Mistral. As described in Section 5.2.2, for both fine-tuned versions of Llama-2 and Mistral, we observe that the number of docstring hits decreases as we increase the edit distance threshold. This indicates that there is already a low similarity between the models' generated content for these elements and their original contents.
To understand this better, we investigate the outputs of our fine-tuned models against the original targets. When it comes to generating long sequences such as docstrings, the models often fail to produce coherent outputs. They either generate sequences that are close to the target but contain additional tokens, reducing the similarity score, or they generate unrelated tokens. Listing 8 shows an example of cases where the model generates special tokens. As described in Section 4.3, Llama-2 and Mistral do not natively support the FIM objective, requiring us to fine-tune the models for this purpose. As shown in Listing 5, <fim_prefix> is one of the special tokens added to the models' tokenizers for this task. During the early stages of fine-tuning, these special tokens appear frequently in the models' outputs. However, as training progresses, their occurrence decreases, suggesting that further fine-tuning on a larger dataset would minimize these instances. Nevertheless, as shown in Listing 8, the presence of such extra tokens can result in higher edit distance scores, leading to more mistakes.

Target: 'SEARCH_ITERATIONS'
Model_output: '<fim-prefix> SEARCH_ITERATIONS'

Listing 8. A case of fine-tuned models generating special tokens

On the other hand, in some cases, developers have added comments to the code in order to address or reference some of the maintenance activities, such as URLs linking to related issues in the repository, as displayed in Listing 9. In such cases, both models generated somewhat similar URLs but failed to generate the rest of the docstrings as intended. It is important to note that the URL generated in Listing 9 is incorrect and does not exist. However, "Surya" is one of the repositories included in our fine-tuning dataset7, and we observe that the model generates a URL which, although invalid, includes the repository's name, indicating that the model has encountered this repository during training. Our findings line up with the results reported by Carlini et al. [17], who show that MIAs on URLs, phone numbers, and names can aid in identifying memorization in LLMs.
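A minimal sketch of the FIM prompt format and of stripping stray special tokens before comparison. The token spellings follow the <fim_prefix> convention mentioned above; the exact special-token set used for fine-tuning, and both function names, are assumptions for illustration.

```python
# Assumed special-token spellings; the paper's fine-tuning setup may differ.
SPECIAL_TOKENS = ("<fim_prefix>", "<fim_suffix>", "<fim_middle>")


def make_fim_prompt(prefix: str, suffix: str) -> str:
    """Format a fill-in-the-middle prompt: the model is asked to generate
    the span that belongs between `prefix` and `suffix`."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"


def strip_special_tokens(output: str) -> str:
    """Remove stray special tokens before computing edit distance, so an
    output like '<fim_prefix> SEARCH_ITERATIONS' still matches its target."""
    for tok in SPECIAL_TOKENS:
        output = output.replace(tok, "")
    return output.strip()
```

Pre-cleaning outputs this way is one inexpensive mitigation for the Listing 8 failure mode, where an otherwise correct prediction is penalized by a leading special token.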
Target: 'You should install the dependencies'
Model_output:
    print(\"You probably should install the dependencies.\")
    print(\"https://github.com/surya-ai/surya/blob/master/docs/installation.md\")

Listing 9. A case of fine-tuned models generating an invalid URL containing a memorized repository name

7 https://github.com/VikParuchuri/surya
Finally, similar to SantaCoder, we observe that in some cases, the models generate the target we are looking for, as shown in Listing 10. However, due to the presence of additional tokens, the edit distance score is high. Consequently, by increasing the edit distance threshold, such cases are filtered out, resulting in few or no hits for docstrings.

Target: '\"Action {action} is illegal\"'
Model_output:
    def step_from_state(self, action):
        if self.state[action] != 0:
            raise ValueError(f\"Action {action} is illegal\")
        self.state[action] = self.turn
        self.actions_stack

Listing 10. A case of fine-tuned models generating the target alongside additional tokens

The analysis of the mistakes made by TraWiC in predicting the inclusion of such occurrences highlights some limitations of our proposed approach. Therefore, our future work aims to design a more robust similarity objective for semantic elements. This would enable the detection of cases such as these, ultimately improving the accuracy of dataset inclusion detection for models trained on code.

6 RELATED WORKS
In this section, we review the relevant literature. Specifically, we present related works that focus on dataset inclusion detection for code models.

The most relevant work to our approach is that of Choi et al. [21], which proposes an approach for verifying whether a particular model was trained on a particular dataset. Their approach consists of investigating multiple checkpoints produced during a model's training procedure. They show that if a model was trained on a dataset as reported by the model's creators, then the model's weights would "overfit" on the training dataset in the next checkpoint compared to a dataset that was not included in the training data. Even though their approach shows promising results, it requires access to the model's architecture, weights, hyperparameters, and training data. In comparison, TraWiC only requires query access to the given model, which is the case for most of the code LLMs offered by enterprises.
Zhang et al. [85] proposed a framework for dataset inclusion detection on Code Pre-trained Language Models (CPLMs). In this work, the authors consider multiple levels of access to a CPLM, namely:

(1) Complete access to the model and a large fraction of its training dataset (i.e., white-box access).
(2) Complete access to the model's architecture and a small fraction of its training dataset (i.e., gray-box access).
(3) No access to either the model or the training dataset (i.e., black-box access).

For each level of access, they provide different detection mechanisms, and their black-box level of access is the setting comparable with TraWiC. For dataset inclusion detection given only black-box access to the model, the authors provide two methods, namely, unimodal and bimodal calibration. In unimodal calibration, the code is perturbed and the differences in the model's output representations for different versions (e.g., uppercase vs lowercase) of the code snippet are observed. A calibration model, based on these differences, is used to infer the membership status of the code. In bimodal calibration, the calibration model's outputs are compared to the CPLM's outputs for the same input. This comparison is focused on identifying disparities between the two outputs, which are indicative of whether a given code snippet was used in training the CPLM or not. This method relies on the unique interaction between the natural language and programming language aspects of CPLMs to make inferences about membership. In comparison to TraWiC, the approach proposed by Zhang et al. is focused on CPLMs that produce embeddings for downstream tasks and requires expensive training of multiple large ML components (i.e., calibration models), while TraWiC is focused on LLMs that generate code. Furthermore, TraWiC is not limited to pre-trained models and does not contain any resource-intensive ML components outside of inference on the LLM itself.
Finally, Yang et al. [78] examined the risk of membership leakage in LLMs trained on code. In their approach, they assume that a fraction of the data used to train an LLM on code is available. They train a surrogate model on both said fraction and another dataset that was not included in the original model's training. The surrogate model is supposed to mimic the behavior of the original model. Afterward, they train an MIA classifier that takes the outputs of the surrogate model as input to detect dataset inclusion. They study the varying degrees of membership leakage risk depending on factors such as the attacker's knowledge of the victim model, the model's training epochs, and the fraction of the training data that is known to the attacker. Their approach assumes that at least a part of a model's training data is available, and it involves resource-intensive training of a surrogate model. In comparison, the main resource-intensive component of TraWiC is inference on the LLM, and it does not require re-training a surrogate model or access to the original training dataset.

7 LIMITATIONS
In this section, we describe and discuss the limitations of our approach in detecting dataset inclusion and possible approaches for overcoming them.

As discussed in Section 5.1, we observe that for larger models, namely Mistral and Llama-2, TraWiC's performance degrades when the edit distance threshold for considering semantic similarity between the models' outputs and the
target tokens is increased. According to our sensitivity analysis in Section 5.2, TraWiC's performance degrades to the level of a random classifier when the dataset is heavily obfuscated (obfuscation of more than half of the syntactic and semantic variables). By analyzing the causes behind these failures in Section 5.3, we conclude that the main issue lies in TraWiC's ability to detect semantic similarity. This is not surprising, as discussed in Section 5.2.2, since TraWiC focuses on the number of semantic element hits to detect dataset inclusion. Listings 8 and 10 show that most failures are due to high edit distances between the targets and the models' predictions, even when the element of interest is present in the predictions. Therefore, we believe that by designing more robust methods for similarity detection on semantic elements, we could detect these instances, reduce false negatives, and improve TraWiC's performance in such cases.

Another limitation of our approach is that it requires some ground truth to train the initial classifier for dataset inclusion detection. As discussed in Sections 1 and 4.1, developers of LLMs rarely provide complete information about the training datasets of their models, opting instead for general descriptions [38, 71]. However, based on the specific model under study, we can infer some priors about the dataset used to train the model from the developers' descriptions and reports, or use other MIA approaches in order to construct the initial dataset for training the classifier.

Finally, there is a requirement for the models under study to support the Fill-in-the-Middle (FIM) objective. The FIM task is a common step in the pre-training procedures of LLMs [9, 10, 25, 38, 71]. However, as shown with some of the models under study, the versions released by the developers may not natively support this task. Therefore, studying such models for dataset inclusion using TraWiC would require either fine-tuning these models to support the FIM task or having access to a version that already does.

8 THREATS TO VALIDITY
In this section, we report on threats to the validity of our study.
Threats to internal validity concern factors internal to our study that could have influenced it. Our reliance on MIAs as the primary mechanism for detecting code inclusion can be considered a potential threat to the validity of our research. MIAs have been successfully used in other contexts, and our results show that leveraging MIAs is also an effective approach for detecting code inclusion. Additionally, our choice of comparison criteria for syntactic/semantic elements might not encompass all aspects of the code inclusion detection task. We mitigate this threat by experimenting with multiple values of edit distance and conducting a rigorous sensitivity analysis to show the effectiveness of our approach. Finally, the bias introduced by the randomness of sampling a small portion of the original datasets of the LLMs under study can be considered a potential threat. While we conducted multiple experimental runs and fixed hyperparameters to ensure consistency and reproducibility, we did not perform statistical significance tests, given the large size of the original dataset and the resources available to us. We mitigate this threat by conducting our study on multiple LLMs and separate datasets. However, future work should include multiple experimental runs and statistical tests to better account for and mitigate the effects of random bias.
Threats to construct validity concern the relationship between theory and observation. The first threat to our construct validity involves our experimental design. To mitigate this threat and to ensure that the results of code inclusion detection were not only valid for certain scripts and projects, we ran our approach using random sampling of both scripts and projects from the generated dataset to validate the results. We acknowledge that the results of detecting code inclusion using NiCad and JPlag may vary, as the underlying dataset is extremely large and a pair-wise comparison across the entire dataset is not feasible. We mitigate this issue by increasing the percentage of the sampled dataset to obtain a comprehensive analysis and by using two popular token-based code clone detection approaches. Finally, there exists the threat that our fine-tuning of the models is selective based on the model under study. We address this threat by fine-tuning two models on the same dataset until the training loss on both models stops improving. We include the details of our fine-tuning regimen in Section 11.

Threats to external validity concern the generalization of our findings. We identify the replicability of our results as the most important external threat. Given that replicating a certain output on LLMs is difficult, we release the generated dataset alongside TraWiC's code [72]. Moreover, there exists the possibility that our approach may not perform as reported on other code models. Since SantaCoder's introduction, many larger code models with higher performance have been introduced. There exist larger and more capable models, such as other versions of Llama-2 with higher numbers of parameters and Llama-3. It should be noted that the majority of these models are so large that they require computing resources that are not available outside of the enterprise. Furthermore, the training datasets of these large models are not available. Given such limitations, SantaCoder, Mistral 7B, and Llama-2 7B were the only feasible options for this study. Considering that as a model's capacity increases so does its internal ability to remember instances from its training dataset [17], and that our approach is model-agnostic, it is very likely that our approach would also perform well even on larger models.

9 CONCLUSION
In this study, we introduced TraWiC, a model-agnostic, interpretable approach for detecting code inclusion. TraWiC exploits the memorization ability of LLMs trained on code to detect whether code from a project (a collection of scripts) was included in a model's training dataset. Evaluation results show that TraWiC can detect whether a project was included in an LLM's training dataset with a recall of up to 99.19%. Our results also show that TraWiC significantly outperforms code clone detection approaches in identifying dataset inclusion and is robust to noise. In the future, we plan to test TraWiC on more capable LLMs trained on code. We also plan to investigate what other aspects of code can be used for conducting MIAs on a code model. We aim to use deep reinforcement learning to train a model that can detect dataset inclusion based on the outputs of a code model by comparing the AST representation of the generated output to the AST representation of the input. Doing so would considerably lower TraWiC's performance bottleneck and allow for constructing an end-to-end solution for dataset inclusion detection that is more robust against code obfuscation.

10 ACKNOWLEDGMENTS
This work is partially supported by the Fonds de Recherche du Québec (FRQ), the Canadian Institute for Advanced Research (CIFAR), and the Natural Sciences and Engineering Research Council of Canada (NSERC).

REFERENCES
[1] 2023. Common Lisp Style Guide | Common Lisp. https://lisp-lang.org/style-guide/
[2] 2023. PEP 8 – Style guide for Python code | peps.python.org. https://peps.python.org/pep-0008/
[3] 2023. Simian Similarity Analyzer. https://simian.quandarypeak.com/
[4] 2023. The state of AI in 2023: Generative AI's breakout year. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year#
[5] Emad Aghajani, Csaba Nagy, Mario Linares-Vásquez, Laura Moreno, Gabriele Bavota, Michele Lanza, and David C. Shepherd. 2020. Software documentation: the practitioners' perspective. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering. 590–601.
[6] Emad Aghajani, Csaba Nagy, Olga Lucero Vega-Márquez, Mario Linares-Vásquez, Laura Moreno, Gabriele Bavota, and Michele Lanza. 2019. Software documentation issues unveiled. In 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE). IEEE, 1199–1210.
[7] Qurat Ul Ain, Wasi Haider Butt, Muhammad Waseem Anwar, Farooque Azam, and Bilal Maqbool. 2019. A systematic review on code clone detection. IEEE Access 7 (2019), 86121–86144.
[8] Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, and Sumit Sanghai. 2023. GQA: Training generalized multi-query transformer models from multi-head checkpoints. arXiv preprint arXiv:2305.13245 (2023).
[9] Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. 2023. SantaCoder: don't reach for the stars! arXiv preprint arXiv:2301.03988 (2023).
[10] Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. PaLM 2 technical report. arXiv preprint arXiv:2305.10403 (2023).
[11] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732 (2021).
[12] Mohammad Bavarian, Heewoo Jun, Nikolas Tezak, John Schulman, Christine McLeavey, Jerry Tworek, and Mark Chen. 2022. Efficient Training of Language Models to Fill in the Middle. arXiv:2207.14255 [cs.CL]
[13] Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150 (2020).
[14] Leo Breiman. 2017. Classification and regression trees. Routledge.
[15] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. arXiv:2005.14165 [cs.CL]
[16] Christian Bunse. 2018. On the impact of code obfuscation to software energy consumption. In From Science to Society: New Trends in Environmental Informatics. Springer, 239–249.
[17] Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. 2022. Quantifying memorization across neural language models. arXiv preprint arXiv:2202.07646 (2022).
[18] Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. 2021. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21). 2633–2650.
[19] Kent K Chang, Mackenzie Cramer, Sandeep Soni, and David Bamman. 2023. Speak, memory: An archaeology of books known to ChatGPT/GPT-4. arXiv preprint arXiv:2305.00118 (2023).
[20] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 (2021).
[21] Dami Choi, Yonadav Shavit, and David Duvenaud. 2023. Tools for Verifying Neural Models' Training Data. arXiv preprint arXiv:2307.00682 (2023).
[22] Matteo Ciniselli, Luca Pascarella, and Gabriele Bavota. 2022. To what extent do deep learning-based code recommenders generate predictions by cloning code from the training set?. In Proceedings of the 19th International Conference on Mining Software Repositories. 167–178.
[23] Robin David, Luigi Coniglio, and Mariano Ceccato. 2020. QSynth - a program synthesis based approach for binary code deobfuscation. In BAR 2020 Workshop.
[24] Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2024. QLoRA: Efficient finetuning of quantized LLMs. Advances in Neural Information Processing Systems 36 (2024).
[25] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
[26] Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su, Shengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, et al. 2023. Parameter-efficient fine-tuning of large-scale pre-trained language models. Nature Machine Intelligence 5, 3 (2023), 220–235.
[27] Dror G Feitelson, Ayelet Mizrahi, Nofar Noy, Aviad Ben Shabat, Or Eliyahu, and Roy Sheffer. 2020. How developers choose names. IEEE Transactions on Software Engineering 48, 1 (2020), 37–52.
[28] Joshua Feldman, Joe Davison, and Alexander M Rush. 2019. Commonsense knowledge mining from pretrained models. arXiv preprint arXiv:1909.00505 (2019).
[29] Beat Fluri, Michael Wursch, and Harald C Gall. 2007. Do code and comments co-evolve? On the relation between source code and comment changes. In 14th Working Conference on Reverse Engineering (WCRE 2007). IEEE, 70–79.
[30] Aritra Ghosh, Naresh Manwani, and PS Sastry. 2017. On the robustness of decision tree learning under label noise. In Advances in Knowledge Discovery and Data Mining: 21st Pacific-Asia Conference, PAKDD 2017, Jeju, South Korea, May 23-26, 2017, Proceedings, Part I 21. Springer, 685–697.
[31] Dorsaf Haouari, Houari Sahraoui, and Philippe Langlais. 2011. How good is your comment? A study of comments in Java programs. In 2011 International Symposium on Empirical Software Engineering and Measurement. IEEE, 137–146.
[32] Xinlei He, Rui Wen, Yixin Wu, Michael Backes, Yun Shen, and Yang Zhang. 2021. Node-level membership inference attacks against graph neural networks. arXiv preprint arXiv:2102.05429 (2021).
[33] Hongsheng Hu, Zoran Salcic, Gillian Dobbie, Yi Chen, and Xuyun Zhang. 2021. EAR: An enhanced adversarial regularization approach against membership inference attacks. In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 1–8.
[34] Hongsheng Hu, Zoran Salcic, Lichao Sun, Gillian Dobbie, Philip S Yu, and Xuyun Zhang. 2022. Membership inference attacks on machine learning: A survey. ACM Computing Surveys (CSUR) 54, 11s (2022), 1–37.
[35] Wei Hua, Yulei Sui, Yao Wan, Guangzhong Liu, and Guandong Xu. 2020. FCCA: Hybrid code representation for functional clone detection using attention networks. IEEE Transactions on Reliability 70, 1 (2020), 304–318.
[36] Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, and Yinzhi Cao. 2021. Practical blind membership inference attack via differential comparisons. arXiv preprint arXiv:2101.01341 (2021).
[37] Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, and Neil Zhenqiang Gong. 2019. MemGuard: Defending against black-box membership inference attacks via adversarial examples. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 259–274.
[38] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7B. arXiv preprint arXiv:2310.06825 (2023).
[39] Seoyeon Kang, Sujeong Lee, Yumin Kim, Seong-Kyun Mok, and Eun-Sun Cho. 2021. Obfus: An obfuscation tool for software copyright and vulnerability protection. In Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy. 309–311.
[40] Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra, and Harm de Vries. 2022. The Stack: 3 TB of permissively licensed source code. Preprint (2022).
[41] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521, 7553 (2015), 436–444.
[42] Klas Leino and Matt Fredrikson. 2020. Stolen memories: Leveraging model memorization for calibrated white-box membership inference. In 29th USENIX Security Symposium (USENIX Security 20). 1605–1622.
[43] V. I. Levenshtein. 1966. Binary Codes Capable of Correcting Deletions, Insertions and Reversals. Soviet Physics Doklady 10 (Feb. 1966), 707.
[44] Yunhui Long, Vincent Bindschaedler, Lei Wang, Diyue Bu, Xiaofeng Wang, Haixu Tang, Carl A Gunter, and Kai Chen. 2018. Understanding membership inferences on well-generalized learning models. arXiv preprint arXiv:1802.04889 (2018).
[45] Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023. WizardCoder: Empowering Code Large Language Models with Evol-Instruct. arXiv preprint arXiv:2306.08568 (2023).
[46] Thibaud Lutellier, Hung Viet Pham, Lawrence Pang, Yitong Li, Moshi Wei, and Lin Tan. 2020. CoCoNuT: combining context-aware neural translation models using ensemble for program repair. In Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis. 101–114.
[47] Md Abdullah Al Mamun, Christian Berger, and Jörgen Hansson. 2017. Correlations of software code metrics: an empirical study. In Proceedings of the 27th International Workshop on Software Measurement and 12th International Conference on Software Process and Product Measurement. 255–266.
[48] Robert C Martin. 2009. Clean code: a handbook of agile software craftsmanship. Pearson Education.
[49] Luca Melis, Congzheng Song, Emiliano De Cristofaro, and Vitaly Shmatikov. 2019. Exploiting unintended feature leakage in collaborative learning. In 2019 IEEE Symposium on Security and Privacy (SP). IEEE, 691–706.
[50] Shervin Minaee, Tomas Mikolov, Narjes Nikzad, Meysam Chenaghlu, Richard Socher, Xavier Amatriain, and Jianfeng Gao. 2024. Large language models: A survey. arXiv preprint arXiv:2402.06196 (2024).
[51] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv:1802.05365 [cs.CL]
[52] Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? arXiv preprint arXiv:1909.01066 (2019).
[53] Lutz Prechelt, Guido Malpohl, Michael Philippsen, et al. 2002. Finding plagiarisms among a set of programs with JPlag. J. Univers. Comput. Sci. 8, 11 (2002), 1016.
[54] Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. Science China Technological Sciences 63, 10 (2020), 1872–1897.
[55] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. (2019).
[56] Chavi Ralhan and Navneet Malik. 2021. A Study of Software Clone Detection Techniques for Better Software Maintenance and Reliability. In 2021 International Conference on Computing Sciences (ICCS). IEEE, 249–253.
[57] Lucas Rosenblatt, Xiaoyan Liu, Samira Pouyanfar, Eduardo de Leon, Anuj Desai, and Joshua Allen. 2020. Differentially private synthetic data: Applied evaluations and enhancements. arXiv preprint arXiv:2011.05537 (2020).
[58] Chanchal Kumar Roy and James R Cordy. 2007. A survey on software clone detection research. Queen's School of Computing TR 541, 115 (2007), 64–68.
[59] Chanchal K Roy and James R Cordy. 2008. NICAD: Accurate detection of near-miss intentional clones using flexible pretty-printing and code normalization. In 2008 16th IEEE International Conference on Program Comprehension. IEEE, 172–181.
[60] Chanchal K Roy, James R Cordy, and Rainer Koschke. 2009. Comparison and evaluation of code clone detection techniques and tools: A qualitative approach. Science of Computer Programming 74, 7 (2009), 470–495.
[61] Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. 2023. Code Llama: Open Foundation Models for Code. arXiv:2308.12950 [cs.CL]
[62] Neha Saini, Sukhdip Singh, et al. 2018. Code clones: Detection and management. Procedia Computer Science 132 (2018), 718–727.
[63] Hitesh Sajnani, Vaibhav Saini, Jeffrey Svajlenko, Chanchal K Roy, and Cristina V Lopes. 2016. SourcererCC: Scaling code clone detection to big-code. In Proceedings of the 38th International Conference on Software Engineering. 1157–1168.
[64] Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, and Michael Backes. 2018. ML-Leaks: Model and data independent membership inference attacks and defenses on machine learning models. arXiv preprint arXiv:1806.01246 (2018).
[65] Savio Antony Sebastian, Saurabh Malgaonkar, Paulami Shah, Mudit Kapoor, and Tanay Parekhji. 2016. A study & review on code obfuscation. In 2016 World Conference on Futuristic Trends in Research and Innovation for Social Welfare (Startup Conclave). IEEE, 1–6.
[66] Amazon Web Services. 2023. Code references - CodeWhisperer. https://docs.aws.amazon.com/codewhisperer/latest/userguide/code-reference.html
[67] Virat Shejwalkar and Amir Houmansadr. 2021. Membership privacy for machine learning models through knowledge transfer. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35. 9549–9557.
[68] Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 3–18.
[69] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615 (2022).
[70] Kushal Tirumala, Aram Markosyan, Luke Zettlemoyer, and Armen Aghajanyan. 2022. Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems 35 (2022), 38274–38290.
[71] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023).
[72] trawic. 2024. TraWiC. https://github.com/CommissarSilver/TraWiC.
[73] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2023. Attention Is All You Need. arXiv:1706.03762 [cs.CL]
[74] Jiapeng Wang and Yihong Dong. 2020. Measurement of text similarity: a survey. Information 11, 9 (2020), 421.
[75] Michael J Wise. 1993. String similarity via greedy string tiling and running Karp-Rabin matching. Online Preprint, Dec 119, 1 (1993), 1–17.
[76] Chugui Xu, Ju Ren, Deyu Zhang, Yaoxue Zhang, Zhan Qin, and Kui Ren. 2019. GANobfuscator: Mitigating information leakage under GAN via differential privacy. IEEE Transactions on Information Forensics and Security 14, 9 (2019), 2358–2371.
[77] Dongpeng Xu, Jiang Ming, and Dinghao Wu. 2016. Generalized dynamic opaque predicates: A new control flow obfuscation method. In Information Security: 19th International Conference, ISC 2016, Honolulu, HI, USA, September 3-6, 2016. Proceedings 19. Springer, 323–342.
[78] Zhou Yang, Zhipeng Zhao, Chenyu Wang, Jieke Shi, Dongsun Kim, Donggyun Han, and David Lo. 2023. Gotcha! This Model Uses My Code! Evaluating Membership Leakage Risks in Code Models. arXiv preprint arXiv:2310.01166 (2023).
[79] Annie T T Ying, James L Wright, and Steven Abrams. 2005. Source code that talks: an exploration of Eclipse task comments and their implication to repository mining. ACM SIGSOFT Software Engineering Notes 30, 4 (2005), 1–5.
[80] Jerrold H Zar. 2005. Spearman rank correlation. Encyclopedia of Biostatistics 7 (2005).
[81] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2021. Understanding deep learning (still) requires rethinking generalization. Commun. ACM 64, 3 (2021), 107–115.
[82] Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, and Nicholas Carlini. 2021. Counterfactual memorization in neural language models. arXiv preprint arXiv:2112.12938 (2021).
[83] Jian Zhang, Xu Wang, Hongyu Zhang, Hailong Sun, Kaixuan Wang, and Xudong Liu. 2019. A novel neural source code representation based on abstract syntax tree. In Proceedings of the 41st International Conference on Software Engineering. IEEE Press, 783–794.
[84] Peihua Zhang, Chenggang Wu, Mingfan Peng, Kai Zeng, Ding Yu, Yuanming Lai, Yan Kang, Wei Wang, and Zhe Wang. 2023. Khaos: The Impact of Inter-procedural Code Obfuscation on Binary Diffing Techniques. In Proceedings of the 21st ACM/IEEE International Symposium on Code Generation and Optimization. 55–67.
[85] Sheng Zhang and Hui Li. 2023. Code Membership Inference for Detecting Unauthorized Data Use in Code Pre-trained Language Models. arXiv preprint arXiv:2312.07200 (2023).
[86] Junxiang Zheng, Yongzhi Cao, and Hanpin Wang. 2021. Resisting membership inference attacks through knowledge distillation. Neurocomputing 452 (2021), 114–126.
[87] Yan Zhuang. 2018. The performance cost of software obfuscation for Android applications. Computers & Security 73 (2018), 57–72.

11 APPENDIX

11.1 Grid search parameters for classifiers

Here, we present the grid search parameters that were used to train the best classifiers. It should be noted that the best classifier was selected based on the highest F-score achieved on the test set. All scripts for training these models are available at [72].

11.1.1 Random forest.
• Number of estimators: This hyperparameter is used to indicate the number of trees in the forest. We used values of 50, 100, and 200.
• Number of features for best split: The number of features to consider when looking for the best split. We used the values "sqrt", indicating the square root of the number of features in the input; and "log2", indicating the binary logarithm of the number of features in the input.
• Max tree depth: Maximum depth allowed for a single tree in the forest. We used the values of 10, 20, and 30.
• Split criterion: The function to measure the quality of a split, with "gini" indicating Gini impurity and "entropy" indicating Shannon information gain.

11.1.2 SVM.
• C: The regularization parameter, with the strength of regularization being inversely proportional to C. We used values of 0.1, 1, 10, and 100.
• Kernel: Specifies the kernel type to be used in the algorithm. We used radial basis and linear functions.
• Kernel coefficient: The kernel coefficient is a parameter that determines the shape of the kernel function used to map the input data into a higher-dimensional feature space. We used values of 1, 0.1, 0.01, and 0.001.

11.1.3 XGB.
• Learning rate: Also denoted as "eta", this is the step size shrinkage used in the update. We used values of 0.01, 0.1, and 0.5.
• Max depth: Maximum depth of a tree. We used values of 3, 5, and 7.
• Number of estimators: Number of boosting rounds or trees to build. We used values of 50, 100, and 200.
• Subsample: Subsample ratio of the training instances. For example, a subsample of 0.5 means that we randomly sample half of the training data before growing the trees. We used values of 0.6, 0.8, and 1.0.
• Subsample ratio of columns: Subsample ratio of columns when constructing each tree. Subsampling occurs once for every tree constructed. We used values of 0.6, 0.8, and 1.0.

11.2 Performance of XGB and SVM

Both SVM and XGB classifiers were trained on the same dataset as the random forest model. Tables 11 and 12 show their complete performance metrics, respectively.

Table 11. Results of TraWiC's code inclusion detection with different edit distance thresholds - SVM

Edit Distance Threshold | Repository Inclusion Criterion | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%)
20 | Considering only a single positive | 84.80 | 79.41 | 85.53 | 86.27 | 63.00
20 | Considering a threshold of 0.4 | 77.66 | 73.60 | 82.30 | 87.53 | 40.88
20 | Considering a threshold of 0.6 | 80.81 | 70.13 | 77.95 | 75.29 | 58.01
40 | Considering only a single positive | 74.98 | 73.67 | 83.20 | 93.45 | 27.99
40 | Considering a threshold of 0.4 | 70.66 | 70.30 | 81.89 | 97.37 | 10.11
40 | Considering a threshold of 0.6 | 72.69 | 69.97 | 80.60 | 90.43 | 24.47
60 | Considering only a single positive | 74.12 | 72.93 | 82.43 | 92.85 | 29.80
60 | Considering a threshold of 0.4 | 71.11 | 70.72 | 82.20 | 97.39 | 10.22
60 | Considering a threshold of 0.6 | 74.29 | 72.53 | 82.37 | 92.42 | 27.42
70 | Considering only a single positive | 76.60 | 75.00 | 84.49 | 94.19 | 24.95
70 | Considering a threshold of 0.4 | 73.41 | 72.67 | 83.78 | 97.56 | 7.56
70 | Considering a threshold of 0.6 | 74.87 | 72.19 | 82.82 | 92.67 | 18.60
80 | Considering only a single positive | 72.83 | 72.07 | 82.22 | 94.40 | 23.74
80 | Considering a threshold of 0.4 | 70.79 | 69.97 | 81.91 | 97.17 | 6.59
80 | Considering a threshold of 0.6 | 72.16 | 69.97 | 81.24 | 92.92 | 16.48

Table 12. Results of TraWiC's code inclusion detection with different edit distance thresholds - XGB

Edit Distance Threshold | Repository Inclusion Criterion | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%)
20 | Considering only a single positive | 84.94 | 80.05 | 86.04 | 87.18 | 63.00
20 | Considering a threshold of 0.4 | 77.85 | 75.08 | 83.53 | 90.12 | 39.78
20 | Considering a threshold of 0.6 | 82.31 | 73.27 | 80.53 | 78.82 | 60.22
40 | Considering only a single positive | 75.23 | 74.73 | 84.01 | 95.12 | 27.64
40 | Considering a threshold of 0.4 | 70.69 | 70.63 | 82.16 | 98.09 | 9.57
40 | Considering a threshold of 0.6 | 73.46 | 72.44 | 82.48 | 94.02 | 24.47
60 | Considering only a single positive | 73.29 | 72.61 | 82.49 | 94.32 | 25.59
60 | Considering a threshold of 0.4 | 70.58 | 70.39 | 82.18 | 98.34 | 6.99
60 | Considering a threshold of 0.6 | 72.71 | 71.38 | 82.02 | 94.08 | 19.89
70 | Considering only a single positive | 76.61 | 74.79 | 84.32 | 93.75 | 25.34
70 | Considering a threshold of 0.4 | 73.32 | 72.35 | 83.56 | 97.11 | 7.56
70 | Considering a threshold of 0.6 | 74.59 | 70.90 | 81.85 | 90.67 | 19.19
80 | Considering only a single positive | 73.38 | 72.98 | 82.78 | 94.95 | 25.42
80 | Considering a threshold of 0.4 | 71.26 | 70.79 | 82.39 | 97.64 | 8.24
80 | Considering a threshold of 0.6 | 71.87 | 69.80 | 81.23 | 93.40 | 14.84

Table 13.
Results of TraWiC's code inclusion detection with different edit distance thresholds and different noise ratios - Semantic - SantaCoder

Edit Distance Threshold | Noise Ratio | Repository Inclusion Criterion | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%)
20 | 0.1 | Considering only a single positive | 87.19 | 82.29 | 87.62 | 88.04 | 68.08
20 | 0.1 | Considering a threshold of 0.4 | 79.56 | 76.82 | 86.17 | 93.97 | 20.00
20 | 0.1 | Considering a threshold of 0.6 | 84.55 | 73.51 | 82.30 | 80.17 | 51.43
20 | 0.5 | Considering only a single positive | 83.83 | 74.89 | 81.97 | 80.19 | 61.81
20 | 0.5 | Considering a threshold of 0.4 | 79.14 | 76.82 | 86.27 | 94.83 | 17.14
20 | 0.5 | Considering a threshold of 0.6 | 85.32 | 74.17 | 82.67 | 80.17 | 54.29
20 | 0.9 | Considering only a single positive | 81.70 | 51.86 | 55.22 | 41.70 | 76.94
20 | 0.9 | Considering a threshold of 0.4 | 80.15 | 75.50 | 85.02 | 90.52 | 25.71
20 | 0.9 | Considering a threshold of 0.6 | 87.18 | 61.59 | 70.10 | 58.62 | 71.43
40 | 0.1 | Considering only a single positive | 80.47 | 79.79 | 86.94 | 94.54 | 43.36
40 | 0.1 | Considering a threshold of 0.4 | 77.70 | 77.48 | 87.12 | 99.14 | 5.71
40 | 0.1 | Considering a threshold of 0.6 | 81.29 | 80.79 | 88.63 | 97.41 | 25.71
40 | 0.5 | Considering only a single positive | 79.07 | 76.86 | 84.95 | 91.78 | 40.04
40 | 0.5 | Considering a threshold of 0.4 | 77.18 | 76.82 | 86.79 | 99.14 | 2.86
40 | 0.5 | Considering a threshold of 0.6 | 81.16 | 80.13 | 88.19 | 96.55 | 25.71
40 | 0.9 | Considering only a single positive | 77.96 | 70.27 | 79.53 | 81.17 | 43.36
40 | 0.9 | Considering a threshold of 0.4 | 78.23 | 78.15 | 87.45 | 99.14 | 8.57
40 | 0.9 | Considering a threshold of 0.6 | 81.62 | 80.13 | 88.10 | 95.69 | 28.57
80 | 0.1 | Considering only a single positive | 79.31 | 78.88 | 86.51 | 95.14 | 38.75
80 | 0.1 | Considering a threshold of 0.4 | 77.70 | 77.48 | 87.12 | 99.14 | 5.71
80 | 0.1 | Considering a threshold of 0.6 | 80.71 | 80.13 | 88.28 | 97.41 | 22.86
80 | 0.5 | Considering only a single positive | 78.03 | 76.76 | 85.16 | 93.72 | 34.87
80 | 0.5 | Considering a threshold of 0.4 | 77.33 | 77.48 | 87.22 | 100.00 | 2.86
80 | 0.5 | Considering a threshold of 0.6 | 80.71 | 80.13 | 88.28 | 97.41 | 22.86
80 | 0.9 | Considering only a single positive | 78.13 | 75.21 | 83.86 | 90.51 | 37.45
80 | 0.9 | Considering a threshold of 0.4 | 77.85 | 78.15 | 87.55 | 100.00 | 5.71
80 | 0.9 | Considering a threshold of 0.6 | 81.43 | 81.46 | 89.06 | 98.28 | 25.71

11.3 TraWiC's performance on different noise levels

Here, we include the complete results of TraWiC's inclusion detection against various levels of noise on script-level granularity and project-level granularity with different inclusion thresholds. Tables 13-21 display our results on noise injected at the semantic, syntactic, and both levels, respectively.

11.4 Details of Fine-tuning
• LoRA attention dimension: 64
• LoRA dropout: 0.1
• Max gradient norm: 0.3
• Learning rate: 0.002
• Weight decay: 0.001
• Warmup ratio: 0.03

Figure 6 displays the training loss of both models during fine-tuning. The blue line indicates the loss of Llama-2 and the red line indicates Mistral's loss during the fine-tuning process.

Received January 2024; revised 2024; accepted 2024

Table 14.
Results of TraWiC's code inclusion detection with different edit distance thresholds and different noise ratios - Syntactic - SantaCoder

Edit Distance Threshold | Noise Ratio | Repository Inclusion Criterion | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%)
20 | 0.1 | Considering only a single positive | 88.00 | 83.09 | 88.13 | 88.27 | 70.30
20 | 0.1 | Considering a threshold of 0.4 | 79.56 | 76.82 | 86.17 | 93.97 | 20.00
20 | 0.1 | Considering a threshold of 0.6 | 80.73 | 67.55 | 78.22 | 75.86 | 40.00
20 | 0.5 | Considering only a single positive | 86.41 | 79.26 | 85.23 | 84.08 | 67.34
20 | 0.5 | Considering a threshold of 0.4 | 78.68 | 74.83 | 84.92 | 92.24 | 17.14
20 | 0.5 | Considering a threshold of 0.6 | 81.13 | 66.89 | 77.48 | 74.14 | 42.86
20 | 0.9 | Considering only a single positive | 87.80 | 69.04 | 75.11 | 65.62 | 77.49
20 | 0.9 | Considering a threshold of 0.4 | 78.52 | 74.17 | 84.46 | 91.38 | 17.14
20 | 0.9 | Considering a threshold of 0.6 | 83.51 | 66.23 | 76.06 | 69.83 | 54.29
40 | 0.1 | Considering only a single positive | 80.34 | 79.41 | 86.68 | 94.10 | 43.17
40 | 0.1 | Considering a threshold of 0.4 | 78.23 | 78.15 | 87.45 | 99.14 | 8.57
40 | 0.1 | Considering a threshold of 0.6 | 81.16 | 80.13 | 88.19 | 96.55 | 25.71
40 | 0.5 | Considering only a single positive | 79.52 | 75.00 | 83.26 | 87.37 | 44.46
40 | 0.5 | Considering a threshold of 0.4 | 77.70 | 77.48 | 87.12 | 99.14 | 5.71
40 | 0.5 | Considering a threshold of 0.6 | 81.48 | 79.47 | 87.65 | 94.83 | 28.57
40 | 0.9 | Considering only a single positive | 83.74 | 64.15 | 70.97 | 61.58 | 70.48
40 | 0.9 | Considering a threshold of 0.4 | 77.93 | 76.82 | 86.59 | 97.41 | 8.57
40 | 0.9 | Considering a threshold of 0.6 | 83.61 | 77.48 | 85.71 | 87.93 | 42.86
80 | 0.1 | Considering only a single positive | 78.92 | 78.03 | 85.94 | 94.32 | 37.82
80 | 0.1 | Considering a threshold of 0.4 | 77.85 | 78.15 | 87.55 | 100.00 | 5.71
80 | 0.1 | Considering a threshold of 0.6 | 80.71 | 80.13 | 88.28 | 97.41 | 22.86
80 | 0.5 | Considering only a single positive | 78.43 | 74.95 | 83.55 | 89.39 | 39.30
80 | 0.5 | Considering a threshold of 0.4 | 77.33 | 77.48 | 87.22 | 100.00 | 2.86
80 | 0.5 | Considering a threshold of 0.6 | 82.01 | 82.12 | 89.41 | 98.28 | 28.57
80 | 0.9 | Considering only a single positive | 79.11 | 65.59 | 74.38 | 70.18 | 54.24
80 | 0.9 | Considering a threshold of 0.4 | 78.38 | 78.81 | 87.88 | 100.00 | 8.57
80 | 0.9 | Considering a threshold of 0.6 | 82.71 | 80.79 | 88.35 | 94.83 | 34.29

Table 15.
Results of TraWiC's code inclusion detection with different edit distance thresholds and different noise ratios - Combined - SantaCoder

Edit Distance Threshold | Noise Ratio | Repository Inclusion Criterion | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%)
20 | 0.1 | Considering only a single positive | 87.16 | 82.07 | 87.45 | 87.74 | 68.08
20 | 0.1 | Considering a threshold of 0.4 | 79.71 | 77.48 | 86.61 | 94.83 | 20.00
20 | 0.1 | Considering a threshold of 0.6 | 83.93 | 73.51 | 82.46 | 81.03 | 48.57
20 | 0.5 | Considering only a single positive | 83.37 | 73.03 | 80.37 | 77.58 | 61.81
20 | 0.5 | Considering a threshold of 0.4 | 79.71 | 77.48 | 86.61 | 94.83 | 20.00
20 | 0.5 | Considering a threshold of 0.6 | 85.45 | 74.83 | 83.19 | 81.03 | 54.29
20 | 0.9 | Considering only a single positive | 85.25 | 50.96 | 52.18 | 37.59 | 83.95
20 | 0.9 | Considering a threshold of 0.4 | 82.11 | 75.50 | 84.52 | 87.07 | 37.14
20 | 0.9 | Considering a threshold of 0.6 | 87.84 | 60.26 | 68.42 | 56.03 | 74.29
40 | 0.1 | Considering only a single positive | 79.51 | 78.30 | 86.00 | 93.65 | 40.41
40 | 0.1 | Considering a threshold of 0.4 | 78.23 | 78.15 | 87.45 | 99.14 | 8.57
40 | 0.1 | Considering a threshold of 0.6 | 80.71 | 80.13 | 88.28 | 97.41 | 22.86
40 | 0.5 | Considering only a single positive | 78.94 | 73.99 | 82.57 | 86.55 | 42.99
40 | 0.5 | Considering a threshold of 0.4 | 77.70 | 77.48 | 87.12 | 99.14 | 5.71
40 | 0.5 | Considering a threshold of 0.6 | 81.16 | 80.13 | 88.19 | 96.55 | 25.71
40 | 0.9 | Considering only a single positive | 84.54 | 55.21 | 59.05 | 45.37 | 79.52
40 | 0.9 | Considering a threshold of 0.4 | 79.43 | 78.15 | 87.16 | 96.55 | 17.14
40 | 0.9 | Considering a threshold of 0.6 | 80.95 | 66.23 | 76.92 | 73.28 | 42.86
80 | 0.1 | Considering only a single positive | 78.34 | 77.39 | 85.59 | 94.32 | 35.61
80 | 0.1 | Considering a threshold of 0.4 | 77.70 | 77.48 | 87.12 | 99.14 | 5.71
80 | 0.1 | Considering a threshold of 0.6 | 82.01 | 82.12 | 89.41 | 98.28 | 28.57
80 | 0.5 | Considering only a single positive | 78.12 | 74.36 | 83.15 | 88.86 | 38.56
80 | 0.5 | Considering a threshold of 0.4 | 77.70 | 77.48 | 87.12 | 99.14 | 5.71
80 | 0.5 | Considering a threshold of 0.6 | 81.29 | 80.79 | 88.63 | 97.41 | 25.71
80 | 0.9 | Considering only a single positive | 82.10 | 57.45 | 63.24 | 51.42 | 72.32
80 | 0.9 | Considering a threshold of 0.4 | 79.17 | 78.81 | 87.69 | 98.28 | 14.29
80 | 0.9 | Considering a threshold of 0.6 | 82.20 | 73.51 | 82.91 | 83.62 | 40.00

Table 16.
Results of TraWiC's code inclusion detection with different edit distance thresholds and different noise ratios - Syntactic - Llama-2

Edit Distance Threshold | Noise Ratio | Repository Inclusion Criterion | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%)
20 | 0.1 | Considering only a single positive | 76.68 | 82.57 | 81.36 | 84.23 | 81.54
20 | 0.1 | Considering a threshold of 0.4 | 80.00 | 82.93 | 82.05 | 84.21 | 81.82
20 | 0.1 | Considering a threshold of 0.6 | 70.00 | 66.67 | 62.22 | 56.00 | 76.92
20 | 0.5 | Considering only a single positive | 68.60 | 71.80 | 69.96 | 71.37 | 72.16
20 | 0.5 | Considering a threshold of 0.4 | 70.00 | 73.17 | 71.79 | 73.68 | 72.73
20 | 0.5 | Considering a threshold of 0.6 | 70.00 | 64.71 | 60.87 | 53.85 | 76.00
20 | 0.9 | Considering only a single positive | 57.75 | 61.41 | 58.89 | 60.08 | 62.54
20 | 0.9 | Considering a threshold of 0.4 | 60.00 | 60.98 | 60.00 | 60.00 | 61.90
20 | 0.9 | Considering a threshold of 0.6 | 60.00 | 56.86 | 52.17 | 46.15 | 68.00
40 | 0.1 | Considering only a single positive | 73.23 | 77.01 | 75.15 | 77.18 | 76.87
40 | 0.1 | Considering a threshold of 0.4 | 75.00 | 75.61 | 75.00 | 75.00 | 76.19
40 | 0.1 | Considering a threshold of 0.6 | 61.90 | 65.00 | 65.00 | 68.42 | 61.90
40 | 0.5 | Considering only a single positive | 57.09 | 59.44 | 57.20 | 57.31 | 61.37
40 | 0.5 | Considering a threshold of 0.4 | 75.00 | 75.61 | 75.00 | 75.00 | 76.19
40 | 0.5 | Considering a threshold of 0.6 | 57.14 | 55.00 | 57.14 | 57.14 | 52.63
40 | 0.9 | Considering only a single positive | 43.31 | 45.05 | 42.80 | 42.31 | 47.64
40 | 0.9 | Considering a threshold of 0.4 | 70.00 | 60.98 | 63.64 | 58.33 | 64.71
40 | 0.9 | Considering a threshold of 0.6 | 47.62 | 57.50 | 54.05 | 62.50 | 54.17
80 | 0.1 | Considering only a single positive | 69.53 | 74.02 | 70.36 | 74.66 | 73.57
80 | 0.1 | Considering a threshold of 0.4 | 75.00 | 65.85 | 68.18 | 62.50 | 70.59
80 | 0.1 | Considering a threshold of 0.6 | 65.00 | 60.98 | 61.90 | 59.09 | 63.16
80 | 0.5 | Considering only a single positive | 60.48 | 66.54 | 62.63 | 64.96 | 67.76
80 | 0.5 | Considering a threshold of 0.4 | 60.00 | 56.10 | 57.14 | 54.55 | 57.89
80 | 0.5 | Considering a threshold of 0.6 | 65.00 | 60.98 | 61.90 | 59.09 | 63.16
80 | 0.9 | Considering only a single positive | 51.61 | 55.14 | 51.61 | 51.61 | 58.19
80 | 0.9 | Considering a threshold of 0.4 | 60.00 | 53.66 | 55.81 | 52.17 | 55.56
80 | 0.9 | Considering a threshold of 0.6 | 60.00 | 53.66 | 55.81 | 52.17 | 55.56

Table 17.
Results of TraWiC's code inclusion detection with different edit distance thresholds and different noise ratios - Semantic - Llama-2

Edit Distance Threshold | Noise Ratio | Repository Inclusion Criterion | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%)
20 | 0.1 | Considering only a single positive | 70.93 | 74.40 | 72.62 | 74.39 | 74.40
20 | 0.1 | Considering a threshold of 0.4 | 75.00 | 80.49 | 78.95 | 83.33 | 78.26
20 | 0.1 | Considering a threshold of 0.6 | 60.00 | 64.71 | 57.14 | 54.55 | 72.41
20 | 0.5 | Considering only a single positive | 63.57 | 67.16 | 64.95 | 66.40 | 67.81
20 | 0.5 | Considering a threshold of 0.4 | 70.00 | 73.17 | 71.79 | 73.68 | 72.73
20 | 0.5 | Considering a threshold of 0.6 | 60.00 | 60.78 | 54.55 | 50.00 | 70.37
20 | 0.9 | Considering only a single positive | 53.10 | 56.40 | 53.83 | 54.58 | 57.99
20 | 0.9 | Considering a threshold of 0.4 | 70.00 | 60.98 | 63.64 | 58.33 | 64.71
20 | 0.9 | Considering a threshold of 0.6 | 50.00 | 47.06 | 42.55 | 37.04 | 58.33
40 | 0.1 | Considering only a single positive | 69.29 | 72.15 | 70.26 | 71.26 | 72.92
40 | 0.1 | Considering a threshold of 0.4 | 70.00 | 68.29 | 68.29 | 66.67 | 70.00
40 | 0.1 | Considering a threshold of 0.6 | 61.90 | 67.50 | 66.67 | 72.22 | 63.64
40 | 0.5 | Considering only a single positive | 78.94 | 73.99 | 82.57 | 86.55 | 42.99
40 | 0.5 | Considering a threshold of 0.4 | 75.98 | 80.93 | 79.10 | 82.48 | 79.73
40 | 0.5 | Considering a threshold of 0.6 | 52.38 | 60.00 | 57.89 | 64.71 | 56.52
40 | 0.9 | Considering only a single positive | 52.76 | 56.07 | 53.28 | 53.82 | 58.04
40 | 0.9 | Considering a threshold of 0.4 | 65.00 | 60.98 | 61.90 | 59.09 | 63.16
40 | 0.9 | Considering a threshold of 0.6 | 42.86 | 52.50 | 48.65 | 56.25 | 50.00
80 | 0.1 | Considering only a single positive | 66.13 | 73.46 | 69.79 | 73.87 | 73.16
80 | 0.1 | Considering a threshold of 0.4 | 70.00 | 63.41 | 65.12 | 60.87 | 66.67
80 | 0.1 | Considering a threshold of 0.6 | 65.00 | 58.54 | 60.47 | 56.52 | 61.11
80 | 0.5 | Considering only a single positive | 56.45 | 62.43 | 58.21 | 60.09 | 64.24
80 | 0.5 | Considering a threshold of 0.4 | 65.00 | 56.10 | 59.09 | 54.17 | 58.82
80 | 0.5 | Considering a threshold of 0.6 | 55.00 | 48.78 | 51.16 | 47.83 | 50.00
80 | 0.9 | Considering only a single positive | 49.60 | 54.39 | 50.20 | 50.83 | 57.43
80 | 0.9 | Considering a threshold of 0.4 | 50.00 | 48.78 | 48.78 | 47.62 | 50.00
80 | 0.9 | Considering a threshold of 0.6 | 50.00 | 43.90 | 46.51 | 43.48 | 44.44

Table 18. Results of TraWiC's code inclusion detection with different edit distance thresholds and different noise ratios - Combined - Llama-2

Edit Distance Threshold | Noise Ratio | Repository Inclusion Criterion | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%)
20 | 0.1 | Considering only a single positive | 68.60 | 71.43 | 69.69 | 70.80 | 71.97
20 | 0.1 | Considering a threshold of 0.4 | 75.00 | 75.61 | 75.00 | 75.00 | 76.19
20 | 0.1 | Considering a threshold of 0.6 | 65.00 | 62.75 | 57.78 | 52.00 | 73.08
20 | 0.5 | Considering only a single positive | 59.30 | 63.27 | 60.71 | 62.20 | 64.16
20 | 0.5 | Considering a threshold of 0.4 | 65.00 | 68.29 | 66.67 | 68.42 | 68.18
20 | 0.5 | Considering a threshold of 0.6 | 55.00 | 56.86 | 50.00 | 45.83 | 66.67
20 | 0.9 | Considering only a single positive | 53.10 | 55.66 | 53.41 | 53.73 | 57.39
20 | 0.9 | Considering a threshold of 0.4 | 55.00 | 46.34 | 50.00 | 45.83 | 47.06
20 | 0.9 | Considering a threshold of 0.6 | 45.00 | 47.06 | 40.00 | 36.00 | 57.69
40 | 0.1 | Considering only a single positive | 60.24 | 63.74 | 61.20 | 62.20 | 65.05
40 | 0.1 | Considering a threshold of 0.4 | 65.00 | 68.29 | 66.67 | 68.42 | 68.18
40 | 0.1 | Considering a threshold of 0.6 | 61.90 | 60.00 | 61.90 | 61.90 | 57.89
40 | 0.5 | Considering only a single positive | 57.09 | 61.50 | 58.47 | 59.92 | 62.80
40 | 0.5 | Considering a threshold of 0.4 | 50.00 | 53.66 | 51.28 | 52.63 | 54.55
40 | 0.5 | Considering a threshold of 0.6 | 52.38 | 55.00 | 55.00 | 57.89 | 52.38
40 | 0.9 | Considering only a single positive | 54.33 | 58.69 | 55.53 | 56.79 | 60.27
40 | 0.9 | Considering a threshold of 0.4 | 60.00 | 48.78 | 53.33 | 48.00 | 50.00
40 | 0.9 | Considering a threshold of 0.6 | 45.00 | 47.06 | 40.00 | 36.00 | 57.69
80 | 0.1 | Considering only a single positive | 62.50 | 68.60 | 64.85 | 67.39 | 69.51
80 | 0.1 | Considering a threshold of 0.4 | 65.00 | 63.41 | 63.41 | 61.90 | 65.00
80 | 0.1 | Considering a threshold of 0.6 | 65.00 | 56.10 | 59.09 | 54.17 | 58.82
80 | 0.5 | Considering only a single positive | 55.65 | 60.19 | 56.44 | 57.26 | 62.59
80 | 0.5 | Considering a threshold of 0.4 | 55.00 | 56.10 | 55.00 | 55.00 | 57.14
80 | 0.5 | Considering a threshold of 0.6 | 50.00 | 48.78 | 48.78 | 47.62 | 50.00
80 | 0.9 | Considering only a single positive | 47.18 | 54.02 | 48.75 | 50.43 | 56.77
80 | 0.9 | Considering a threshold of 0.4 | 45.00 | 43.90 | 43.90 | 42.86 | 45.00
80 | 0.9 | Considering a threshold of 0.6 | 45.00 | 48.78 | 46.15 | 47.37 | 50.00

Table 19. Results of TraWiC's code inclusion detection with different edit distance thresholds and different noise ratios - Syntactic - Mistral

Edit Distance Threshold | Noise Ratio | Repository Inclusion Criterion | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%)
20 | 0.1 | Considering only a single positive | 82.56 | 86.48 | 85.37 | 88.38 | 84.95
20 | 0.1 | Considering a threshold of 0.4 | 85.00 | 82.93 | 82.93 | 80.95 | 85.00
20 | 0.1 | Considering a threshold of 0.6 | 84.21 | 82.50 | 82.50 | 80.00 | 85.00
20 | 0.5 | Considering only a single positive | 71.32 | 73.15 | 71.73 | 72.16 | 74.04
20 | 0.5 | Considering a threshold of 0.4 | 80.00 | 78.05 | 78.05 | 76.19 | 80.00
20 | 0.5 | Considering a threshold of 0.6 | 73.68 | 72.50 | 71.79 | 70.00 | 75.00
20 | 0.9 | Considering only a single positive | 66.15 | 66.85 | 65.24 | 65.37 | 68.20
20 | 0.9 | Considering a threshold of 0.4 | 75.00 | 65.85 | 68.18 | 62.50 | 70.59
20 | 0.9 | Considering a threshold of 0.6 | 73.68 | 70.00 | 70.00 | 66.67 | 73.68
40 | 0.1 | Considering only a single positive | 78.29 | 80.71 | 79.53 | 80.80 | 80.62
40 | 0.1 | Considering a threshold of 0.4 | 75.00 | 78.05 | 76.92 | 78.95 | 77.27
40 | 0.1 | Considering a threshold of 0.6 | 73.68 | 71.79 | 71.79 | 70.00 | 73.68
40 | 0.5 | Considering only a single positive | 70.93 | 74.77 | 72.91 | 75.00 | 74.58
40 | 0.5 | Considering a threshold of 0.4 | 60.00 | 63.41 | 61.54 | 63.16 | 63.64
40 | 0.5 | Considering a threshold of 0.6 | 68.42 | 69.23 | 68.42 | 68.42 | 70.00
40 | 0.9 | Considering only a single positive | 63.57 | 65.86 | 64.06 | 64.57 | 67.02
40 | 0.9 | Considering a threshold of 0.4 | 55.00 | 60.98 | 57.89 | 61.11 | 60.87
40 | 0.9 | Considering a threshold of 0.6 | 52.63 | 56.41 | 54.05 | 55.56 | 57.14
80 | 0.1 | Considering only a single positive | 73.26 | 78.66 | 76.67 | 80.43 | 77.30
80 | 0.1 | Considering a threshold of 0.4 | 65.00 | 70.73 | 68.42 | 72.22 | 69.57
80 | 0.1 | Considering a threshold of 0.6 | 60.00 | 56.10 | 57.14 | 54.55 | 57.89
80 | 0.5 | Considering only a single positive | 64.34 | 71.24 | 68.17 | 72.49 | 70.32
80 | 0.5 | Considering a threshold of 0.4 | 55.00 | 58.54 | 56.41 | 57.89 | 59.09
80 | 0.5 | Considering a threshold of 0.6 | 55.00 | 51.22 | 52.38 | 50.00 | 52.63
80 | 0.9 | Considering only a single positive | 55.42 | 60.75 | 57.02 | 58.72 | 62.37
80 | 0.9 | Considering a threshold of 0.4 | 50.00 | 48.78 | 48.78 | 47.62 | 50.00
80 | 0.9 | Considering a threshold of 0.6 | 50.00 | 51.22 | 50.00 | 50.00 | 52.38

Table 20. Results of TraWiC's code inclusion detection with different edit distance thresholds and different noise ratios - Semantic - Mistral

Edit Distance Threshold | Noise Ratio | Repository Inclusion Criterion | Precision (%) | Accuracy (%) | F-score (%) | Sensitivity (%) | Specificity (%)
20 | 0.1 | Considering only a single positive | 72.48 | 76.44 | 74.65 | 76.95 | 76.0
20 | 0.1 | Considering a threshold of 0.4 | 75.00 | 78.05 | 76.92 | 78.95 | 77.27
20 | 0.1 | Considering a threshold of 0.6 | 68.42 | 70.00 | 68.42 | 68.42 | 71.43
20 | 0.5 | Considering only a single positive | 67.05 | 69.02 | 67.45 | 67.84 | 70.07
20 | 0.5 | Considering a threshold of 0.4 | 75.00 | 70.73 | 71.43 | 68.18 | 73.68
20 | 0.5 | Considering a threshold of 0.6 | 68.42 | 65.00 | 65.00 | 61.90 | 68.42
20 | 0.9 | Considering only a single positive | 56.98 | 60.11 | 57.76 | 58.57 | 62.37
20 | 0.9 | Considering a threshold of 0.4 | 60.00 | 56.10 | 57.14 | 54.55 | 57.89
20 | 0.9 | Considering a threshold of 0.6 | 73.68 | 60.00 | 63.64 | 56.00 | 66.67
40 | 0.1 | Considering only a single positive | 80.25 | 79.77 | 78.44 | 76.71 | 82.59
40 | 0.1 | Considering a threshold of 0.4 | 85.00 | 82.93 | 82.93 | 80.95 | 85.00
40 | 0.1 | Considering a threshold of 0.6 | 68.42 | 71.79 | 70.27 | 72.22 | 71.43
40 | 0.5 | Considering only a single positive | 68.07 | 69.17 | 66.94 | 65.85 | 72.16
40 | 0.5 | Considering a threshold of 0.4 | 70.00 | 75.61 | 73.68 | 77.78 | 73.91
40 | 0.5 | Considering a threshold of 0.6 | 73.68 | 69.23 | 70.00 | 66.67 | 72.22
40 | 0.9 | Considering only a single positive | 58.82 | 60.50 | 57.73 | 56.68 | 63.97
40 | 0.9 | Considering a threshold of 0.4 | 50.00 | 63.41 | 57.14 | 66.67 | 61.54
40 | 0.9 | Considering a threshold of 0.6 | 63.16 | 51.28 | 55.81 | 50.00 | 53.33
80 | 0.1 | Considering only a single positive | 67.44 | 74.03 | 71.31 | 75.65 | 72.82
80 | 0.1 | Considering a threshold of 0.4 | 65.00 | 68.29 | 66.67 | 68.42 | 68.18
80 | 0.1 | Considering a threshold of 0.6 | 60.00 | 58.54 | 58.54 | 57.14 | 60.00
80 | 0.5 | Considering only a single positive | 59.69 | 67.16 | 63.51 | 67.84 | 66.67
80 | 0.5 | Considering a threshold of 0.4 | 50.00 | 56.10 | 52.63 | 55.56 | 56.52
80 | 0.5 | Considering a threshold of 0.6 | 45.00 | 48.78 | 46.15 | 47.37 | 50.00
80 | 0.9 | Considering only a single positive | 53.49 | 59.55 | 55.87 | 58.47 | 60.40
80 | 0.9 | Considering a threshold of 0.4 | 50.00 | 60.98 | 55.56 | 62.50 | 60.00
80 | 0.9 | Considering a threshold of 0.6 | 60.00 | 51.22 | 54.55 | 50.00 | 52.94

Table 21.
ResultsofTraWiC’scodeinclusiondetectionwithdifferenteditdistancethresholdsanddifferentnoiseratio-Combined- Mistral EditDistanceThreshold NoiseRatio RepositoryInclusionCriterion Precision(%) Accuracy(%) F-score(%) Sensitivity(%) Specificity(%) Consideringonlyasinglepositive 72.48 75.88 74.21 76.02 75.77 0.1 Consideringathresholdof0.4 75.00 75.61 75.00 75.00 76.19 Consideringathresholdof0.6 84.21 80.00 80.00 76.19 84.21 Consideringonlyasinglepositive 65.89 69.02 67.06 68.27 69.66 20 0.5 Consideringathresholdof0.4 70.00 73.17 71.79 73.68 72.73 Consideringathresholdof0.6 63.16 60.00 60.00 57.14 63.16 Consideringonlyasinglepositive 58.91 61.78 59.61 69.32 63.07 0.9 Consideringathresholdof0.4 55.00 53.66 53.66 52.38 55.00 Consideringathresholdof0.6 57.89 55.00 55.00 52.38 57.89 Consideringonlyasinglepositive 69.77 73.84 71.86 74.07 73.65 0.1 Consideringathresholdof0.4 75.00 78.05 76.92 78.95 77.27 Consideringathresholdof0.6 78.95 74.36 75.00 71.43 77.78 Consideringonlyasinglepositive 63.18 66.05 64.05 64.94 67.01 40 0.5 Consideringathresholdof0.4 70.00 70.73 70.00 70.00 71.43 Consideringathresholdof0.6 73.68 66.67 68.29 63.64 70.59 Consideringonlyasinglepositive 51.55 58.63 54.40 57.58 59.42 0.9 Consideringathresholdof0.4 55.00 58.54 56.41 57.89 59.09 Consideringathresholdof0.6 73.68 66.67 68.29 63.64 70.59 Consideringonlyasinglepositive 67.05 72.73 70.18 73.62 72.04 0.1 Consideringathresholdof0.4 65.00 65.85 65.00 65.00 66.67 Consideringathresholdof0.6 60.00 58.54 58.54 57.14 60.00 Consideringonlyasinglepositive 61.24 64.75 62.45 63.71 65.64 80 0.5 Consideringathresholdof0.4 50.00 56.10 52.63 55.56 56.52 Consideringathresholdof0.6 55.00 53.66 53.66 52.38 55.00 Consideringonlyasinglepositive 52.71 57.88 54.51 56.43 59.06 0.9 Consideringathresholdof0.4 40.00 39.02 39.02 38.10 40.00 Consideringathresholdof0.6 40.00 41.46 40.00 40.00 42.86 ManuscriptsubmittedtoACM46 Majdinasabetal. Fig.6. Traininglossofthefine-tuningprocess ManuscriptsubmittedtoACM |
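The "Repository Inclusion Criterion" column of the tables above admits a compact statement: a repository is flagged as included in the training set either as soon as a single file is classified positive, or when the fraction of positively classified files reaches a threshold (0.4 or 0.6). The sketch below illustrates this decision rule; the function and variable names are our own, not TraWiC's.

```python
def repository_included(file_predictions, criterion="single", threshold=0.4):
    """Decide repository-level inclusion from per-file classifier outputs.

    file_predictions: list of 0/1 labels, one per file in the repository.
    criterion: "single" flags the repository on any positive file;
               "ratio" flags it when the fraction of positives >= threshold.
    """
    if not file_predictions:
        return False
    if criterion == "single":
        return any(p == 1 for p in file_predictions)
    positive_ratio = sum(file_predictions) / len(file_predictions)
    return positive_ratio >= threshold

# A repository where 2 of 5 files are classified as seen during training.
preds = [1, 0, 0, 1, 0]
print(repository_included(preds, "single"))      # True: one positive suffices
print(repository_included(preds, "ratio", 0.4))  # True: 0.4 >= 0.4
print(repository_included(preds, "ratio", 0.6))  # False: 0.4 < 0.6
```

The threshold criterion trades sensitivity for precision, which matches the pattern visible in the tables: the 0.6 threshold generally lowers sensitivity relative to the single-positive criterion.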
IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, VOL. XX, NO. XX, MONTH 2024
arXiv:2402.10773v3 [cs.CR] 2 Aug 2024

AIM: Automated Input Set Minimization for Metamorphic Security Testing

Nazanin Bayati Chaleshtari, Yoann Marquer, Fabrizio Pastore, Member, IEEE, and Lionel C. Briand, Fellow, IEEE

N. Bayati Chaleshtari and L. Briand are with the School of Electrical and Computer Engineering of University of Ottawa, Canada; Y. Marquer and F. Pastore are with the Interdisciplinary Centre for Security, Reliability, and Trust (SnT) of the University of Luxembourg, Luxembourg; and L. Briand is also with the Lero SFI Centre for Software Research and University of Limerick, Ireland. Part of this work was done when L. Briand was affiliated with the Interdisciplinary Centre for Security, Reliability, and Trust (SnT) of the University of Luxembourg. E-mail: n.bayati@uottawa.ca, yoann.marquer@uni.lu, fabrizio.pastore@uni.lu, lbriand@uottawa.ca. Manuscript received Month DD, 2024; revised Month DD, 2024.

Abstract—Although the security testing of Web systems can be automated by generating crafted inputs, solutions to automate the test oracle, i.e., vulnerability detection, remain difficult to apply in practice. Specifically, though previous work has demonstrated the potential of metamorphic testing—security failures can be determined by metamorphic relations that turn valid inputs into malicious inputs—metamorphic relations are typically executed on a large set of inputs, which is time-consuming and thus makes metamorphic testing impractical.
We propose AIM, an approach that automatically selects inputs to reduce testing costs while preserving vulnerability detection capabilities. AIM includes a clustering-based black-box approach, to identify similar inputs based on their security properties. It also relies on a novel genetic algorithm to efficiently select diverse inputs while minimizing their total cost. Further, it contains a problem-reduction component to reduce the search space and speed up the minimization process. We evaluated the effectiveness of AIM on two well-known Web systems, Jenkins and Joomla, with documented vulnerabilities. We compared AIM's results with four baselines involving standard search approaches. Overall, AIM reduced metamorphic testing time by 84% for Jenkins and 82% for Joomla, while preserving the same level of vulnerability detection. Furthermore, AIM significantly outperformed all the considered baselines regarding vulnerability coverage.

Index Terms—System Security Testing, Metamorphic Testing, Test Suite Minimization, Many-Objective Search

I. INTRODUCTION

Web systems, from social media platforms to e-commerce and banking systems, are a backbone of our society: they manage data that is at the heart of our social and business activities (e.g., public pictures, bank transactions), and, as such, should be protected. To verify that Web systems are secure, engineers perform security testing, which consists of verifying that the software adheres to its security properties (e.g., confidentiality, availability, and integrity). Such testing is typically performed by simulating malicious users interacting with the system under test [1], [2].

At a high level, security testing does not differ from other software testing activities: it consists of providing inputs to the software under test and verifying that the software outputs are correct, based on specifications. For such a verification, a test oracle [3] is required, i.e., a mechanism for determining whether a test case has passed or failed. When test cases are manually derived, test oracles are defined by engineers and they generally consist of expressions comparing an observed output with the expected output, determined from software specifications. In security testing, when a software output differs from the expected one, then a software vulnerability (i.e., a fault affecting the security properties of the software) has been discovered.

Automatically deriving test oracles for the software under test (SUT) is called the oracle problem [4], which entails distinguishing correct from incorrect outputs for all potential inputs. Except for the verification of basic reliability properties—ensuring that the software provides a timely output and does not crash—the problem is not tractable without additional executable specifications (e.g., method post-conditions or detailed system models), which, unfortunately, are often unavailable. Further, since software vulnerabilities tend to be subtle, it is necessary to exercise each software interface with a large number of inputs (e.g., providing all the possible code injection strings to a Web form). When a large number of test inputs are needed, even in the presence of automated means to generate them (e.g., catalogs of code injection strings), testing becomes impractical if we lack solutions to automatically derive test oracles.

Metamorphic testing was proposed to alleviate the oracle problem [5] by testing not the input-output behavior of the system, but by comparing the outputs of multiple test executions [5], [6]. It relies on metamorphic relations (MRs), which are specifications expressing how to derive a follow-up input from a source input and relations between the corresponding outputs. Such an approach has been shown to be useful for security testing, also referred to as metamorphic security testing (MST) [6], [7]. MST consists in relying on MRs to modify source inputs to obtain follow-up inputs that mimic attacks and verify that known output properties captured by these MRs hold (e.g., if the follow-up input differs from the source input in some way, then the output shall be different). For instance, one may verify if URLs can be accessed by users who should not reach them through their user interface, thus enabling the detection of authorization vulnerabilities.

MST has been successfully applied to testing Web interfaces [6], [7] in an approach called MST-wi [6]; in such a context, source inputs are sequences of interactions with a Web system and can be easily derived using a Web crawler. For example, a source input may consist of two actions: performing a login and then requesting a specific URL appearing in the returned Web page. MST-wi integrates a catalog of 76 MRs enabling the identification of 101 vulnerability types.

The performance and scalability of metamorphic testing naturally depend on the number of source inputs to be processed. In the case of MST-wi, we demonstrated that scalability can be achieved through parallelism; however, such a solution may not fit all development contexts (e.g., not all companies have an infrastructure enabling parallel execution of the software under test and its test cases). Further, even when parallelization is possible, a reduction of the test execution time may provide tangible benefits, including earlier vulnerability detection. In general, what is required is an approach to minimize the number of source inputs to be used during testing.

In this work, we address the problem of minimizing source inputs used by MRs to make metamorphic testing more scalable, with a focus on Web systems, though many aspects are reusable in other domains. We propose the Automated Input Minimizer (AIM) approach, which aims at minimizing a set of source inputs (hereafter, the initial input set), while preserving the capability of MRs to detect security vulnerabilities. In more detail, this work includes the following contributions:
• We propose AIM, an approach to minimize input sets for metamorphic testing while retaining inputs able to detect vulnerabilities. Note that many steps of AIM are not specific to Web systems while others would need to be tailored to other domains (e.g., desktop applications, embedded systems). This approach includes the following novel components:
  – An extension of the MST-wi framework to retrieve output data and extract cost information about MRs without executing them.
  – A black-box approach leveraging clustering algorithms to partition the initial input set based on security-related characteristics.
  – MOCCO (Many-Objective Coverage and Cost Optimizer), a novel genetic algorithm which is able to efficiently select diverse inputs while minimizing their total cost.
  – IMPRO (Input set Minimization Problem Reduction Operator), an approach to reduce the search space to its minimal extent, and then divide it in smaller independent parts.
• We provide a prototype framework for AIM [8], integrating the above components and automating the process of input set minimization for Web systems.
• We report on an extensive empirical evaluation (about 800 hours of computation) aimed at assessing the effectiveness of AIM in terms of vulnerability detection and performance, considering 18 different AIM configurations and 4 standard search algorithms for security testing, on the Jenkins and Joomla systems, which are the most used Web-based frameworks for development automation and context management.
• We also provide a proof of the correctness of the AIM approach in the appendix.

This paper is structured as follows. We introduce background information necessary to state our problem and detail our approach (Section II). We define the problem of minimizing the initial input set while retaining inputs capable of detecting distinct software vulnerabilities (Section III). We present an overview of AIM (Section IV) and then detail our core technical solutions (Sections V to IX). We report on a large-scale empirical evaluation of AIM (Section X) and address the threats to the validity of the results (Section XI). We discuss and contrast related work (Section XII) and draw conclusions (Section XIII).

II. BACKGROUND

In this section, we present the concepts required to define our approach. We first provide a background on Metamorphic Testing (MT, § II-A), then we briefly describe MST-wi, our previous work on the application of MT to security (§ II-B). Next, we briefly describe three clustering algorithms: K-means (§ II-D1), DBSCAN (§ II-D2), and HDBSCAN (§ II-D3). Finally, we introduce optimization problems (§ II-E).

A. Metamorphic Testing

In contrast to common testing practice, which compares for each input of the system the actual output against the expected output, MT examines the relationships between outputs obtained from multiple test executions.

MT is based on Metamorphic Relations (MRs), which are necessary properties of the SUT (system under test) in relation to multiple inputs and their expected outputs [9]. The test result, either pass or failure, is determined by validating the outputs of various executions against the MR.

Formally, let S be the SUT. In the context of MT, inputs in the domain of S are called source inputs. Moreover, we call source output, and denote S(x), the output obtained from a source input x. An MR is the combination of:
• A transformation function θ, taking values in source inputs and generating new inputs called follow-up inputs. For each source input x, we call follow-up output the output S(θ(x)) of the follow-up input θ(x).
• An output relation R between source outputs and follow-up outputs.
The MR is executed with a source input x when the follow-up input θ(x) is generated, then the SUT is executed on both inputs to obtain outputs S(x) and S(θ(x)), and finally the relation R(S(x), S(θ(x))) is checked. If this relation holds, then the MR is satisfied, otherwise it is violated.

For instance, consider a system implementing the cosine function. It might not be feasible to verify the cos(x) results for all possible values of x, except for special values of x, e.g., cos(0) = 1 or cos(π/2) = 0. However, the cosine function satisfies that, for each input x, cos(π − x) = −cos(x). Based on this property, we can define an MR, where the source inputs are the possible angle values of x, the follow-up inputs are y = π − x, and the expected relation between source and follow-up outputs is cos(y) = −cos(x). The SUT is executed twice per source input, respectively with an angle x and an angle y = π − x. The outputs of both executions are then validated against the output relation. If this relation is violated, then the SUT is faulty.

1  MR CWE_668 {
2    {
3      var sep = "/";
4      for (var par=0; par < 4; par++){
5        for (Action action : Input(1).actions()){
6          var pos = action.getPosition();
7          var newUrl = action.urlPath+sep+RandomFilePath();
8          IMPLIES(
9            !isAdmin(action.user) &&
10           afterLogin(action) &&
11           CREATE(Input(2), Input(1)) &&
12           Input(2).actions().get(pos).setUrl(newUrl) &&
13           notTried(action.getUser(), newUrl)
14           ,
15           TRUE(
16             Output(Input(2),pos).noFile() ||
17             userCanRetrieveContent(action.getUser(), Output(Input(2),pos).file()) ||
18             different(Output(Input(1),pos), Output(Input(2),pos)))
19           );//end-IMPLIES
20         }//end-for
21         sep=sep+"../";
22       }//end-for
23     }//end-MR
24   }//end-package

Fig. 1. MR for CWE 668: Exposure of resource to wrong sphere [12]

B. Metamorphic Security Testing

In previous work, we automated MT in the security domain by introducing a tool named MST-wi [6]. MST-wi enables software engineers to define MRs that capture the security properties of Web systems. MST-wi includes a data collection framework that crawls the Web system under test to automatically derive source inputs. Each source input is a sequence of interactions of the legitimate user with the Web system. Moreover, MST-wi includes a Domain Specific Language (DSL) to support writing MRs for security testing. Finally, MST-wi provides a testing framework that automatically performs security testing based on the defined MRs and the input data.

In MST, follow-up inputs are generated by modifying source inputs, simulating actions an attacker might take to identify vulnerabilities in the SUT. These modifications can be done using 55 Web-specific functions enabling engineers to define complex security properties, e.g., cannotReachThroughGUI, isSupervisorOf, and isError. MRs capture security properties that hold when the SUT behaves in a safe way. If an MR, for any given source input, gets violated, then MST-wi detects a vulnerability in the SUT. In that case, we say that the MR exercised the vulnerability in the SUT. MST-wi includes a catalogue of 76 MRs, inspired by OWASP guidelines [10] and vulnerability descriptions in the CWE database [11], capturing a large variety of security properties for Web systems.

We describe in Figure 1 an MR written for CWE 668, which concerns unintended access rights [12]. This MR verifies that a file path passed in a URL should never enable a user to access data that is not already provided by the user interface. The first for loop iterates multiple times (Line 4) to cover different system paths, e.g., / and /../../. The second for loop iterates over all the actions of a source input (Line 5). Each action in the sequence is identified by its position, i.e., the first action in a sequence has position 0. The position of the current action is stored (Line 6) to be used to generate the corresponding follow-up action. A new URL is defined by concatenating the URL of the current action and a randomly selected system file path, e.g., config.xml (Line 7). For instance, if the URL of the current action is http://www.hostname.com, the new URL can be http://www.hostname.com/../../config.xml. The MR first checks if the user who is performing the action is admin (Line 9), since an admin has direct access to the system file path, and hence will not exercise a vulnerability. Then, the MR checks that the action is performed after a login (Line 10), to ensure this action requires authentication. Then, the MR generates a follow-up input, named Input(2), by copying the current sequence of actions Input(1) (Line 11) and setting the URL of the current action to the new URL (Line 12). To speed up the process, the MR verifies that the current user has not tried the same URL before (Line 13). The SUT is vulnerable if all the following conditions are violated: 1) the follow-up input does not access a file at the new URL, or 2) it accesses a file, but the user has the right to access it, or 3) the source and follow-up inputs obtain different outputs, as the follow-up input tries to access a system file without access rights, while the source input is accessing the originally crawled URL.

This MR tests the initial set of source inputs, with different URLs and users, and transforms each one several times with different system file paths, leading to a combinatorial explosion. The more executed actions, the longer the execution time. The provided MR, with an input set of 160 source inputs on Jenkins, executed more than 200,000 follow-up inputs in 17,694 minutes (about 12 days) on a professional desktop PC (Dell G7 7500, RAM 16 GB, Intel(R) Core(TM) i9-10885H CPU @ 2.40GHz). Even when parallelization is possible, a reduction of the test execution time may provide tangible benefits, including earlier vulnerability detection. This warrants an approach to minimize the initial set of source inputs, based on the cost of each input.
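The metamorphic-testing setup of § II-A can be sketched in a few lines of code, using the cosine MR described there as the example. This is our own illustration, not MST-wi code: the SUT is treated as a black box, θ derives the follow-up input, and the output relation is checked with a floating-point tolerance.

```python
import math

def sut(x):
    # System under test: here simply the library cosine, treated as a black box.
    return math.cos(x)

def follow_up(x):
    # Transformation function θ: derive the follow-up input from the source input.
    return math.pi - x

def mr_holds(x, tol=1e-9):
    # Output relation R: the follow-up output cos(π − x) must equal −cos(x).
    return abs(sut(follow_up(x)) - (-sut(x))) <= tol

# Executing the MR on several source inputs; any violation would reveal a fault.
violations = [x for x in (0.0, 0.3, 1.0, 2.5) if not mr_holds(x)]
print(violations)  # []: the MR is satisfied for all source inputs
```

A faulty SUT (e.g., one returning cos(x) + 0.01) would populate the violations list without any need for a per-input expected output, which is exactly the appeal of metamorphic testing for the oracle problem.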
Knowing the execution time of each source input would require executing them on the considered MRs, hence defeating the purpose of input set minimization. Fortunately, the execution time of an MR with a source input tends to be linearly correlated with the number of executed actions, as illustrated in the representative example below. Hence, the number of executed actions can be used as a surrogate metric for execution time.

This linear relation is illustrated in Figure 2, using randomly selected source inputs and an MR written for CWE 863, which has a shorter execution time than the one presented in Figure 1. Each point represents the execution time (x-axis) and the number of executed actions (y-axis) for a given input. The linear regression is represented by the blue line. The coefficient of determination is 97.8%, indicating a strong linear correlation between MR execution time and the number of executed actions.

Fig. 2. Linear regression between the number of executed actions and the execution time of a metamorphic relation. Each input is represented by a green dot, while the blue line depicts the linear regression model.

C. Test Suite Minimization

Test suites are prone to redundant test cases that, if not removed, can lead to a massive waste of time and resources [13], thus warranting systematic and automated strategies to eliminate redundant test cases, which are referred to as test suite minimization.

While test suite minimization techniques are very diverse [13], most of them are white-box approaches aiming at minimizing the size of the test suite while maximizing code coverage. For instance, several test minimization approaches used greedy heuristics to select test cases based on their code coverage [14], [15]. Black-box approaches include the FAST-R family of scalable approaches that leverages a representation of test source code (or command line inputs) in a vector-space model [16] and the ATM approach that is based on the abstract syntax tree of test source code [17]. Both use similarity metrics to cluster and then select test cases.

In the context of Web systems, both the system source code and test code are not available to determine similarity between source inputs, warranting a different black-box approach able to cluster and select source inputs for these systems. Moreover, to make MST scalable, MR execution time should be minimized while preserving vulnerability detection, warranting an approach that minimizes the number of executed actions (§ II-B) while covering all the input clusters.

D. Clustering

Within the clustering steps in Section VI, we rely on three well-known clustering algorithms: K-means (§ II-D1), DBSCAN (§ II-D2), and HDBSCAN (§ II-D3).

1) K-means: K-means is a clustering algorithm which takes as input a set of data points and an integer k. K-means aims to assign data points to k clusters by maximizing the similarity between individual data points within each cluster and the center of the cluster, called the centroid. The centroids are randomly initialized, then iteratively refined until a fixpoint is reached [18].

2) DBSCAN: DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is an algorithm that defines clusters using local density estimation. This algorithm takes as input a dataset and two configuration parameters: the distance threshold ϵ and the minimum number of neighbours n. The distance threshold ϵ is used to determine the ϵ-neighbourhood of each data point, i.e., the set of data points that are at most ϵ distant from it. There are three different types of data points in DBSCAN, based on the number of neighbours in the ϵ-neighbourhood of a data point:
Core: If a data point has a number of neighbours above n, it is then considered a core point.
Border: If a data point has a number of neighbours below n, but has a core point in its neighbourhood, it is then considered a border point.
Noise: Any data point which is neither a core point nor a border point is considered noise.
A cluster consists of the set of core points and border points that can be reached through their ϵ-neighbourhoods [19].

DBSCAN uses a single global ϵ value to determine the clusters. But, if the clusters have varying densities, this could lead to suboptimal partitioning of the data. HDBSCAN addresses this problem, as we describe next.

3) HDBSCAN: HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise) is an extension of DBSCAN (§ II-D2). As opposed to DBSCAN, HDBSCAN relies on different distance thresholds ϵ for each cluster, thus obtaining clusters of varying densities. HDBSCAN first builds a hierarchy of clusters, based on various ϵ values selected in decreasing order. Then, based on such a hierarchy, HDBSCAN selects as final clusters the most persistent ones, where cluster persistence represents how long a cluster remains the same without splitting when decreasing the value of ϵ. In HDBSCAN, one has only to specify one parameter, which is the minimum number of individuals required to form a cluster, denoted by n [20]. Clusters with less than n individuals are considered noise and ignored.

E. Many-Objective Optimization

Engineers are often faced with problems requiring to fulfill multiple objectives at the same time, called multi-objective problems. For instance, multi-objective search algorithms were used in test suite minimization approaches to balance cost, effectiveness, and other objectives [21], [22]. Multi-objective problems with at least (three or) four objectives are informally known as many-objective problems [23]. In both kinds of problems, one needs a solution which is a good trade-off between the objectives. Hence, we first introduce the Pareto front of a decision space (§ II-E1). Then, we describe genetic algorithms able to solve many-objective problems (§ II-E2).

1) Pareto Front: Multi- and many-objective problems can be stated as minimizing several objective functions while taking values in a given decision space. The goal of multi-objective optimization is to approximate the Pareto front in the objective space [23].

Formally, if D is the decision space and f_1(.), ..., f_n(.) are n objective functions defined on D, then the fitness vector of a decision vector x ∈ D is [f_1(x), ..., f_n(x)], hereafter denoted F(x). Moreover, a decision vector x_1 Pareto-dominates a decision vector x_2 (hereafter denoted x_1 ≻ x_2) if 1) for each 1 ≤ i ≤ n, we have f_i(x_1) ≤ f_i(x_2), and 2) there exists 1 ≤ i ≤ n such that f_i(x_1) < f_i(x_2). If there exists no decision vector x_1 such that x_1 ≻ x_2, we say that the decision vector x_2 is non-dominated. The Pareto front of D is the set {F(x_2) | x_2 ∈ D and ∀x_1 ∈ D : x_1 ⊁ x_2} of the fitness vectors of the non-dominated decision vectors. Finally, a multi/many-objective problem consists in:

  minimize_{x ∈ D} F(x) = [f_1(x), ..., f_n(x)]

where the minimize notation means that we want to find, or at least approximate, the non-dominated decision vectors, hence the ones having a fitness vector in the Pareto front [23].

2) Solving Many-Objective Problems: Multi-objective algorithms like NSGA-II [24] or SPEA2 [25], [26] are not effective in solving many-objective problems [27], [28] because of the following challenges:
1) The proportion of non-dominated solutions becomes exponentially large with an increased number of objectives. This reduces the chances of the search being stuck at a local optimum and may lead to a better convergence rate [23], but also slows down the search process considerably [27].
2) With an increased number of objectives, diversity operators (e.g., based on crowding distance or clustering) become computationally expensive [27].
3) If only a handful of solutions are to be found in a large-dimensional space, solutions are likely to be widely distant from each other. Hence, two distant parent solutions are likely to produce offspring solutions that are distant from them. In this situation, recombination operations may be inefficient and require crossover restriction or other schemes [27].

To tackle these challenges, several many-objective algorithms have been successfully applied within the software engineering community, like NSGA-III [27], [29] and MOSA [28].

NSGA-III [27], [29] is based on NSGA-II [24] and addresses these challenges by assuming a set of supplied or predefined reference points. Diversity (challenge 2) is ensured by starting the search in parallel from each of the reference points, assuming that largely spread starting points would lead to exploring all relevant parts of the Pareto front. For each parallel search, parents share the same starting point, so they are assumed to be close enough so that recombination operations (challenge 3) are more meaningful. Finally, instead of considering all solutions in the Pareto front, NSGA-III focuses on individuals which are the closest to the largest number of reference points. That way, NSGA-III considers only a small proportion of the Pareto front (addressing challenge 1).

Another many-objective algorithm, MOSA [28], does not aim to identify a single individual achieving a tradeoff between objectives but a set of individuals, each satisfying one of the objectives. Such a characteristic makes MOSA adequate for many software testing problems where it is sufficient to identify one test case (i.e., an individual) for each test objective (e.g., covering a specific branch or violating a safety requirement). To deal with challenge 1, MOSA relies on a preference criterion amongst individuals in the Pareto front, by focusing on 1) extreme individuals (i.e., test cases having one or more objective scores equal to zero), and 2) in case of a tie, the shortest test cases. These best extreme individuals are stored in an archive during the search, and the archive obtained at the last generation is the final solution. Challenges 2 and 3 are addressed by focusing the search, on each generation, on the objectives not yet covered by individuals in the archive.

III. PROBLEM DEFINITION

As the time required to execute a set of considered MRs may be large (§ II-B), we aim to minimize the set of source inputs (hereafter, input set) to be used when applying MST to a Web system, given a set of MRs. In our context, each input is a sequence of actions used to communicate with the Web system and each action leads to a different Web page.

To ensure that a minimized input set can exercise the same vulnerabilities as the original one, intuitively, we should ensure that they belong to the same input blocks. Indeed, in software testing, after identifying an important characteristic to consider for the inputs, one can partition the input space in blocks, i.e., pairwise disjoint sets of inputs, such that inputs in the same block exercise the SUT in a similar way [30]. As the manual identification of relevant input blocks for a large system is extremely costly, we rely on clustering for that purpose (Section VI). Since an input is a sequence of actions, it can exercise several input blocks. In the rest of the paper, we rely on the notion of input coverage, indicating the input blocks an input belongs to.
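To make the minimization problem concrete, here is a deliberately naive sketch. It is our own toy illustration, not AIM's search algorithm (AIM relies on the MOCCO genetic algorithm introduced later): it greedily keeps cheap inputs until every input block of the initial input set is covered, which preserves coverage but offers no optimality guarantee.

```python
def greedy_minimize(cover, cost):
    """Toy baseline: scan inputs in order of increasing cost and keep an
    input only if it covers at least one block not yet covered.

    cover: dict mapping input -> set of input blocks it belongs to.
    cost:  dict mapping input -> cost (e.g., number of executed actions).
    """
    target = set().union(*cover.values())  # Cover(I_init): all blocks to preserve
    selected, covered = [], set()
    for inp in sorted(cover, key=lambda i: cost[i]):
        if cover[inp] - covered:           # inp contributes at least one new block
            selected.append(inp)
            covered |= cover[inp]
        if covered == target:
            break
    return selected

# Hypothetical inputs: in3 covers nothing beyond what in1 and in2 cover.
cover = {"in1": {"a", "b"}, "in2": {"b", "c"}, "in3": {"b"}}
cost = {"in1": 3, "in2": 4, "in3": 6}
print(greedy_minimize(cover, cost))  # ['in1', 'in2']: full coverage, lower cost
```

A greedy pass like this can still retain inputs that later selections make redundant; addressing that, while minimizing total cost over many blocks, is what turns the problem into the many-objective search discussed next.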
cost(in) of an input in corresponds to the execution time number of blocks to be covered is typically large (≥ 4), requiredtoverifyiftheconsideredMRsaresatisfiedwiththis we deal with a many-objective problem [23]. This can be an input. Because we aim to reduce this execution time without advantage,becauseamany-objectivereformulationofcomplex having to execute the MRs, as it would defeat the purpose of problems can reduce the probability of being trapped in local inputsetminimization,weusethenumbernbActions(mr,in) optima and may lead to a better convergence rate [28]. But of actions to be executed by an MR mr using input in as a this raises several challenges (§ II-E2) that we tackle while surrogate metric for its execution time (see § II-B). We thus presenting our search algorithm (Section VIII). define the cost of an input as follows: C. Objective Functions (cid:88) cost(in)=def nbActions(mr,in) Toprovideeffectiveguidancetoasearchalgorithm,weneed mr∈MRs to quantify when an input set is closer to the objective of When cost(in) = 0, input in was not exercised by any MR covering a particular block bl than another input set. In other due to the preconditions in these MRs. Hence, in is not words, if I and I are two input sets which do not cover bl 1 2 useful for MST and can be removed without loss from the but have the same cost, we need to determine which one is initial input set. Finally, the total cost of an input set I is more desirable to achieve the goals introduced in § III-A by cost(I)=def(cid:80) cost(in). defining appropriate objective functions. in∈I 2)Tominimizethecostofmetamorphictesting,weremove In general, adding an input to an input set would not unnecessary inputs from the initial input set, but we want to only cover bl, but would also likely cover other blocks, that preserve all the inputs able to exercise distinct vulnerabilities. 
would then be covered by several inputs, thus introducing Hence, we consider, for each initial input in, its coverage the possibility to remove some of them without affecting Cover(in). In our study, Cover(in) is the set of input blocks coverage. To track of how a given block bl is covered by the input in belongs to, and we determine these input blocks inputs from a given input set I, we introduce the con- in Section VI using double-clustering. For now, we assume cept of superposition as superpos(bl,I)=def|Inputs(bl)∩I|. that the coverage of an input is known. The total coverage of For instance, if superpos(bl,I) = 1, then there is only an input set I is Cover(I)=def∪ Cover(in). one input in I covering bl. In that case, this input is in∈I We can now state our goals. We want to obtain a subset necessary to maintain the coverage of I. More generally, I ⊆I oftheinitialinputsetsuchthat1)I doesnot we quantify how much an input is necessary to ensure final init final reducetotalinputcoverage,i.e.,Cover(I )=Cover(I ) the coverage of an input set with the redundancy metric: final init and2)I hasminimalcost,i.e.,cost(I )=min{cost(I) redundancy(in,I)=defmin{superpos(bl,I) | bl ∈Cover(in)} final final | I ⊆I ∧Cover(I)=Cover(I )}. Note that a solution −1. The −1 is used to normalize the redundancy met- init init I may not be necessarily unique. ric so that its range starts at 0. If redundancy(in,I) = final 0, we say that in is necessary in I, otherwise we say that in is redundant. In the following, we denote B. A Many-Objective Problem Redundant(I)=def{in ∈I | redundancy(in,I)>0} the set of To minimize the initial input set, we focus on the selection the redundant inputs in I. of inputs that belong to the same input blocks as the initial To focus on the least costly input sets during the search input set. 
A potential solution to our problem is an input set (Section VIII), we quantify the gain obtained by removing I ⊆I init.ObtainingasolutionI abletoreachfullinputcover- redundant inputs. If I contains a redundant input in, then we ageisstraightforwardsince,foreachblockbl,onecansimply call removal step a transition from I to I \{in}. Otherwise, select an input in Inputs(bl)=def{in ∈I init | bl ∈Cover(in)}. we say that I is already reduced. Unfortunately, given two The hard part of our problem is to determine a combination redundant inputs in and in , removing in may render in 1 2 1 2 of inputs able to reach full input coverage at a minimal cost. necessary. Hence, when considering potential removal steps Hence, we have to consider an input set as a whole and not (e.g.,removingeitherin orin ),onehastoconsidertheorder 1 2 focus on individual inputs. of these steps. We represent a valid order of removal steps by This is similar to the whole suite approach [31] targeting a list of inputs [in ,...,in ] to be removed from I such that, 1 n white-box testing. They use as objective the total number of for each 0≤i<n, in is redundant in I \{in ,...,in }. i+1 1 i coveredbranches.But,inourcontext,countingthenumberof We denote ValidOrders(I) the set of valid orders of removal uncovered blocks would consider as equivalent input sets that stepsinI.Removingredundantinputsin ,...,in leadstoa 1 n miss the same number of blocks, without taking into account reductionofcostcost(in )+···+cost(in ).Foreachinputset 1 n that it may be easier to cover some blocks than others (e.g., I,weconsiderthemaximalgainfromvalidordersofremoval someblocksmaybecoveredbymanyinputs,butsomeonlyby steps: afew)orthatablockmaybecoveredbyinputswithdifferent (cid:110) |
gain(I)=defmax (cid:80) cost(in ) costs. Thus, to obtain a combination of inputs that minimizes i 1≤i≤n cost while preserving input coverage, we have to investigate (cid:12) (cid:111) (cid:12) [in ,...,in ]∈ValidOrders(I) how input sets cover each input block. (cid:12) 1 n Hence, we are interested in covering each input block as To reduce the cost of computing this gain, we prove in an individual objective, in a way similar to the coverage of the appendix (Theorem 1) that, to determine which orders ofIEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 7 removalstepsarevalid,wecanremoveinputsinanyarbitrary formulate our problem definition as a many-objective opti- order, without having to resort to backtracking to previous mization problem: inputs. Moreover, in our approach, we need to compute the minimizeF(I) gain only in situations when the number of redundant inputs I⊆Iinit is small (Section VIII), thus exhaustively computing the gain where the minimize notation means that we want to find or at is tractable. least approximate the non-dominated decision vectors having Addinganinputin 1 ∈Inputs(bl)toI wouldleadtoagain afitnessvectorontheParetofront[23].Becausewewantfull but would also result in additional cost, hence warranting we inputcoverage(§III-A),theultimategoalisanon-dominated considerthebenefit-costbalancegain(I ∪{in 1})−cost(in 1) solution I final such that: to evaluate how efficiently bl is covered by in . More gen- 1 erally, we define the potential of I to efficiently cover bl as F(I final)=[ω(cost min),0,...,0] the maximum benefit-cost balance obtained by adding inputs where cost is the cost of the cheapest subset of I with in in covering bl. But, as an input in may be necessary min init 1 1 full input coverage. to cover bl while leading to no or not enough removal steps, gain(I ∪{in }) − cost(in ) may be negative. To facilitate 1 1 normalization, we need the potential to return a non-negative IV. 
OVERVIEWOFTHEAPPROACH value, thus we shift all the benefit-cost balances for a given As stated in our problem definition (Section III), we aim objective bl by adding a dedicated term. As the potential is to reduce the cost of MST (§ II-B) by minimizing an initial a maximum, the worst case of gain(I ∪{in })−cost(in ) 2 2 inputsetwithoutremovinginputsthatarerequiredtoexercise is when gain(I ∪{in }) = 0 and cost(in ) is the minimal 2 2 distinct security vulnerabilities. To do so, we need to tackle costamongsttheinputsabletocoverbl.Hence,weobtainthe the following sub-problems (§ III-D): following definition for the potential of I in covering bl: 1) For each initial input, we need to determine its cost potential(I,bl)=defmax{gain(I ∪{in })−cost(in ) 1 1 without executing the considered MRs. | in ∈Inputs(bl)} 1 2) Foreachinitialinput,weneedtodetermineitscoverage. +min{cost(in ) | in ∈Inputs(bl)} 2 2 In the context of metamorphic testing for Web systems, We thus use the potential to define the objective function we consider input blocks based on system outputs and associated with an objective bl. input parameters. To normalize our metrics, we rely on the normalization 3) AmongstallpotentialinputsetsI ⊆I ,wesearchfor init functionω(x)=def x ,whichisusedtoreducearangeofvalues a non-dominated solution I that preserves coverage x+1 final from[0,∞)to[0,1)whilepreservingtheordering.Whenused while minimizing cost. during a search, it is less prone to precision errors and more The Automated Input Minimizer (AIM) approach relies on likelytodrivefasterconvergencetowardsanadequatesolution analyzing the output and cost corresponding to each input. thanalternatives[32].Weuseittonormalizethecostbetween AIMobtainssuchinformationthroughanewfeatureaddedto 0 and 1 and the smaller is the normalized cost, the better the MST-wi toolset to execute each input on the system and a solution is. 
For coverage, since a high potential is more retrievethecontentofthecorrespondingWebpages.Obtaining desirable, we use its complement 1 =1−ω(x) to reverse the outputs of the system is very inexpensive compared to x+1 theorder,sothatthemorepotentialaninputsethas,thelower executing the considered MRs. Moreover, to address our first itscoverageobjectivefunctionis.Wethusdefinetheobjective sub-problem, we also updated MST-wi to retrieve the cost of function corresponding to a block bl i as: an input without executing the considered MRs. We rely on (cid:26) 0 if bl ∈Cover(I) a surrogate metric (§ II-B), linearly correlated with execution f (I)=def i bli 1 otherwise time, which is inexpensive to collect (§ V-A). potential(I,bli)+1 In Step 1 (Pre-processing), AIM pre-processes the initial The lower this value, the better. If bl is covered, then i input set and the output information, by extracting relevant f (I) = 0, otherwise f (I) > 0. As expected, input sets bli bli textual content from each returned Web page (§ V-B). that cover the objective are better than input sets that do not, Toaddressthesecondsub-problem,AIMreliesonadouble- and if two input sets do not cover the objective, then the clustering approach (Section VI), which is implemented by normalized potential is used to break the tie. Step2(OutputClustering)andStep3(ActionClustering).For bothsteps,AIMreliesonstate-of-the-artclusteringalgorithms, D. Solutions to Our Problem which require to select hyper-parameter values (§ VI-A). |
EachelementinthedecisionspaceisaninputsetI ⊆I , Output clustering (§ VI-B) is performed on the pre-processed init which is associated with a fitness vector: outputs, each generated cluster corresponding to an output class. Then, for each output class identified by the Output F(I)=def[ω(cost(I)),f (I),...,f (I)] bl1 bln ClusteringStep,Actionclustering(§VI-C)firstdeterminesthe where Cover(I ) = {bl ,...,bl } denotes the n input actions whose output belongs to the considered output class, init 1 n blocks to be covered by input sets I ⊆I . then partitions these actions based on action parameters such init Hence, we can define the Pareto front formed by the non- asURL,username,andpassword,obtainingactionsubclasses. dominated solutions in our decision space (§ II-E1) and we On the completion of Step 3, AIM has mapped each input toIEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 8 considered MRs Data Collection initial input set outputs costs Automated Inputset Minimizer Step 1 Step 4 (AIM) Pre-processing Probl (e Im M PR Re Odu )ction pre- op uro tpc ue ts ssed pre i- np pr uo tc e sess ted suba cc lt aio sn ses coin mp pu ot n s ee nt ts ne ic ne ps us tsary Step 5 Genetic Algorithm (MOCCO) minimized components minimized Step 2 Step 3 Step 6 input set output classes Output Clustering Action Clustering Post-processing Fig.3. ActivitydiagramoftheAutomatedInputMinimizer(AIM)approach. a set of action subclasses, used for measuring input coverage mized input set component. as per our problem definition (§ III-A). 
Finally, after the genetic search is completed for each Topreservediversityinourinputset,andespeciallytoretain component, in Step 6 (Post-processing), AIM generates the inputsthatarenecessarytoexercisevulnerabilities,werequire minimized input set by combining the necessary inputs iden- the minimized input set generated by AIM to cover the same tified by IMPRO and the inputs from the MOCCO minimized actionsubclassesastheinitialinputset.Thatway,weincrease components (Section IX). the chances that the minimized input set contains at least one Note that, even though in our study we focus on Web sys- input able to exercise each vulnerability detectable with the tems,steps4(IMPRO),5(MOCCO),and6(post-processing), initial input set. which form the core of our solution, are generic and can be applied to any system. Moreover, step 1 (pre-processing) and Using cost and coverage information, AIM can address steps 2 and 3 (double-clustering) can be tailored to apply the last sub-problem. Since the size of the search space AIM to other domains (e.g., desktop applications, embedded exponentially grows with the number of initial inputs, the systems).AndthoughwereliedonMST-witocollectourdata, solution cannot be obtained by exhaustive search. Actually, AIMdoesnotdependonaparticulardatacollector,andusing ourproblemisanalogoustotheknapsackproblem[33],which or implementing another data collector would enable the use is NP-hard, and is thus unlikely to be solved by deterministic of our approach in other contexts. algorithms. Therefore, AIM relies on metaheuristic search to find a solution (Step 5) after reducing the search space (Step 4). V. STEP1:DATACOLLECTIONANDPRE-PROCESSING In Step 4, since the search space might be large, AIM first In Step 1, AIM determines the cost of each initial input reduces the search space to the maximal extent possible (Sec- (§ V-A) and extract meaningful textual content from the Web tion VII) before resorting to metaheuristic search. 
Precisely, pages obtained with the initial inputs (§ V-B). it relies on the Inputset Minimization Problem Reduction Operator (IMPRO) component for problem reduction, which determines the necessary inputs, removes inputs that cannot A. Input Cost be part of the solution, and partition the remaining inputs into Thecostofasourceinputisthenumberofactionsexecuted input set components that can be independently minimized. by source and follow-up inputs for the considered MRs In Step 5, AIM applies a genetic algorithm (Section VIII) (§ III-A). Note that counting the number of actions to be to minimize each component. Because existing algorithms executed is inexpensive compared to executing them on the did not entirely fit our needs, we explain why we introduce SUT, then checking for the verdict of the output relation. MOCCO (Many-Objective Coverage and Cost Optimizer), a For instance, counting the number of actions for eleven MRs novel genetic algorithm which converges towards a solution with Jenkins’ initial input set (Section X) took less than five covering the objectives at a minimal cost, obtaining a mini- minutes, while executing the MRs took days.IEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 9 B. Output Representation leading to outputs in the same output class (§ VI-C1). Then, AIM refines each action set by partitioning the Since, in this study, we focus on Web systems, the outputs actions it contains using action parameters. To do so, it of the SUT are Web pages. Fortunately, collecting these first uses the request method (§ VI-C2) to split action pages using a crawler and extracting their textual content is sets into parts. Then, we define an action distance inexpensive compared to executing MRs. Hence, we can use (§VI-C3)basedontheURL(§VI-C4)andotherparam- systemoutputstodeterminerelevantinputblocks(SectionIII). eters (§ VI-C5) of the considered actions. 
Finally, AIM We focus on textual content extracted from the Web pages reliesonSilhouetteanalysisandclusteringalgorithmsto returned by the Web system under test. We remove from the partitioneachpartofanactionsetintoactionsubclasses textual content of each Web page all the data that is shared |
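To make the Section III formalism concrete, the following is a minimal Python sketch of the cost, coverage, superposition, redundancy, gain, potential, and per-block objective functions. It is an illustration, not the authors' implementation: the three inputs, their costs, and their coverage are hypothetical, and the gain is computed exhaustively, which the paper only does when few inputs are redundant.

```python
# Illustrative sketch of the Section III definitions. The inputs ('a', 'b',
# 'c'), their costs, and their coverage are hypothetical, not the paper's data.

def omega(x):
    # Normalization function: maps [0, inf) to [0, 1) while preserving order.
    return x / (x + 1)

def cost(I, costs):
    # cost(I): total cost of an input set (sum of surrogate action counts).
    return sum(costs[i] for i in I)

def cover(I, coverage):
    # Cover(I): union of the input blocks covered by the inputs in I.
    out = set()
    for i in I:
        out |= coverage[i]
    return out

def superpos(bl, I, coverage):
    # superpos(bl, I): number of inputs in I covering block bl.
    return sum(1 for i in I if bl in coverage[i])

def redundancy(i, I, coverage):
    # redundancy(in, I): min superposition over Cover(in), shifted to start at 0.
    return min(superpos(bl, I, coverage) for bl in coverage[i]) - 1

def redundant(I, coverage):
    return {i for i in I if redundancy(i, I, coverage) > 0}

def gain(I, coverage, costs):
    # Maximal cost reduction over valid orders of removal steps (exhaustive).
    best = 0
    for i in redundant(I, coverage):
        best = max(best, costs[i] + gain(I - {i}, coverage, costs))
    return best

def potential(I, bl, inputs_of, coverage, costs):
    # Max benefit-cost balance of adding an input covering bl, shifted by the
    # minimal cost amongst the inputs covering bl so it stays non-negative.
    return (max(gain(I | {i}, coverage, costs) - costs[i] for i in inputs_of[bl])
            + min(costs[i] for i in inputs_of[bl]))

def f_block(I, bl, inputs_of, coverage, costs):
    # Objective for block bl: 0 if covered, else a normalized tie-breaker.
    if bl in cover(I, coverage):
        return 0.0
    return 1.0 / (potential(I, bl, inputs_of, coverage, costs) + 1)

def fitness(I, blocks, inputs_of, coverage, costs):
    # F(I) = [omega(cost(I)), f_bl1(I), ..., f_bln(I)]
    return [omega(cost(I, costs))] + [
        f_block(I, bl, inputs_of, coverage, costs) for bl in blocks]

# Hypothetical example: three inputs, three blocks.
costs = {"a": 3, "b": 1, "c": 2}
coverage = {"a": {1, 2}, "b": {2}, "c": {2, 3}}
I_init = set(coverage)
inputs_of = {bl: {i for i in I_init if bl in coverage[i]} for bl in (1, 2, 3)}

print(redundant(I_init, coverage))    # {'b'}: only 'b' can be removed
print(gain(I_init, coverage, costs))  # 1: removing 'b' saves its cost
```

In this toy setting, inputs 'a' and 'c' are necessary (each is the only one covering blocks 1 and 3, respectively), so a fitness vector with all block objectives at zero is reached by the subset {'a', 'c'}, matching the target form [ω(cost_min), 0, ..., 0] of § III-D.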
B. Output Representation

Since, in this study, we focus on Web systems, the outputs of the SUT are Web pages. Fortunately, collecting these pages using a crawler and extracting their textual content is inexpensive compared to executing MRs. Hence, we can use system outputs to determine relevant input blocks (Section III).

We focus on textual content extracted from the Web pages returned by the Web system under test. We remove from the textual content of each Web page all the data that is shared among many Web pages and thus cannot characterize a specific page, like system version, date, or (when present) the menu of the Web page. Moreover, to focus on the meaning of the Web page, we consider the remaining textual content not as a string of characters but as a sequence of words. Also, following standard practice in natural language processing, we apply a stemming algorithm to consider distinct words with the same stem as equivalent, for instance the singular and plural forms of the same word. Finally, we remove stopwords, numbers, and special characters, in order to focus on essential textual information.

VI. STEPS 2 AND 3: DOUBLE CLUSTERING

To reduce the cost of MST, we want to minimize an initial input set while preserving, for each vulnerability affecting the SUT, at least one input able to exercise it; of course, in practice, such vulnerabilities are not known in advance but should be discovered by MST. Hence, we have to determine in which cases two inputs are distinct enough so that both should be kept in the minimized input set, and in which cases some inputs are redundant with the ones we already selected and thus can be removed. To determine which inputs are similar and which significantly differ, we rely on clustering algorithms. Precisely, we rely on the K-means, DBSCAN, and HDBSCAN algorithms to cluster our data points. Each of them has a set of hyper-parameters to be set and we first detail how these hyper-parameters are obtained using Silhouette analysis (§ VI-A).

Since, for practical reasons, we want to avoid making assumptions regarding the nature of the Web system under test (e.g., programming language or underlying middleware), we propose a black-box approach relying on input and output information to determine which inputs we have to keep or remove. In the context of a Web system, each input is a sequence of actions, each action enables a user to access a Web page (using a POST or GET request method), and each output is a Web page. After gathering output and action information, we perform double-clustering on our data points, i.e., two clustering steps performed in sequence:

1) Output clustering (§ VI-B) uses the outputs of the Web system under test, i.e., textual data obtained by pre-processing content from Web pages (§ V-B). We define an output distance (§ VI-B1) to quantify similarity between these outputs, which is then used to run Silhouette analysis and clustering algorithms to partition outputs into output classes (§ VI-B2).
2) Action clustering (§ VI-C) then determines input coverage. First, AIM collects in the same action set actions leading to outputs in the same output class (§ VI-C1). Then, AIM refines each action set by partitioning the actions it contains using action parameters. To do so, it first uses the request method (§ VI-C2) to split action sets into parts. Then, we define an action distance (§ VI-C3) based on the URL (§ VI-C4) and other parameters (§ VI-C5) of the considered actions. Finally, AIM relies on Silhouette analysis and clustering algorithms to partition each part of an action set into action subclasses (§ VI-C6), defining our input blocks (§ III-A).

Note that double-clustering should not be confused with biclustering [34], [35], since the latter simultaneously clusters two distinct aspects (features and samples) of the data, while the former clusters only one aspect (actions, in our case, that can be seen as features) but in two consecutive steps (action outputs, then action parameters), the second refining the first one.

A. Hyper-parameters Selection

In this study, we rely on the common K-means [18], DBSCAN [19], and HDBSCAN [20] clustering algorithms (§ II-D) to determine output classes (§ VI-B) and action subclasses (§ VI-C). These clustering algorithms require a few hyper-parameters to be set. One needs to select for K-means the number of clusters k, for DBSCAN the distance threshold ϵ and the minimum number of neighbours n, and for HDBSCAN the minimum number n of individuals required to form a cluster.

To select the best values for these hyper-parameters, we rely on Silhouette analysis. Though the Silhouette score is a common metric used to determine optimal values for hyper-parameters [36], [37], it is obtained from the average Silhouette score of the considered data points. Thus, for instance, clusters with all data points having a medium Silhouette score cannot be distinguished from clusters where some data points have a very large Silhouette score while others have a very small one. Hence, having a large Silhouette score does not guarantee that all the data points are well-matched to their cluster. To quantify the variability in the distribution of Silhouette scores, we use the Gini index, a common measure of statistical dispersion. If the Gini index is close to 0, then Silhouette scores are almost equal. Conversely, if it is close to 1, then the variability in Silhouette scores across data points is large.

Hence, for our Silhouette analysis, we consider two objectives: the (average) Silhouette score and the Gini index of the Silhouette scores. The selection of hyper-parameters is therefore a multi-objective problem with two objectives. We rely on the common NSGA-II evolutionary algorithm [24] to solve this problem and approximate the Pareto front regarding both Silhouette score and Gini index. Then, we select the item in the Pareto front that has the highest Silhouette score.
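The two selection objectives can be sketched with a short example. The code below is an assumption-laden illustration: the 1-D data points and the two candidate clusterings are made up, and instead of running NSGA-II it simply evaluates both objectives for each candidate and picks the one with the highest average Silhouette score, as done on the approximated Pareto front.

```python
# Illustrative sketch of the two hyper-parameter selection objectives:
# average Silhouette score and Gini index of the per-point Silhouette scores.
# Data points and candidate clusterings are hypothetical.

def silhouette_scores(points, labels):
    # s(i) = (b - a) / max(a, b): a is the mean intra-cluster distance,
    # b the mean distance to the nearest other cluster.
    scores = []
    for i, p in enumerate(points):
        own = [q for j, q in enumerate(points) if labels[j] == labels[i] and j != i]
        if not own:
            scores.append(0.0)  # convention for singleton clusters
            continue
        a = sum(abs(p - q) for q in own) / len(own)
        b = min(
            sum(abs(p - q) for j, q in enumerate(points) if labels[j] == lab)
            / labels.count(lab)
            for lab in set(labels) if lab != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return scores

def gini(values):
    # A common Gini formulation: 0 when all values are equal, close to 1
    # under high dispersion (assumes non-negative values).
    n, mean = len(values), sum(values) / len(values)
    return sum(abs(x - y) for x in values for y in values) / (2 * n * n * mean)

points = [1.0, 1.1, 1.2, 7.9, 8.0, 8.2, 15.0, 15.1]  # made-up 1-D data
candidates = {  # hypothetical clusterings for two values of k
    2: [0, 0, 0, 1, 1, 1, 1, 1],
    3: [0, 0, 0, 1, 1, 1, 2, 2],
}

results = {}
for k, labels in candidates.items():
    s = silhouette_scores(points, labels)
    results[k] = (sum(s) / len(s), gini(s))  # (average Silhouette, Gini)

# With only two candidates, we directly pick the highest Silhouette score.
best_k = max(results, key=lambda k: results[k][0])
print(best_k)  # → 3
```

Here k = 3 both raises the average Silhouette score and lowers the Gini index compared with k = 2, which merges two well-separated groups into one cluster and thus produces unevenly matched points.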
B. Step 2: Output Clustering

Output clustering consists in defining an output distance (§ VI-B1) to quantify dissimilarities between Web system outputs, and then to partition the outputs to obtain output classes (§ VI-B2).

A user communicates with a Web system using actions. Hence, an input for a Web system is a sequence of actions (e.g., login, access to a Web page, logout). As the same action may occur several times in an input in, a given occurrence of an action is identified by its position i in the input and denoted action(in, i). Outputs of a Web system are textual data obtained by pre-processing the content from Web pages (§ V-B). The accessed Web page depends not only on the considered action, but also on the previous ones; for instance if the user has logged into the system. Hence, we denote by output(in, i) the output of the action at position i in in.

1) Output distance: In this study, we use system outputs (i.e., Web pages) to characterize system states. Hence, two actions that do not lead to the same output should be considered distinct because they bring the system into different states. More generally, dissimilarity between outputs is quantified using an output distance. Since we deal with textual data, we consider both Levenshtein and bag distances. Levenshtein distance is usually a good representation of the difference between two textual contents [38], [39]. However, computing the minimal number of edits between two strings can be costly, since the complexity of the Levenshtein distance between two strings is O(len(s_1) × len(s_2)), where len(.) is the length of the string [40]. Thus, we consider the bag distance [41] as an alternative to the Levenshtein distance, because its complexity is only O(len(s_1) + len(s_2)) [42]. But it does not take into account the order of words and is thus less precise than Levenshtein distance.

2) Output Classes: We partition the textual content we obtained from Web pages (§ V-B) using the K-means, DBSCAN, and HDBSCAN clustering algorithms, setting the hyper-parameters using Silhouette analysis (§ VI-A), and determining similarities between outputs using the chosen output distance. We call output classes the obtained clusters and we denote by OutputClass(in, i) the unique output class output(in, i) belongs to.

C. Step 3: Action Clustering

Exercising all the Web pages is not sufficient to discover all the vulnerabilities; indeed, vulnerabilities might be detected through specific combinations of parameter values associated to an action (e.g., values belonging to a submitted form). Precisely, actions on a Web system can differ with respect to a number of parameters that include the URL (allowing the action to perform a request to a Web server), the method of sending a request to the server (like GET or POST), URL parameters (e.g., http://myDomain.com/myPage?urlParameter1=value1&urlParameter2=value2), and entries in form inputs (i.e., textarea, textbox, options in select items, datalists).

Based on the obtained output classes, action clustering first determines action sets (§ VI-C1). Then, action clustering refines each action set by partitioning the actions it contains using action parameters. First, we give priority to the method used to send a request to the server, so we split each action set using the request method (§ VI-C2). Then, to quantify the dissimilarity between two actions, we define an action distance (§ VI-C3) based on URL (§ VI-C4) and other parameters (§ VI-C5). That way, action clustering refines each action set into action subclasses (§ VI-C6).

1) Action Sets: Based on the obtained output classes (§ VI-B2), AIM determines action sets such that actions leading to outputs in the same output class outCl are in the same action set:

ActionSet(outCl) =def {act | ∃in, i : action(in, i) = act ∧ OutputClass(in, i) = outCl}

Note that, because an action can have different outputs depending on the considered input, it is possible for an action to belong to several action sets, corresponding to several output classes.

2) Request Partition: Each action uses either a POST or GET method to send an HTTP request to the server. Actions (such as login) that send the parameters to the server in the message body use the POST method, while actions that send the parameters through the URL use the GET method. As this difference is meaningful for distinguishing different action types, we split each action set into two parts: the actions using a POST method and those using a GET method.

3) Action Distance: After request partition (§ VI-C2), we consider one part of an action set at a time and we refine it using an action distance quantifying dissimilarity between actions based on remaining parameters (e.g., URL or form entries). In the context of a Web system, each Web content is identified by its URL, so we give more importance to this parameter. We denote url(act_i) the URL of action act_i. For the sake of clarity, we call in the rest of the section residual parameters the parameters of an action which are not its request method nor its URL and we denote res(act_i) the residual parameters of action act_i. Since we give more importance to the URL, we represent the distance between two actions by a real value, where the integral part corresponds to the distance between their respective URLs and the decimal part to the distance between their respective residual parameters:

actionDist(act_1, act_2) =def urlDist(url(act_1), url(act_2)) + paramDist(res(act_1), res(act_2))

where the URL distance urlDist(., .) is defined in § VI-C4 and returns an integer, and the parameter distance paramDist(., .) is defined in § VI-C5 and returns a real number between 0 and 1.

4) URL distance: A URL is represented as a sequence of at least two words, separated by '://' between the first and second word, then by '/' between any other words. The length of a URL url is its number of words, denoted len(url). Given two URLs, url_1 and url_2, their lowest common ancestor is the longest prefix they have in common, denoted LCA(url_1, url_2). We define the distance between two URLs as the total number of words separating them from their lowest common ancestor:

urlDist(url_1, url_2) =def len(url_1) + len(url_2) − 2 × len(LCA(url_1, url_2))

We provide an example in Figure 4.

Fig. 4. The URL distance between http://hostname/login and http://hostname/job/try1/lastBuild is 1+3=4.

5) Parameter Distance: To quantify the dissimilarity between residual parameters, we first independently quantify the dissimilarity between pairs of parameters of the same type. Since, in our context, we exercise vulnerabilities by only using string or numerical values, we ignore parameter values of other types such as byte arrays (e.g., an image uploaded to the SUT). In other contexts, new parameter distance functions suited to other input types may be required. For strings, we use the Levenshtein distance [38], [39], whereas for numerical values we consider the absolute value of their difference [43]:

paramValDist(v_1, v_2) =def LevenshteinDist(v_1, v_2) if type(v_1) = str = type(v_2); |v_1 − v_2| if type(v_1) = int = type(v_2); undefined otherwise

Since we have parameters of different types, we normalize the parameter distance using the normalization function ω(x) = x/(x+1) (§ III-C). Then, we add these normalized distances together, and normalize the sum to obtain a result between 0 and 1. We compute the parameter distance in case of matching parameters, i.e., the number of parameters is the same and the corresponding parameters have the same type. Otherwise, we assume the largest distance possible, which is 1 due to the normalization. This is the only case where the value 1 is reached, as distance lies otherwise in [0, 1[, as expected for a decimal part (§ VI-C3):

paramDist(resids_1, resids_2) =def ω(Σ_{0 ≤ i < len(resids_1)} ω(paramValDist(resids_1[i], resids_2[i]))) if resids_1 and resids_2 have matching parameters; 1 otherwise

where resids_1 = res(act_1), resids_2 = res(act_2), and resids[i] is the i-th element of resids.

TABLE I
VALUES FOR THE EXAMPLE OF PARAMETER DISTANCE

            PageNumber   Username     Password
resids_1    10           "John"       "qwerty"
resids_2    42           "Johnny"     "qwertyuiop"

For instance, we consider two actions act_1 and act_2 having matching parameters with the values in Table I for page number, username, and password. The distance for the page number is paramValDist(10, 42) = 32, normalized into 32/(32+1) ≈ 0.97. For the username, it is paramValDist("John", "Johnny") = 2, normalized into 2/(2+1) ≈ 0.66. For the password, it is paramValDist("qwerty", "qwertyuiop") = 4, normalized into 4/(4+1) = 0.80. Thus, the parameter distance is paramDist(resids_1, resids_2) ≈ (0.97 + 0.66 + 0.80)/(0.97 + 0.66 + 0.80 + 1) ≈ 0.71.

6) Action Subclasses: We partition both parts of each action set (§ VI-C2) using the K-means, DBSCAN, or HDBSCAN clustering algorithms, setting the hyper-parameters using our Silhouette analysis (§ VI-A), and quantifying action dissimilarity using our action distance (§ VI-C3), obtaining clusters we call action subclasses. We denote by ActionSubclass(act, actSet) the unique action subclass bl from the action set actSet such that act ∈ bl. For the sake of simplicity, we denote Subclass(in, i) the action subclass corresponding to the i-th action in input in:

Subclass(in, i) =def ActionSubclass(action(in, i), ActionSet(OutputClass(in, i)))

Finally, in our study, the objectives covered by an input are:

Cover(in) =def {Subclass(in, i) | 1 ≤ i ≤ len(in)}

VII. STEP 4: PROBLEM REDUCTION

The search space for our problem (§ III-D) consists of all the subsets of the initial input set, which leads to 2^m potential solutions, where m is the number of initial inputs. For this reason, AIM integrates a problem reduction step, implemented by the Inputset Minimization Problem Reduction Operator (IMPRO) component, to minimize the search space before solving the search problem in the next step (Section VIII). We apply the following techniques to reduce the size of the search space:

• Determining redundancy: Necessary inputs (§ III-C) must be part of the solution, hence one can only investigate redundant inputs (§ VII-C). Moreover, one can restrict the search by removing the objectives already covered by necessary inputs. Finally, if a redundant input does not cover any of the remaining objectives, it will not contribute to the final coverage, and hence can be removed.
• Removing duplicates: Several inputs may have the same cost and coverage. In this case, we consider them as duplicates (§ VII-D). Thus, we keep only one and we remove the others.
• Removing locally-dominated inputs: For each input, if there exist other inputs that cover the same objectives at a same or lower cost, then the considered input is locally-dominated by the other inputs (§ VII-E) and is removed.
• Dividing the problem: We consider two inputs covering a common objective as being connected. Using this relation, we partition the search space into connected components that can be independently solved (§ VII-F), thus reducing the number of objectives and inputs to investigate at a time.

Algorithm 1 Redundancy determination technique.
1: procedure REDUNDANCY(I_necess, I_search, Coverage_obj)
2:   I_redund ← Redundant(I_search)
3:   I_necess_new ← I_search \ I_redund
4:   I_necess ← I_necess ∪ I_necess_new
5:   Coverage_obj ← Coverage_obj \ Cover(I_necess_new)
6:   I_search ← {in ∈ I_redund | Cover(in) ∩ Coverage_obj ≠ ∅}
7:   return I_necess, I_search, Coverage_obj

Before detailing these techniques, we first explain in which order they are applied (§ VII-A).

A. Order of Reduction Techniques

We want to perform first the least expensive reduction techniques, to sequentially reduce the cost of the following more expensive techniques. Determining redundancy requires O(m×c) steps, removing duplicates requires O(m²) steps, and removing locally-dominated inputs requires O(m×2^n) steps, where m is the number of inputs, c is the maximal number of objectives covered by an input, and n is the maximal number of neighbors for an input (i.e., the number of other inputs that cover an objective shared with the considered input). In our study, we assume c < m. Hence, we first determine redundancy, then remove duplicates, and remove locally-dominated inputs. Dividing the problem requires exploring neighbors and comparing non-visited inputs with visited ones, so it is potentially the most costly of these reduction techniques; hence, it is performed at the end.

After determining redundancy, the removal of already covered objectives may lead to new inputs being duplicates or locally-dominated. Moreover, the removal of duplicates or locally-dominated inputs may lead to changes in redundancy, making some previously redundant inputs necessary. Hence, these reduction techniques should be iteratively applied, until a stable output is reached. Such output can be detected by checking if inputs were removed during an iteration.

Therefore, the order is as follows. We first initialize variables (§ VII-B). Then we repeat, until no input is removed, the following steps: determine redundancy (§ VII-C), remove duplicates (§ VII-D), and remove locally-dominated inputs (§ VII-E). Finally, we divide the problem into sub-problems (§ VII-F).

B. Initializing Variables

During problem reduction, we consider three variables:

(Line 2). Among them, inputs which are necessary (§ III-C) in I_search (Line 3) for the objectives in Coverage_obj have to be included in the final solution (Line 4), otherwise some objectives will not be covered. Then, the objectives already covered by the necessary inputs are removed (Line 5). Hence, in the following, we only consider, for each remaining input in ∈ I_search, their coverage regarding the remaining objectives, i.e., Cover(in) ∩ Coverage_obj, instead of Cover(in). Finally, some redundant inputs may cover only objectives that are already covered by necessary inputs. In that case, they cannot be part of the final solution because they would contribute to the cost but not to the coverage of the objectives. Hence, we restrict without loss the search space for our problem by considering only redundant inputs that can cover the remaining objectives (Line 6).

D. Removing Duplicates

In the many-objective problem described in § III-C, inputs are characterized by their coverage (§ VI-C6) and their cost (§ III-A). Hence, two inputs with the same coverage and cost are considered duplicates. In that case, IMPRO selects one and removes the other.

E. Removing Locally-dominated Inputs

For a given input in ∈ I_search, if the same coverage can be achieved by one or several other input(s) for at most the same cost, then in is not required for the solution. Formally, we say that the input in ∈ I_search is locally dominated by the subset S ⊆ I_search, denoted in ⊑ S, if in ∉ S, Cover(in) ⊆ Cover(S), and cost(in) ≥ cost(S). In order to simplify the problem, inputs that are locally dominated should be removed from the remaining inputs I_search.

Removing a redundant input in (§ III-C) can only affect the redundancy of the inputs in I that cover objectives in Cover(in). Hence, we consider two inputs as being connected if they cover at least one common objective. Formally, we say that two inputs in_1 and in_2 overlap, denoted by in_1 ⊓ in_2, if Cover(in_1) ∩ Cover(in_2) ≠ ∅. The name of the local dominance relation comes from the fact proved in the appendix (Proposition 3) that, to determine if an input is locally-dominated, one has only to check amongst its neighbors for the overlapping relation instead of amongst all the remaining inputs, thus making this step tractable.

One concern is that removing a locally-dominated input could alter the local dominance of other inputs. Fortunately, this is not the case for local dominance.
We prove in the I , the set of the inputs that has to be part of the necess appendix (Theorem 2) that, for every locally-dominated input final solution, I , the remaining inputs to be investigated, search in ∈ I , there always exists a subset S ⊆ I of and Coverage , the objectives that remain to be covered search search obj not locally-dominated inputs such that in ⊑ S. Hence, the by subsets of I . I is initially empty. I is search necess search locallydominatedinputscanberemovedinanyorderwithout initialized as the initial input set. Coverage is initialized obj reducing coverage or preventing cost reduction, both being as the coverage of the initial input set (§ VI-C6 and § III-A). ensured by non locally-dominated inputs. Therefore, IMPRO keeps in the search space only the remaining inputs that are C. Determining Redundancy not locally-dominated: ThistechniquesispresentedinAlgorithm1.Eachtimeitis repeated, the redundancy of the remaining inputs is computed I ←{in ∈I | ∀S ⊆I :in ̸⊑S} search search searchIEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 13 bl bl bl bl bl bl bl 1 2 3 4 5 6 7 in in ✓ ✓ 1 1 in ✓ ✓ 2 in 3 ✓ ✓ in 2 in 4 in ✓ ✓ 4 in ✓ ✓ 5 in in 3 5 Fig.5. Inputscoveringoneobjectiveincommon(left)areconnectedinthecorrespondingoverlappinggraph(right). F. Dividing the Problem objective. In our case, we minimize the total cost instead of maximizingtotalvalueandensurethecoverageofeachaction Afterremovingasmanyinputsaspossible,weleveragethe objective instead of making sure that each weight capacity is overlappingrelation(§VII-E)topartitiontheremaininginputs notexceeded.Furthermore,the0-1multidimensionalknapsack intoconnectedcomponents,inadivide-and-conquerapproach. |
problem is harder than the initial knapsack problem as it does We denote G (I) the overlapping graph of the input set I, ⊓ notadmitafullypolynomialtimeapproximationscheme[44], i.e.,theundirectedgraphsuchthatverticesareinputsinI and hence the need for a meta-heuristic. edgescorrespondtotheoverlappingrelation⊓,andComps(I) For our approach to scale, we adopt a genetic algorithm the set of the connected components of G (I). ⊓ becauseitisknowntofindgoodapproximationsinreasonable For instance, in Figure 5, we represent on the left the input execution time [28] and has been widely used in software blocks covered by each input in the search space and the testing.AninputsetI ⊆C canthusbeseenasachromosome, corresponding overlapping graph on the right. The connected where each gene corresponds to an input in ∈ C, the gene components of the graph are {in ,in ,in } and {in ,in }. 1 2 3 4 5 valuebeing1ifin ∈I and0otherwise.Thoughseveralmany- Suchconnectedcomponentsareimportantbecauseweprove objective algorithms have been successfully applied within in the appendix (Proposition 8) that inputs in a connected thesoftwareengineeringcommunity,likeNSGA-III[27],[29] componentcanberemovedwithoutalteringtheredundancyof and MOSA [28] (§ II-E2), these algorithms do no entirely fit inputs in other connected components. Similarly, we prove in our needs (§ VIII-A). Hence, we propose MOCCO (Many- the appendix (Theorem 3) that the gain, i.e., the maximal cost Objective Coverage and Cost Optimizer), a novel genetic reduction from removing redundant inputs (§ III-C), can be algorithm based on two populations and summarized in Algo- independently computed on each connected component, i.e., (cid:80) rithm 2. We first explain how these populations are initialized for each input set I, gain(I)= gain(C). C∈Comps(I) (§ VIII-B). 
Then, for each generation, MOCCO performs the Hence,insteadofsearchingasolutiononI tosolveour search following standard steps: selection of the parents (§ VIII-C), initialproblem(§III-D),weuseadivide-and-conquerstrategy crossover of the parents to produce an offspring (§ VIII-D), to split the problem into more manageable sub-problems that mutation of the offspring (§ VIII-E), and update of the pop- can be independently solved on each connected component ulations (§ VIII-F) to obtain the next generation. The process C ∈Comps(I ). search We denote Coverage (C)=defCoverage ∩ Cover(C) continuesuntilaterminationcriterionismet(§VIII-G).Then, obj obj we detail how MOCCO determines the solution I to each the remaining objectives to be covered by inputs in C and C sub-problem. we formulate the sub-problem on the connected component similarly to the initial problem: A. Motivation for a Novel Genetic Algorithm minimizeF (I)=def[ω(cost(I)),f (I),...,f (I)] I⊆C C bl1 bln While our problem is similar to the multidimensional 0-1 knapsack problem, it is not exactly equivalent, since standard where Coverage (C) = {bl ,...,bl } and the minimize obj 1 n solutions to the multidimensional knapsack problem have to notation is detailed in § II-E1. We denote I a non- C ensure that, for each “weight type”, the total weight of the dominated solution with full coverage, such that F (I ) = C C items in the knapsack is below weight capacity, while we [ω(cost ),0,...,0]. min wanttoensurethat,foreachobjective,atleastoneinputinthe minimizedinputsetcoversit.Hence,standardsolutionsbased VIII. STEP5:GENETICSEARCH ongeneticsearcharenotapplicableinourcase,andwefocus Obtaining an optimal solution I to our sub-problem on genetic algorithms able to solve many-objective problems C (§ VII-F) on a connected component C would be similar to in the context of test case generation or minimization. We solving the knapsack problem, which is NP-hard [33]. 
To be haveexplainedearlier(§II-E2)thechallengesraisedbymany- more precise, our problem is equivalent to the 0-1 knapsack objective problems and how the NSGA-III [27], [29] and problem,whichconsistsinselectingasubsetofitemstomaxi- MOSA [28] genetic algorithms tackle such challenges. mizeatotalvalue,whilesatisfyingaweightcapacity.Sincewe NSGA-IIIhastheadvantageoflettinguserschoosetheparts consideramany-objectiveproblem(§III-B),wemustaddress of the Pareto front they are interested in, by providing refer- the multidimensional variant of the 0-1 knapsack problem, ence points. Otherwise, it relies on a systematic approach to where each item has many “weights”, one per considered place points on a normalized hyperplane. While this approachIEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 14 Algorithm 2 MOCCO overview. Hence, we propose a novel genetic algorithm, named 1: procedure MOCCO(C,n size,n gens,time budget) MOCCO.WetakeinspirationfromMOSA[28]byconsidering 2: time start ←getTime() two populations: 1) a population of solutions (like MOSA’s 3: Roofers ←initRoofers(C,n size) archive), called the roofers because they cover all the objec- 4: Misers ←∅ tives (§ VI-C6), and 2) a population of individuals on the 5: n←1 Paretofront(§III-D),calledthemisersbecausetheyminimize 6: stillTime ←True thecost,whilenotcoveringallobjectives.LikeNSGA-IIIand 7: while n≤n gens ∧stillTime do MOSA, MOCCO has to tackle challenges raised by many- 8: n←n+1 objective problems (§ II-E2). |
9: I 1,I 2 ←selectParents(Roofers,Misers) To address challenge 1, we take inspiration from the whole 10: I 3,I 4 ←crossover(I 1,I 2) suiteapproach[31]whichcountscoveredbranchesasasingle 11: for I ∈{I 3,I 4} do objective,bydefiningtheexposureasthesumofthecoverage 12: I ←mutate(I) objective functions (§ III-C): 13: I ←reduce(I) (cid:88) exposure(I)=def f (I) 14: if getTime()−time start >time budget then bli 15: stillTime ←False bli∈Coverage obj(C) 16: break Since f (.) is zero when the objective bl is covered, the 17: if I ∈Roofers ∪Misers then larger thbl ei exposure, the smaller the input si et coverage. As 18: continue described in § III-B, we do not use the exposure as objective 19: if Cover(I)=Coverage obj(C) then because we want to distinguish between input blocks. But we 20: Roofers ←updRoofers(Roofers,I) use it as a weight when randomly selecting a parent amongst 21: else themisers(§VIII-C),sothatthefurtherawayamiserisfrom 22: Misers ←updMisers(Misers,I) complete coverage, the less likely it is to be selected. That 23: I C ←selectSolution(Roofers) way, we aim to benefit from the large number of dimensions 24: return I C to avoid getting stuck in a local optimum and to have a better convergence rate [23], while still focusing the search on the region of interest. Since we want to deeply explore this particular region, isusefulingeneral,weareinterestedonlyinsolutionsthatare we do not need to preserve diversity over the whole Pareto close to a utopia point [0,0,...,0] (§ III-C) covering every front. Therefore, we do not use diversity operators, avoiding objective at no cost. Hence, we do not care about diversity challenge 2. over the Pareto front, and we want to explore a very specific Finally, we address challenge 3 by 1) restricting the recom- region of the search space. 
Moreover, apart from the starting bination operations and 2) tailoring them to our problem, as points, the main use of the reference points in NSGA-III is follows: to determine, after the first nondomination fronts are obtained 1) A crossover between roofers can only happen during fromNSGA-II[24],theindividualstobeselectedfromthelast the first generations, when no miser is present in the considered front, so that the population reaches a predefined population. After the first miser is generated, crossover number of individuals. This is not a problem we face because (§VIII-D)isallowedonlybetweenarooferandamiser. we know there is only one point (or several points, but at the Hence,therooferparentprovidesfullcoveragewhilethe same coordinates) in the Pareto front that would satisfy our miser parent provides almost full coverage at low cost. constraint of full coverage at minimal cost. Hence, we do not Moreover, because of how the objective functions are use the Pareto front as a set of solutions, even if we intend computed (§ III-C), the not-yet-covered objectives are to use individuals in the Pareto front as intermediate steps to likely to be covered in an efficient way. That way, we reach relevant solutions. hopetoincreaseourchancesofobtainingoffspringwith Regarding MOSA [28], trade-offs obtained from approxi- both large coverage and low cost. matingtheParetofrontareonlyusedformaintainingdiversity 2) Not only our recombination strategy is designed to be during the search, which is similar to what we intend to computationally efficient (by minimizing the number of do. But, as opposed to the use case tackled by MOSA, in introducedredundancies),butweexploitourknowledge our case determining inputs covering a given objective is of input coverage to determine a meaningful crossover straightforward. Indeed, for each objective, we can easily between parents, with inputs from one parent for one determine inputs that are able to cover it (§ III-B). 
Hence, half of the objectives and inputs from the other parent individuals ensuring the coverage of the objectives are easy for the other half. to obtain, while the hard part of our problem is to determine a combination of inputs able to cover all the objectives at a B. Population Initialization minimal cost. Hence, even if MOSA may find a reasonable solution,becauseitfocusesoninputsindividuallycoveringan During the search, because we need diversity to explore objective and not on their collective coverage and cost, it is the search space, we consider a population (with size n ≥ size unlikely to find the best solution. 2) of the least costly individuals generated so far that satisfyIEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 15 Algorithm 3 Roofer population initialization. form distribution, and P denotes the following distribution: init 1: procedure INITROOFERS(C,n size) 1 2: Roofers ←∅ P (in )=def 1+occurrence(in1) init 1 (cid:80) 1 3: while |Roofers|<n size do 1+occurrence(in2) 4: I ←∅ in2∈Inputs(bl) 5: while Cover(I)̸=Coverage (C) do where occurrence(in) denotes the number of times the input obj 6: bl ←select(Coverage obj(C)\Cover(I),P unif) in was selected in the roofer population so far. This distribu- 7: in ←select(Inputs(bl),P init) tion ensures that inputs that were not selected so far are more 8: I ←I ∪{in} likely to be selected, so that the initial roofer population can 9: I ←reduce(I) be more diverse. Note that computing reduce(I) is tractable since adding a new input can only affect the redundancy of 10: if I ̸∈Roofers then inputs that overlap with it (§ VII-E). 11: Roofers ←Roofers ∪{I} 12: return Roofers C. Parents Selection For each generation n, parents are selected as follows. If Misers(n) ̸= ∅, then one parent is selected from the miser population and one from the roofer population. Otherwise, |
full coverage. We call roofers such individuals, by analogy two distinct parents are selected from the roofer population. with covering a roof, and we denote Roofers(n) the roofer A parent I ∈Misers(n) is randomly selected from the miser 1 population at generation n. population using the following distribution: Butfocusingonlyontherooferswouldpreventustoexploit 1 the least expensive solutions obtained in the Pareto front P misers(I 1)=def (cid:80)exposure(I1) 1 while trying to minimize the connected component (§ VII-F). exposure(I2) I2∈Misers(n) Instead, inputs that do not cover all the objectives, and will wheretheexposureisdefinedin§VIII-A.Thepurposeofthis thus not be retained for the final solution, but are efficient distributionistoensurethatinputsetswithlargecoverageorat at minimizing cost, are thus useful as intermediary steps leastlargepotential(§III-C)aremorelikelytobeselected.A towards finding an efficient solution. Hence, we maintain parent I ∈ Roofers(n) is randomly selected from the roofer a second population, formed by individuals that are non- 1 population using the following distribution: dominated so far and minimize cost while not covering all 1 the objectives. We call misers such individuals, because they P (I )=def cost(I1) focus on cost reduction more than objective coverage, and we roofers 1 (cid:80) 1 denote Misers(n) the miser population at generation n. I2∈Roofers(n)cost(I2) The purpose of this distribution is to ensure that less costly The reason for maintaining two distinct populations is to input sets are more likely to be selected. restrictthecrossoverstrategy(§VIII-D)sothat(inmostcases) one parent is a roofer and one parent is a miser. Since misers D. Parents Crossover prioritize cost over coverage, a crossover with a miser tends to reduce cost. Because roofers prioritize coverage over cost, AfterselectingtwodistinctparentsI 1 andI 2,wedetailhow a crossover with a roofer tends to increase coverage. 
Hence, theyareusedtogeneratetheoffspringI 3andI 4.Ourcrossover withsuchastrategy,weintendtoconvergetowardsasolution strategyexploitsthefactthat,foreachobjectivebl tocover,it minimizing cost and maximizing coverage. iseasytoinferinputsinInputs(bl)abletocoverbl (§III-B). For each crossover, we randomly split the objectives in two Forbothpopulations,wewanttoensurethattheindividuals halves O and O such that O ∪O =Coverage (C) and are reduced (§ III-C), i.e., they contain no redundant inputs. O ∩O 1 = ∅. W2 e consider he1 re a 2 balanced split,ob toj prevent 1 2 Hence, during the initialization and updates of these popula- cases where one parent massively contributes to offspring tions, we ensure that removal steps are performed. Because, coverage. as detailed in the following, the number of redundant inputs Then,weusethissplittodefinethecrossover:inputsinthe obtained for each generation is small, the optimal order of connected component C are split between S =defInputs(O ), 1 1 removal steps can be exhaustively computed. We denote by the ones covering the first half of the objectives, and reduce(I)theinputsetI aftertheseremovalsteps.Thislimits S =defInputs(O ),theonescoveringthesecondhalf.Notethat 2 2 the exploration space, since non-reduced input sets are likely some inputs may cover objectives both in O and O , so we 1 2 to have a large cost and hence to be far away for the utopia call the edge of the split the intersection S ∩S . Because we 1 2 point of full coverage at no cost we intend to focus on. assume both parents are reduced, this means that redundant Themiserpopulationisinitiallyempty,i.e.,Misers(0)=def∅, inputs can only happen at the edge of the split. 
The genetic as misers are generated during the search through mutations material of both parents I 1 and I 2 is then split in two parts: (§VIII-F).WedetailinAlgorithm3howtherooferpopulation inputs in S 1 and inputs in S 2, as follows: Roofers(0) is initialized, where select(X,P) randomly select I =(I ∩S )∪(I ∩S ) 1 1 1 1 2 one element in X using distribution P, P unif denotes the uni- I 2 =(I 2∩S 1)∪(I 2∩S 2)IEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 16 status quo, to increase the odds of exploring new regions of in in in in 1 2 1 2 the search space. in in + → in , in For each candidate I to the miser population, we compute 3 3 3 3 1 its fitness vector F (I ) (§ VII-F). Then, for each I ∈ C 1 2 in in in in 4 5 5 4 Misers(n) we compare F (I ) and F (I ). If I ≻I in C 1 C 2 2 1 the sense of Pareto-dominance (§ II-E1), then we stop the Fig.6. CrossoverExample process and I is rejected. If I ≻I , then I is removed from 1 1 2 2 Misers(n). That way, we ensure that the miser population containsonlynon-dominatedindividuals.Aftercompletingthe Then, these parts are swapped to generated the offspring, comparisons, if the process was not stopped, then I itself is 1 as follows: non-dominated, so it is added to the miser population. In that I =def(I ∩S )∪(I ∩S ) 3 1 1 2 2 case, F (I ) is stored for future comparisons. I =def(I ∩S )∪(I ∩S ) C 1 4 2 1 1 2 Propertiessatisfiedbyroofersandmisersaredetailedinthe For illustration purpose we consider in Figure 6 a small appendix (Theorems 4 and 5). connected component C = {in ,in ,in ,in ,in } and the 1 2 3 4 5 following split for the inputs: Inputs(O ) = {in ,in ,in } 1 1 2 3 G. Termination and Inputs(O ) = {in ,in ,in }. The edge of the split 2 3 4 5 |
is Inputs(O ) ∩ Inputs(O ) = {in }. The offspring of MOCCOrepeatstheprocessuntilitreachesafixednumber 1 2 3 parents I = {in ,in ,in } and I = {in ,in } is I = ofgenerationsorexhaustsagiventimebudget.Then,amongst 1 1 3 4 2 2 5 3 {in ,in ,in } and I ={in ,in ,in }. the least costly roofers (several may have the same cost), it 1 3 5 4 2 3 4 randomly selects one individual I as solution to our sub- C problem (§ VII-F). I covers all the objectives and, amongst C E. Offspring Mutation the input sets covering those objectives, I has the smallest C EachgeneofanoffspringI correspondstoaninputin ∈C, cost encountered during the search. the gene value being 1 if in ∈I and 0 otherwise. A mutation happens when this gene value is changed, hence mutation IX. STEP6:DATAPOST-PROCESSING randomly adds or removes one input from an offspring. The set I was initially empty (§ VII-B) and then The crossover (at the edge of the split) and mutation (when necess accumulated necessary inputs each time redundancy was de- aninputisadded)stepsmayresultininputsbeingredundantin termined (§ VII-C). After removing inputs and reducing the theoffspring.Sinceredundanciesintheconnectedcomponent objectives to be covered accordingly (Section VII), IMPRO were already reduced by IMPRO (§ VII-F) and changes in obtainedasetI ofremaininginputsandobjectives.Then, redundancy could happen only amongst neighbors (for the search IMPRO divided the remaining problem into subproblems overlappingrelation)ofthechangedinputs(§VII-E),weonly (§ VII-F), one for each connected component C. Finally, for expect a few redundant inputs. Therefore, removal steps can each connected component C, the corresponding subproblem be exhaustively computed to replace each offspring I by its wassolvedusingMOCCO(SectionVIII),obtainingthecorre- reduced counterpart reduce(I) (§ VIII-B). sponding minimized component I . At the end of the search, C AIM merges inputs from each minimized component I with C F. 
Population Update the necessary inputs I to obtain a minimized input set necess I as solution to our initial problem (§ III-D): Wedetailnowtheoffspringisusedtoobtainthepopulations final Roofers(n+1) and Misers(n+1) at generation n+1. I =defI ∪ (cid:91) I final necess C First, we discard any offspring that is a duplicate of an individual already present in either Roofers(n) or Misers(n). C∈Comps(Isearch) Indeed, the duplication of individuals would only result in X. EMPIRICALEVALUATION altering their weight for being selected as parents (thus, the intended procedure), reducing roofer diversity, and increasing In this section, we report our results on the assessment the number of miser comparisons (as detailed below). of our approach with two Web systems. We investigate the If a remaining offspring I covers all the objectives in following Research Questions (RQs): 1 Coverage (C), then it is a candidate for the roofer pop- RQ1 What is the vulnerability detection effectiveness obj ulation. Otherwise, it is a candidate for the miser population. of AIM, compared to alternatives? This research For each candidate I for the roofer population, question aims to determine if and to what extent 1 MOCCO computes its cost. If cost(I ) ≤ max AIMreducestheeffectivenessofMSTbycomparing 1 {cost(I) | I ∈Roofers(n)}, then it selects the most the vulnerabilities detected between the initial and costly roofer I (or, in case of a tie, one of the most costly the minimized input sets. Also, we further compare 2 roofers), removes I from the population, and adds I . the vulnerability detection rate of AIM with simpler 2 1 Otherwise, I is rejected. Note that we chose ≤ instead of alternative approaches. 1 < for the above cost criterion because, in case of a tie, we RQ2 What is the input set minimization effectiveness prefer to evolve the population instead of maintaining the of AIM, compared to alternatives? 
This researchIEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 17 questionaimstoanalyzethemagnitudeofminimiza- CWE 863 and 280) in cases where the CVE report denotes tion in terms of the number of inputs, cost (§ III-A a general vulnerability type (e.g., CWE 863 for incorrect and V-A), and execution time for the considered authorization [51]), though a more precise identification (e.g., MRs, both for AIM and alternative approaches. CWE280concerningimproperhandlingofprivilegesthatmay result in incorrect authorization) could be applied. Since the 12 considered vulnerabilities are associated to nine different A. Experiment Design CWE IDs and each vulnerability has a unique CWE ID, we 1) SubjectsoftheStudy: ToassessourapproachwithMRs can conclude that the selected subjects cover a diverse set of and input sets that successfully detect real-world vulnerabili- vulnerabilitytypes,thusfurtherimprovingthegeneralizability ties,werelyonthesameinputsetsandsettingsasMST-wi[6]. of our results. The targeted Web systems under test are Jenkins [45] and ThelastcolumninTablesIIandIIIlistsidentifiersforinputs Joomla [46]. Jenkins is a leading open source automation which were able to trigger the vulnerability using one of the server while Joomla is a content management system (CMS) correspondingMRs.Forinstance,onecandetectvulnerability that relies on the MySQL RDBMS and the Apache HTTP CVE-2018-1999046 in Jenkins by running the MR written server. We chose these Web systems because of their plug-in for CWE 200 with inputs 2 or 116. For the first two Joomla |
architecture and Web interface with advanced features (such vulnerabilities, two inputsneed to be present atthe same time as Javascript-based login and AJAX interfaces), which makes intheinputsetinordertotriggerthevulnerabilitybecause,as Jenkins and Joomla good representatives of modern Web opposed to most MRs, the corresponding MRs requires two systems. source inputs to generate follow-up inputs. For instance, to Further, these systems present differences in their output detect vulnerability CVE-2018-17857 in Joomla, one needs interfaceandinputtypesthat,sinceinputsandoutputsarekey input 1 and at least one input amongst inputs 22, 23, 24, or drivers for our approach, contribute to improve the generaliz- 25. ability of our results. Concerning outputs, Joomla is a CMS 3) AIMconfigurations: AIMcanbeconfiguredindifferent where Web pages tend to contain a large amount of static ways to obtain a minimized input set from an initial input set. text that differ in every page, while Jenkins provides mainly Such a configuration consists in a choice of distance function structuredcontentthatmaycontinuouslychange(e.g.,seconds and algorithm for output clustering (§ VI-B), and a choice of fromthelastexecutionofaJenkinstask).Theinputinterfaces algorithm for action clustering (§ VI-C). of Jenkins are mainly short forms and buttons whereas the For the sake of conciseness, in Tables V to XI, we denote inputs interfaces of Joomla often include long text areas and eachconfigurationbythreeletters,whereLandBrespectively several selection interfaces (e.g., for tags annotation). denote the Levenshtein and Bag distances, and K, D, and H The selected versions of Jenkins and Joomla (i.e., 2.121.1 respectively denote the K-means, DBSCAN, and HDBSCAN and 3.8.7, respectively) are affected by known vulnerabilities clustering algorithms. For instance, BDH denotes that Bag thatcanbetriggeredfromtheWebinterface;wedescribethem distance and DBSCAN were used for output clustering, and in § X-A2. 
then HDBSCAN for action clustering. These notations are The input set provided in the MST-wi’s replication package summarized in Table IV. has been collected by running Crawljax with, respectively, AIMperformsSilhouetteanalysis(§VI-A)todeterminethe four users for Jenkins and six users for Joomla having dif- hyper-parameters required for these clustering algorithms. We ferent roles, e.g., admin. For each role, Crawljax has been consideredthesamerangesofvaluesforthehyper-parameters executedforamaximumof300minutes,topreventthecrawler in both output clustering (§ VI-B) and action clustering steps fromrunningindefinitely,therebyavoidingexcessiveresource (§ VI-C). For K-means, we select the range [1,70] for the consumption. Further, to exercise features not reached by number of clusters k. In the case of DBSCAN, the range for Crawljax, a few additional Selenium [47]-based test scripts the distance threshold ϵ is [2,10] for Jenkins and [3,15] for (four for Jenkins and one for Joomla) have been added to the Joomla. The range is larger for Joomla because Joomla has a input set. In total, we have 160 initial inputs for Jenkins and larger number of Web pages than Jenkins. Finally, the range 148forJoomla,whichareallassociatedtoauniqueidentifier. for the minimum number of neighbours n is [1,5] for both 2) Security Vulnerabilities: The replication package for systems. For HDBSCAN, the range for the minimum number MST-wi[48]includes76metamorphicrelations(MRs).These n of individuals required to form a cluster is [2,8] for both MRs can identify nine vulnerabilities in Jenkins and three systems. vulnerabilitiesinJoomlausingtheinitialinputset,asdetailed We also determine the hyper-parameters for the genetic in Tables II and III, respectively. search(SectionVIII).Relatedworkonwholetestsuitegener- For both tables, the first column contains, when available, ationsuccessfullyreliesonapopulationof80individuals[31]. 
theCVEidentifiersoftheconsideredvulnerabilities.Thepass- Since we reduced the problem (Section VII) before applying word aging problem (for both Jenkins and Joomla) and weak MOCCO independently to each connected component, which password (for Jenkins) are vulnerabilities that were identified includesfewerinputsthanthewholetestsuitegeneration[31], during the MST-wi study [6] and therefore do not have CVE we experimented with a lower population size of 20 individ- identifiers. The second column provides a short description uals. Additionally, we set the number of generations for the of the vulnerabilities. The third column reports the CWE genetic algorithm to 100, similar to the value considered in ID for each vulnerability. We present two CWE IDs (e.g., previous work [31].IEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 18 TABLEII JENKINSVULNERABILITIES. CVE Description VulnerabilityType InputIdentifiers CVE-2018- InthefilenameparameterofaJobconfiguration,users CWE 22 160 1000406 withJob/Configurepermissionscanspecifyarelative [49] pathescapingthebasedirectory.Suchpathcanbeused to upload a file on the Jenkins host, resulting in an arbitraryfilewritevulnerability. CVE-2018- AsessionfixationvulnerabilitypreventsJenkinsfrom CWE 384 112,113,114 1000409 invalidatingtheexistingsessionandcreatinganewone [50] whenausersignedupforanewuseraccount. CVE-2018- JenkinsdoesnotperformapermissioncheckforURLs CWE 280,CWE 863 116,157 1999003 handlingcancellationofqueuedbuilds,allowingusers |
[51] withOverall/Readpermissiontocancelqueuedbuilds. CVE-2018- Jenkins does not perform a permission check for the CWE 863,CWE 285 2,116 1999004 URLthatinitiatesagentlaunches,allowinguserswith [52] Overall/Readpermissiontoinitiateagentlaunches. CVE-2018- A exposure of sensitive information vulnerability al- CWE 200,CWE 668 33,55,57,61,62,63,64,75,107,108,110,135,136,156,160 1999006 lowsattackerstodeterminethedateandtimewhena [53] pluginwaslastextracted. CVE-2018- UserswithOverall/Readpermissionareabletoaccess CWE 200 2,116 1999046 theURLservingagentlogsontheUIduetoalackof [54] permissionchecks. CVE-2020- Jenkins does not set Content-Security-Policy headers CWE 79 1,18,19,23,26,75,156,158 2162[55] forfilesuploadedasfileparameterstoabuild,resulting inastoredXSSvulnerability. Password Jenkinsdoesnotintegrateanymechanismformanaging CWE 262 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,18,19,20,22,23, aging passwordaging;consequently,usersaren’tincentivized 24,25,26,27,28,30,32,110,33,34,35,38,39,41,42,43,44,45, problem in toupdatepasswordsperiodically. 46,47,58,61,62,64,65,66,69,70,71,73,74,75,104,108,116, Jenkins 117,118,119,120,121,122,123,124,125,126,127,128,129,130, 131,133,134,143,145,146,159,160 Weak Jenkins does not require users to have strong pass- CWE 521 112,113,114 password in words,whichmakesiteasierforattackerstocompro- Jenkins miseuseraccounts. TABLEIII JOOMLAVULNERABILITIES. CVE Description VulnerabilityType InputIdentifiers CVE-2018- Inadequate checks allow users to see the names of CWE 200 37with22,23,24,25,50 11327[56] tags that were either unpublished or published with restrictedviewpermission. CVE-2018- Inadequatechecksonthetagsearchfieldscanleadto CWE 863 1with22,23,24,25 17857[57] anaccesslevelviolation. Password Joomladoesnotintegrateanymechanismformanaging CWE 262 2,3,5,6,7,8,11,12,15,17,20,22,23,24,25,26,27,28,29,30, aging passwordaging;consequently,usersaren’tincentivized 66,110,144,146 problem in toupdatepasswordsperiodically. 
Joomla

4) Baselines: We identify the following baselines against which to compare AIM configurations.

failure behaviors than inputs further away from each other. Thus, ART generates inputs widely spread across the input domain, in order to find failures with fewer inputs than RT [59]. ART is also commonly used as a baseline or approach in the context of test suite minimization [16], [38]. It is similar to our action clustering step (§ VI-C), since it is based on partitioning the input space and generating new inputs in blocks that are not already covered [58]. So, to perform ART, we use AIM to perform action clustering directly on the initial input set instead of output classes. Then, for each cluster, we randomly select one input that covers it. Finally, we gather the selected inputs to obtain an input set for the ART baseline. Again, we repeat this process for each AIM run. Since we considered the K-means, DBSCAN, and HDBSCAN clustering algorithms, there are three variants of this baseline.

In Tables V to XI, R denotes the random search baseline while AK, AD, and AH denote the ART baselines, using re-

A 2016 survey reported that 57% of metamorphic testing (MT) work used Random Testing (RT) to generate source inputs [5] and, in 2021, 84% of the publications related to MT adopted traditional or improved RT methods to generate source inputs [58]. In the context of test suite minimization, random search is a straightforward baseline against which to compare AIM that is commonly used [21], [22]. This baseline consists of randomly selecting a given number of inputs from the initial input set. This number is determined based on AIM runs. Each AIM run is performed using 18 different configurations (§ X-A3), each leading to a different minimized input set. So, for a fair comparison, we configure random search to select n inputs from the initial input set, where n is the size of the largest input set produced by the 18 AIM configurations. We repeat this process, for each AIM run, to obtain the same number of input sets for random search as for AIM.
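This random search baseline can be sketched in a few lines. The sketch below is illustrative only: the input identifiers and minimized set sizes are made-up placeholders, not values from the study.

```python
import random

def random_search_baseline(initial_inputs, minimized_set_sizes, runs=50, seed=0):
    """For each run, select n inputs at random (without replacement) from
    the initial input set, where n is the size of the largest minimized
    input set produced by the AIM configurations."""
    rng = random.Random(seed)
    n = max(minimized_set_sizes)
    return [rng.sample(initial_inputs, n) for _ in range(runs)]

# Hypothetical example: 160 input identifiers (as for Jenkins) and three
# made-up minimized set sizes coming from different AIM configurations.
jenkins_inputs = list(range(1, 161))
baseline_sets = random_search_baseline(jenkins_inputs, [38, 52, 75], runs=50)
```

Each of the 50 returned sets then plays the same role as one minimized input set produced by an AIM run.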
spectivelytheK-means,DBSCAN,andHDBSCANclustering algorithms. These notations are summarized in Table IV. Moreover, Adaptive Random Testing (ART) was proposed to enhance the performance of RT. It is based on the intuition 5) EvaluationMetrics: Toreducetheimpactofrandomness that inputs close to each other are more likely to have similar inourexperiment,eachconfigurationandbaselinewasrun50IEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 19 TABLEIV have to take into account the AIM execution time required to AIMCONFIGURATIONSANDBASELINES. minimizetheinitialinputset.Thus,theinputsetminimization effectiveness is quantified as the sum of AIM execution time Outputclustering Actionclustering Distance Algorithm Algorithm to obtain the minimized input set plus MRs execution time LKK Levenshtein K-means K-means with the minimized input set, divided by that of the initial LKD Levenshtein K-means DBSCAN LKH Levenshtein K-means HDBSCAN input set. However, since MR execution time is usually large, LDK Levenshtein DBSCAN K-means we cannot collect the time required to execute our 76 MRs LDD Levenshtein DBSCAN DBSCAN LDH Levenshtein DBSCAN HDBSCAN on all the input sets generated by all AIM configurations LHK Levenshtein HDBSCAN K-means (1800 runs in total, resulting from 18 configurations × 50 |
LHD Levenshtein HDBSCAN DBSCAN LHH Levenshtein HDBSCAN HDBSCAN repetitions×2casestudysubjects).Forthisreason,werelyon AIMconfigurations BKK Bag K-means K-means threeadditionalmetrics,thatcanbeinferredwithoutexecuting BKD Bag K-means DBSCAN BKH Bag K-means HDBSCAN MRs, to identify (as detailed in the next paragraph) the “best” BDK Bag DBSCAN K-means configuration. Then, we report on the input set minimization BDD Bag DBSCAN DBSCAN BDH Bag DBSCAN HDBSCAN effectiveness obtained by such configuration. To make the BHK Bag HDBSCAN K-means experiment feasible, amongst the 50 minimized input sets of BHD Bag HDBSCAN DBSCAN BHH Bag HDBSCAN HDBSCAN the best configuration, we select one which has a median cost R × Random (§ III-A and V-A), so that is representative of the 50 runs. AK × K-means Baselines AD × DBSCAN To determine the “best” AIM configuration, we consider AH × HDBSCAN the size of the generated input set (i.e., the number of inputs in it), its cost (§ III-A and V-A), and the time required by the configuration to generate results. Input set size is times on each system, obtaining one minimized input set for a direct measure of effectiveness, while cost is an indirect each run. Moreover, for the sake of performance analysis, we measure, specific to our approach, that is linearly correlated also recorded the execution time required by AIM to generate withMRexecutiontime(§II-B).Forthesethreemetrics(size, minimized input sets. The purpose of the metrics we use for cost, AIM execution time), we compare, for each system, the RQ1 and RQ2 is to determine the “best” configuration to AIM configurations and baselines leading to full vulnerability run AIM on a given system. But one cannot know, before coverage. 
More precisely, for each metric, we denote by M i experimenting with the target system, which configuration the value of the metric obtained for the ith approach (AIM wouldbethe“best”forthissystem.Basedontheresultsfrom configurations or baseline); the 50 runs of approach i leading Jenkins and Joomla (see § X-A1), we determine the overall to a sample containing 50 data points. “best” configuration, to be recommended as default for a new To compare two samples for M and M , we perform 1 2 system. a Mann-Whitney-Wilcoxon test, which is recommended to For RQ1, we consider the vulnerabilities described in Ta- assess differences in stochastic order for software engineering bleIIforJenkinsandinTableIIIforJoomla.Foreachsystem, data analysis [60]. This is a non-parametric test of the null we consider that a vulnerability is detected by an input set if hypothesis that P(M >M ) = P(M <M ), i.e., M and 1 2 1 2 1 it contains at least one input able to trigger this vulnerability. M are stochastically equal [61]. Hence, from M and M 2 1 2 For the first two Joomla vulnerabilities requiring pairs of samples, we obtain the p-value p indicating how likely is the inputs, the vulnerability is detected if both inputs are present observation of these samples, assuming that M and M are 1 2 in the input set. Hence, for each configuration or baseline, stochastically equal. If p ≤ 0.05, we consider it is unlikely our metric is the vulnerability detection rate (VDR), i.e., the that M and M are stochastically equal. 1 2 totalnumberofvulnerabilitiesdetectedbytheminimizedinput To assess practical significance, we also consider a metric sets obtained for the 50 runs, divided by the total number for effect size. An equivalent reformulation of the null hy- of vulnerabilities detected by the corresponding initial input pothesis is P(M >M )+0.5×P(M =M ) = 0.5, which 1 2 1 2 sets. 
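The Vargha and Delaney A statistic can be computed directly from two samples with the counting procedure described here. A minimal sketch, using made-up metric values:

```python
def vargha_delaney_a(m1, m2):
    """Estimate P(M1 > M2) + 0.5 * P(M1 = M2) by counting, over all
    pairs, how often a value from m1 exceeds one from m2 (ties = 0.5)."""
    wins = sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in m1 for y in m2)
    return wins / (len(m1) * len(m2))

# Made-up samples of a metric (e.g., input set sizes) for two approaches.
sizes_1 = [40, 42, 44]
sizes_2 = [44, 50, 52]
a = vargha_delaney_a(sizes_1, sizes_2)  # < 0.5: approach 1 tends to be smaller
delta = 2 * a - 1                       # Cliff's delta
```

The p-value itself would come from a Mann-Whitney-Wilcoxon test, e.g., scipy.stats.mannwhitneyu, applied to the same two samples.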
If VDR is 100%, then we say the configuration or can be estimated by counting in the samples the number baselineleadstofullvulnerabilitycoverage.Theoverall“best” of times a value for M is larger than a value for M 1 2 configuration for Jenkins and Joomla, regarding vulnerability (ties counting for 0.5), then by dividing by the number of detection, should have a large VDR for both systems, ideally comparisons. That way, we obtain the Vargha and Delaney’s 100%.Foreachsystem,werejectconfigurationsandbaselines A metric[61]which,forthesakeofconciseness,wesimply 12 which do not lead to full vulnerability coverage, and then denote A in Tables VI to XI. A is considered to be a robust compare the remaining ones to answer RQ2. metric for representing effect size in the context of non- RQ2 aims at evaluating the effectiveness of AIM in mini- parametric methods [62]. A ranges from 0 to 1, where A=0 mizing the initial input set. Our goal is to identify the AIM indicates that P(M 1 <M 2) = 1, A = 0.5 indicates that configuration generating minimized input sets leading to the P(M 1 >M 2) = P(M 1 <M 2), and A = 1 indicates that minimalexecutiontimeforthe76consideredMRs,acrossthe P(M 1 >M 2)=1. two case studies, and reporting on the execution time saved, B. Empirical Results compared to executing MST-wi on the full input set. But, to have a fair comparison between MRs execution time obtained We first describe the system configurations used to obtain respectively with the initial and minimized input sets, we our results (§ X-B1). To answer RQ1, we report the VDRIEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 20 TABLEV worst (amongst baselines and AIM configurations) regard- COVERAGEOFTHEJENKINSANDJOOMLAVULNERABILITIESAFTER50 ing vulnerability coverage. After investigation, the minimized |
RUNSOFEACHCONFIGURATIONANDBASELINE. input sets acquired for AD are much smaller compared to Vulnerability SystemUnderTest those obtained for the other baseline methods. These results Coverage Jenkins Joomla cannotbeexplainedbythehyper-parameterasweemployeda Configurations Nbofdetected Nbofdetected orbaselines vulnerabilities VDR vulnerabilities VDR large range of values (§ X-A3). We conjecture that DBSCAN LKK 450 100.0% 146 97.3% merges together many action clusters even when the URLs LKD 371 82.4% 50 33.3% LKH 379 84.2% 150 100.0% involved in these actions are distinct. LDK 450 100.0% 150 100.0% On the other hand, configurations using K-means for the LDD 400 88.9% 50 33.3% LDH 400 88.9% 50 33.3% action clustering step always lead to full vulnerability LHK 450 100.0% 100 66.7% coverage for Jenkins and lead to the largest vulnerabil- LHD 403 89.6% 100 66.7% LHH 447 99.3% 100 66.7% ity coverage for Joomla. This is confirmed by the results BKK 450 100.0% 133 88.7% obtained for the AK baseline, which only uses K-means on BKD 403 89.6% 50 33.3% BKH 410 91.1% 150 100.0% the input space and performs the best (amongst baselines) BDK 450 100.0% 150 100.0% regarding vulnerability coverage. Indeed, even if this config- BDD 338 75.1% 50 33.3% BDH 450 100.0% 50 33.3% uration does not lead to full vulnerability coverage, it is very BHK 450 100.0% 100 66.7% close. In fact, even if it tends to perform worse than AIM BHD 404 89.8% 100 66.7% BHH 428 95.1% 100 66.7% configurations that use K-means for action clustering, it tends R 339 75.3% 74 49.3% to perform better than AIM configurations that do not use AK 447 99.3% 125 83.3% AD 77 17.1% 22 14.7% K-means for action clustering. The success of K-means in AH 350 77.8% 68 45.3% achieving better vulnerability coverage on these datasets can be attributed to its ability to handle well-separated clusters. In our case, these clusters are well-separated because of the associated with the obtained minimized input sets (§ X-B2). 
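The VDR figures in Table V follow from a one-line computation: detections are accumulated over the 50 runs and divided by the number of detectable vulnerabilities times the number of runs. A small sketch, using the counts from Tables II, III, and V (the function name is ours, not from the replication package):

```python
def vulnerability_detection_rate(detected_total, n_vulnerabilities, runs=50):
    """Total vulnerabilities detected across all runs, divided by the
    total detectable over those runs (vulnerabilities x runs)."""
    return detected_total / (n_vulnerabilities * runs)

# Jenkins has 9 detectable vulnerabilities, Joomla 3 (Tables II and III).
lkk_jenkins = vulnerability_detection_rate(450, 9)  # 450 / (9 * 50): full coverage
lkd_jenkins = vulnerability_detection_rate(371, 9)  # Table V reports 82.4%
lkh_joomla = vulnerability_detection_rate(150, 3)   # full coverage for Joomla
```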
distinct URLs occurring in the datasets. Then, to answer RQ2, we describe the effectiveness of the Finally, no baseline reached full vulnerability coverage. input set reduction (§ X-B3). On top of the already mentioned AK and AD baselines, AH 1) System Configurations: We performed all the experi- performed similarly to random testing (R), indicating that the mentsonasystemwiththefollowingconfigurations:avirtual effectoftheHDBSCANalgorithmforactionclusteringisneu- machineinstalledonprofessionaldesktopPCs(DellG77500, tral. The only AIM configuration that performed worse than RAM16Gb,Intel(R)Core(TM)i9-10885HCPU@2.40GHz) random testing is BDD, combining DBSCAN (as mentioned and terminal access to a shared remote server with Intel(R) before, the worst clustering algorithm regarding vulnerability Xeon(R) Gold 6234 CPU (3.30GHz) and 8 CPU cores. detection) for both output and action clustering with Bag dis- 2) RQ1 - Detected Vulnerabilities: Results are presented tance.OnlyLDKandBDKleadtofullvulnerabilitycoverage in Table V. Configurations and baselines that lead to full for both Jenkins and Joomla, and hence are our candidate vulnerabilitycoverageforbothsystemsareingreen,inyellow “best” configurations in terms of VDR. The combination of if they lead to full vulnerability coverage for one system, and DBSCANandK-meanswasveryeffectiveonourdatasetsince in red if they never lead to full vulnerability coverage. DBSCAN was able to identify dense regions of outputs and First, note that the choice of distance function for output K-means allowed for further refinement, forming well-defined clustering does not have a significant impact on vulner- action clusters based on URLs. ability coverage. 
Indeed, apart from LDH and BDH, the 3) RQ2 - Input Set Reduction Effectiveness: To answer resultsusingtheLevenshteinorBagdistancesarefairlysimilar RQ2 on the effectiveness of minimization, we compare the (e.g., both LKK and BKK discover 450 vulnerabilities in input set reduction of baselines and configurations for both Jenkins) and seem to only depend on the choice of clustering Jenkins and Joomla. Amongst them, only the LKK, LDK, algorithms. This indicates that the order of words in a Web LHK, BKK, BDK, BDH, and BHK configurations lead to page is not a relevant distinction when performing clustering full vulnerability coverage for Jenkins. Their input set sizes for vulnerability coverage. Considering now LDH and BDH, are compared in Table VI, their costs in Table VII, and their takingintoaccounttheorderofwordscanevenbedetrimental, AIM execution time in Table VIII. Similarly, only the LKH, since they perform equally poorly for Joomla but they differ LDK,BKH,andBDKconfigurationsleadtofullvulnerability for Jenkins, where only BDH leads to full vulnerability coverage for Joomla. Their input set sizes are compared in coverage. TableIX,theircostsinTableX,andtheirAIMexecutiontime Second, the choice of clustering algorithm for action inTableXI.Configurationswithfullvulnerabilitycoveragefor clusteringseemstobethemainfactordeterminingvulner- both Jenkins and Joomla (i.e., LDK and BDK) are in bold. abilitycoverage.ConfigurationsusingDBSCANasalgorithm Inthesesixtables,configurationsineachrowarecompared for the action clustering step never lead to full vulnerability with configurations in each column. p denotes the statistical coverage for any system. This indicates that this clustering significance and A the effect size (§ X-A5). When p>0.05, |
algorithm poorly fits the data in the input space. This is we consider the metric values obtained from the two con- confirmed by the results obtained for the AD baseline, which figurations not to be significantly different, and hence the only uses DBSCAN on the input space and performs the cell is left white. Otherwise, the cell is colored, either inIEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 21 TABLEVI COMPARISONOFJENKINSINPUTSETSIZESFORCONFIGURATIONSWITHFULLVULNERABILITYCOVERAGE. sizes LKK LDK LHK BKK BDK BDH BHK p 5.4e−15 5.1e−1 4.2e−1 8.8e−1 3.1e−20 7.2e−1 LKK A 0.05 0.54 0.55 0.49 1.0 0.48 p 5.4e−15 1.8e−17 1.4e−15 1.8e−14 3.2e−20 3.8e−14 LDK A 0.95 0.99 0.96 0.94 1.0 0.94 p 5.1e−1 1.8e−17 9.8e−1 3.4e−1 3.0e−20 1.6e−1 LHK A 0.46 0.01 0.5 0.44 1.0 0.42 p 4.2e−1 1.4e−15 9.8e−1 4.1e−1 3.2e−20 2.1e−1 BKK A 0.45 0.04 0.5 0.45 1.0 0.43 p 8.8e−1 1.8e−14 3.4e−1 4.1e−1 3.2e−20 6.9e−1 BDK A 0.51 0.06 0.56 0.55 1.0 0.48 p 3.1e−20 3.2e−20 3.0e−20 3.2e−20 3.2e−20 3.2e−20 BDH A 0.0 0.0 0.0 0.0 0.0 0.0 p 7.2e−1 3.8e−14 1.6e−1 2.1e−1 6.9e−1 3.2e−20 BHK A 0.52 0.06 0.58 0.57 0.52 1.0 TABLEVII COMPARISONOFJENKINSINPUTSETCOSTSFORCONFIGURATIONSWITHFULLVULNERABILITYCOVERAGE. costs LKK LDK LHK BKK BDK BDH BHK p 2.5e−16 5.9e−5 1.5e−2 4.4e−3 4.1e−18 5.4e−3 LKK A 0.02 0.73 0.64 0.67 1.0 0.66 p 2.5e−16 7.0e−18 9.5e−18 7.0e−18 4.1e−18 7.0e−18 LDK A 0.98 1.0 1.0 1.0 1.0 1.0 p 5.9e−5 7.0e−18 1.0e−1 2.4e−1 4.1e−18 1.4e−1 LHK A 0.27 0.0 0.41 0.43 1.0 0.41 p 1.5e−2 9.5e−18 1.0e−1 5.8e−1 4.1e−18 6.1e−1 BKK A 0.36 0.0 0.59 0.53 1.0 0.53 p 4.4e−3 7.0e−18 2.4e−1 5.8e−1 4.1e−18 8.2e−1 BDK A 0.33 0.0 0.57 0.47 1.0 0.49 p 4.1e−18 4.1e−18 4.1e−18 4.1e−18 4.1e−18 4.1e−18 BDH A 0.0 0.0 0.0 0.0 0.0 0.0 p 5.4e−3 7.0e−18 1.4e−1 6.1e−1 8.2e−1 4.1e−18 BHK A 0.34 0.0 0.59 0.47 0.51 1.0 TABLEVIII COMPARISONOFJENKINSAIMEXECUTIONTIMESFORCONFIGURATIONSWITHFULLVULNERABILITYCOVERAGE. 
times LKK LDK LHK BKK BDK BDH BHK p 1.5e−6 1.0e−11 2.0e−2 3.8e−5 3.1e−18 1.3e−9 LKK A 0.78 0.89 0.63 0.74 1.0 0.85 p 1.5e−6 1.2e−5 3.9e−3 5.4e−1 2.6e−18 1.1e−3 LDK A 0.22 0.75 0.33 0.54 1.0 0.69 p 1.0e−11 1.2e−5 1.7e−9 1.4e−3 3.2e−17 8.7e−1 LHK A 0.11 0.25 0.15 0.32 0.98 0.49 p 2.0e−2 3.9e−3 1.7e−9 8.0e−3 3.0e−18 6.7e−7 BKK A 0.37 0.67 0.85 0.65 1.0 0.79 p 3.8e−5 5.4e−1 1.4e−3 8.0e−3 1.5e−17 1.1e−2 BDK A 0.26 0.46 0.68 0.35 0.99 0.65 p 3.1e−18 2.6e−18 3.2e−17 3.0e−18 1.5e−17 6.3e−16 BDH A 0.0 0.0 0.02 0.0 0.01 0.04 p 1.3e−9 1.1e−3 8.7e−1 6.7e−7 1.1e−2 6.3e−16 BHK A 0.15 0.31 0.51 0.21 0.35 0.96 TABLEIX TABLEX COMPARISONOFJOOMLAINPUTSETSIZESFORCONFIGURATIONSWITH COMPARISONOFJOOMLAINPUTSETCOSTSFORCONFIGURATIONSWITH FULLVULNERABILITYCOVERAGE. FULLVULNERABILITYCOVERAGE. sizes LKH LDK BKH BDK costs LKH LDK BKH BDK p 4.8e−15 1.3e−1 2.9e−16 p 1.3e−14 6.9e−1 3.5e−15 LKH LKH A 0.06 0.42 0.06 A 0.06 0.48 0.05 p 4.8e−15 1.1e−14 4.5e−16 p 1.3e−14 1.3e−14 7.0e−15 LDK LDK A 0.94 0.94 0.94 A 0.94 0.94 0.94 p 1.3e−1 1.1e−14 7.4e−16 p 6.9e−1 1.3e−14 3.6e−15 BKH BKH A 0.58 0.06 0.06 A 0.52 0.06 0.05 p 2.9e−16 4.5e−16 7.4e−16 p 3.5e−15 7.0e−15 3.6e−15 BDK BDK A 0.94 0.06 0.94 A 0.95 0.06 0.95 green or red. Since we consider input set size and cost and intensity of the color is proportional to the effect size. More AIM execution time, the smaller the values the better. Thus, precisely, the intensity is |δ|, where δ = 2×A−1 is Cliff’s green (resp. red) indicates that the configuration in the row is delta[62].|δ|isanumberbetween0and1,where0indicates |
better (resp. worse) than the configuration in the column. The the smallest intensity (the lightest color) and 1 indicates theIEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 22 TABLEXI TABLEXII COMPARISONOFJOOMLAAIMEXECUTIONTIMESFORCONFIGURATIONS COMPARISONOFMRSEXECUTIONTIMEBEFOREANDAFTERINPUTSET WITHFULLVULNERABILITYCOVERAGE. MINIMIZATION.THEPERCENTAGEOFREDUCTIONISONEMINUSTHE RATIOBETWEENTOTALEXECUTIONTIMEAFTERMINIMIZATIONAND times LKH LDK BKH BDK MRSEXECUTIONTIMEBEFOREMINIMIZATION. p 2.5e−18 3.6e−3 1.2e−18 LKH A 0.0 0.33 0.0 Executiontime(minutes) Jenkins Joomla LDK p 2.5e−18 2.8e−18 1.5e−18 MRswithinitialinputset 38,307 20,703 A 1.0 1.0 1.0 MRswithminimizedinputset 6119 3675 p 3.6e−3 2.8e−18 1.3e−18 BKH +AIMexecutiontime 22 22 A 0.67 0.0 0.0 p 1.2e−18 1.5e−18 1.3e−18 =Totalexecutiontime 6141 3697 BDK A 1.0 0.0 1.0 PercentageofReduction 84% 82% largest intensity (the darkest color). Indeed, over 50 Jenkins runs, the average input set size for For Jenkins, among the candidate best configurations (i.e., AK was 94.92 inputs, while it ranges from 38 inputs (40% LDK and BDK), BDK performed significantly better than of AK) for BDH to 74.8 inputs (79%) for LDK. The average LDK for input set size and cost, and even if the difference input set cost for AK was 193,698.94 actions, while it ranges is smaller for AIM execution time, the effect size is also in from 70,500.76 actions (36%) for BDH to 152,373.54 actions favor of BDK. As for the other configurations, Table VI on (79%) for LDK. Over 50 Joomla runs, the average input set input set sizes and Table VII on input set costs consistently size for AK was 70 inputs, while it ranges from 36.02 inputs indicate that BDH is the best configuration while LDK is the (51%) for LKH to 41.46 inputs (59%) for LDK. The average worst configuration. The other configurations seem equivalent inputsetcostforAKwas2,312,784.58actions,whileitranges in terms of size. 
Regarding cost, LKK tends to be the second from580,705.24actions(25%)forLKHto872,352.72actions tolastconfiguration,theotherconfigurationsbeingequivalent. (38%) for LDK. In short, all AIM configurations with full Regarding AIM execution time in Table VIII, the results vulnerabilitycoverageoutperformedthebestbaselineAK, are more nuanced, BDH is again the best configuration, but whichhighlightstherelevanceofourapproachinreducingthe this time LKK is the worst configuration instead of LDK. cost of testing. BDH is the only configuration that reached full vulnerability Finally,inTableXII,wepresenttheresultsofexecutingthe coverage for Jenkins without using the K-means clustering MRs using both the initial input set and the minimized input algorithm and it performs significantly better than the other set derived from the best configuration. In total, by applying configurations,especiallytheonesinvolvingK-meansforboth AIM, we reduced the execution time of all 76 MRs from output and action clustering steps. This indicates, without 38,307 minutes to 6119 minutes for Jenkins and from 20,703 surprise, that the K-means algorithm takes more resources to minutesto3675minutesforJoomla.Moreover,executingAIM be executed. BDH did not lead to full vulnerability coverage to obtain this minimized input set required 22 minutes for for Joomla, so we do not consider it as a candidate for “best” both systems. Hence, we have a total execution time of 6141 configuration. minutesforJenkinsand3697minutesforJoomla.Asaresult, For Joomla, BDK performed significantly better than the ratio of the total execution time for the minimized input LDK for the considered metrics. As for the other configu- sets divided by the execution time for the initial input sets is rations, Table IX for input set sizes and Table X for input set 16.03% for Jenkins and 17.85% for Joomla. 
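These percentages follow directly from the Table XII timings: the post-minimization total (MR execution plus AIM execution) is divided by the MR execution time on the initial input set. A quick arithmetic check, with all minutes taken from Table XII:

```python
def reduction(initial_mr_minutes, minimized_mr_minutes, aim_minutes):
    """One minus the ratio of total post-minimization time to the
    MR execution time with the initial input set."""
    total = minimized_mr_minutes + aim_minutes
    return 1.0 - total / initial_mr_minutes

jenkins = reduction(38_307, 6_119, 22)  # total 6141 minutes vs 38,307
joomla = reduction(20_703, 3_675, 22)   # total 3697 minutes vs 20,703
```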
In other words, costs provide identical results, indicating that LKH and BKH AIMreducedtheexecutiontimebyabout84%forJenkins dominate the others while being equivalent. Moreover, BDK and more than 82% for Joomla. This large reduction in dominates LDK, which is the worst configuration. The results executiontimedemonstratestheeffectivenessofourapproach are almost identical for AIM execution time in Table XI, in reducing the cost of metamorphic security testing. with the small difference that LKH performs slightly better than BKH. However, LKH and BKH did not lead to full XI. THREATSTOVALIDITY vulnerability coverage for Jenkins, as opposed to BDK and In this section, we discuss internal, conclusion, construct, LDK. and external validity according to conventional practices [63]. Since we obtained similar results for both Jenkins and Joomla,we consider BDK to be the “best” AIM configura- A. Internal Validity tion.ThisisnotsurprisingsinceBagdistanceislesscostlyto compute than Levenshtein distance (§ VI-B) and we already A potential internal threat concerns inadequate data pre- observed that the order of words in a Web page does not processing, which may adversely impact our results. Indeed, appear to be a relevant distinction for vulnerability coverage clustering relies on the computed similarity among the pre- (§ X-B2). processed outputs and inputs. To address this potential con- As mentioned in § X-B2, no baseline leads to full vulnera- cern, we have conducted a manual investigation of the quality |
bility coverage. AD fared poorly and AH performed similarly of the clusters obtained without pre-processing. This led us to random testing R, but AK was much better, with 99.3% to remove, from the textual content extracted from each Web VDR for Jenkins and 83.3% for Joomla. But even if AK page,allthecontentthatwassharedbymanyWebpages,like had reached full vulnerability coverage for both systems, it system version, date, or (when present) the menu of the Web would be at a disadvantage compared to AIM configurations. page.IEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 23 For RQ1 on vulnerability detection, one potential threat in the considered Web systems, AIM is a general approach we face is missing inputs that would be able to exercise for metamorphic security testing which does not depend on a vulnerability or incorrectly considering that an input is the considered MRs. Finally, in § X-A1, we highlighted that able to exercise a vulnerability. To ensure our list of inputs the different input/output interfaces provided by Jenkins and triggering vulnerabilities is complete, one author inspected all Joomla, along with the diverse types of vulnerabilities they the MST-wi execution logs to look for failures. contain, is in support of the generalizability of our results. Furthermore, the AIM approach can be generalized to other Web systems, if the data collection and pre-processing com- B. Conclusion Validity ponents are updated accordingly. Nevertheless, further studies For RQ2, we rely on a non-parametric test (i.e., Mann- involving systems with known vulnerabilities are needed. Whitney-Wilcoxontest)toevaluatethestatisticalandpractical significance of differences in results, computing p-value and Vargha and Delaney’s A metric for effect size. Moreover, XII. RELATEDWORK 12 to deal with the randomness inherent to search algorithms, all MT enables the execution of a SUT with a potentially the configurations and baselines were executed over 50 runs. 
infinite set of inputs thus being more effective than testing Randomness may also arise from (1) the workload of the techniques requiring the manual specification of either test machines employed to conduct experiments, potentially slow- inputs or oracles. However, in MT, the manual definition of ingdowntheperformanceofMST-wi,AIM,andthecasestudy metamorphic relations (MRs) is an expensive activity because subjects, and (2) the presence of other users interacting with it requires that engineers first acquire enough information on thesoftwareundertest,whichcanimpactbothexecutiontime the subject under test and then analyze the testing problem and system outputs. To address these concerns, we conducted to identify MRs. For this reason, in the past, researchers experimentsindedicatedenvironments,ensuringthatthestudy focused on both the definition of methodologies supporting subjects were exclusively utilized by AIM. the identification of MRs [64], [65] and the development of techniques for the automated generation of MRs, based on C. Construct Validity meta-heuristic search [66], [67] and natural language process- ing [68], and targeting query-based systems [69] and Cyber- The constructs considered in our work are vulnerability Physical Systems [67]. detection effectiveness and input set reduction effectiveness. However, source inputs also impact the effectiveness and Vulnerability detection effectiveness is measured in terms of performance of MT; indeed, MRs generate follow-up inputs vulnerability detection rate. Reduction effectiveness is mea- from source inputs and both are executed by the SUT. sured in terms of MR execution time, size and cost of the Consequently, the research community has recently shown minimized input set, and AIM execution time for each con- increasing interest towards investigating the impact of source figuration. As it is expensive to execute all 18 configurations inputs on MT. 
We summarize the most relevant works in on the MRs, we consider the size of the input set and its the following paragraphs. Note that all these studies focus cost to select the most efficient configuration. The cost of on general fault detection, while we focus on metamorphic the input set has been defined in § III-A and shown to be security testing for Web systems. However, our approach linearly correlated with MR execution time, thus enabling us could also be applied to fault detection while the approaches to evaluate the efficiency of the results. below could also be applied to security testing. We therefore Finally, we executed the minimized input set obtained from comparetheseapproacheswithoutconsideringtheirdifference the best configuration on the MRs and compared the obtained in application. However, we excluded from our survey those execution time, plus the AIM execution time required to approaches that study the effect of source and follow-up minimize the initial input set, with the MRs execution time inputs on the metamorphic testing of systems that largely obtained with the initial input set. Execution time is a direct differ from ours (i.e., sequence alignment programs [70], measure, allowing us to evaluate whether, for systems akin to system validation [71], and deep neural networks [72]). In our case study subjects, AIM should be adopted for making the following paragraphs, we group the surveyed works into vulnerability testing more efficient and scalable. three categories: input generation techniques, input selection techniques, and feedback-directed metamorphic testing. D. 
External Validity

Input generation techniques for MT use white-box approaches based on knowledge of the source code (mainly, for statement or branch coverage), while we use a black-box approach based on input and output information (Section VI).

One threat to the generalizability of our results stems from the benchmark that we used. It includes 160 inputs for Jenkins and 148 inputs for Joomla. Furthermore, we considered the list
ofvulnerabilitiesinJenkinsandJoomlathatweresuccessfully For instance, a study [73] leveraged the evolutionary search triggeredwithMST-wi.However,evenifinthisstudyweused approach EvoSuite [74] to evolve whole input sets in order MST-witocollectourdata,theAIMapproachdoesnotdepend to obtain inputs that lead to more branch coverage or to on a particular data collector, and using or implementing different results on the mutated and non-mutated versions another data collector would enable the use of our approach of the source code. Another example study [75] leveraged with other frameworks. Moreover, even if we relied on pre- symbolic execution to collect constraints of program branches viously obtained MRs to be sure they detect vulnerabilities covered by execution paths, then solved these constraints toIEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 24 generate the corresponding source inputs. Finally, the exe- time, we cannot execute them and use execution information cution of the generated inputs on the SUT was prioritized to guide source input or MR selection during testing. Finally, based on their contribution regarding uncovered statements. in our problem definition (Section III), we do not consider In this case, both generation and prioritization phases were source inputs independently from each other, which is why white-box approaches based on branch coverage. Note that, we reduced (Section VII) then minimized (Section VIII) the while our approach on input set minimization could be seen cost of the input set as a whole. as similar to input prioritization, both studies focused on increasing coverage, while we focused on reducing cost while XIII. CONCLUSION maintaining full coverage. As demonstrated in our previous work [6], metamorphic Input selection techniques share the same objective of testingalleviatestheoracleproblemfortheautomatedsecurity our work (i.e., reducing the number of source inputs while testing of Web systems. 
However, metamorphic testing has maximizing MT effectiveness). Because of its simplicity, showntobeatime-consumingapproach.Ourapproach(AIM) random testing (RT) is a common strategy for test suite aims to reduce the cost of metamorphic security testing by minimization [21], [22] that has been used in MT [58]. minimizing the initial input set while preserving its capability RT was enhanced with Adaptive Random Testing (ART), a at exercising vulnerabilities. Our contributions include 1) a technique for obtaining source inputs spread across the input clustering-based black box approach that identifies similar in- domain with the aim of finding failures with fewer number putsbasedontheirsecurityproperties,2)IMPRO,anapproach of inputs than RT. As input selection technique for MT, ART to reduce the search space as much as possible, then divide it outperforms RT in terms of fault detection [59], which (as in into smaller independent parts, 3) MOCCO, a novel genetic thefollowingstudies)wasevaluatedusingtheF-measure,i.e., algorithm which is able to efficiently select diverse inputs the number of inputs necessary to reveal the first failure. In while minimizing their total cost, and 4) a testing framework the AIM approach, our action clustering step (§ VI-C) bears automatically performing input set minimization. similaritieswithART,sincewepartitioninputsbasedonaction We considered 18 different configurations for AIM and we parameters which are relevant for our SUT. But, instead of evaluated our approach on two open-source Web systems, assuming that close inputs lead to close outputs, we directly Jenkins and Joomla, in terms of vulnerability detection rate used SUT outputs during our output clustering step (§ VI-B), and magnitude of the input set reduction. Our empirical since they are inexpensive to obtain compared to executing results show that the best configuration for AIM is BDK: Bag MRs. 
Finally, instead of only counting the number of inputs distance, DBSCAN to cluster the outputs, and K-means to asintheF-measure(orthesizeoftheinputset,inthecontext clustertheinputs.Theresultsshowthatourapproachcanauto- of input generation), we considered the cost of each source matically reduce MRs execution time by 84% for Jenkins and inputas thenumber ofexecuted actionsas surrogatemeasure, 82% for Joomla while preserving full vulnerability detection. whichistailoredtoreducingMRexecutiontimeinthecontext Across 50 runs, the BDK configuration consistently detected ofWebsystems.Insteadoffocusingonlyondistancesbetween all vulnerabilities in Jenkins and Joomla. We also compared source inputs as in ART, another study [58] also investigated AIMwithfourbaselinescommoninsecuritytesting.Notably, distances with follow-up inputs, which is an improvement none of the baselines reached full vulnerability coverage. since usually there are more follow-up inputs than source Among them, AK (ART baseline using K-means) emerged as inputs. This led to the Metamorphic testing-based adaptive the closest to achieving full vulnerability coverage. All AIM random testing (MT-ART) technique, which performed better configurations with full vulnerability coverage outperformed than other ART algorithms regarding test effectiveness, test this baseline in terms of minimized input set size and cost, efficiency, and test coverage (considering statement, block, demonstrating the effectiveness of our approach in reducing andbranchcoverage).Unfortunately,intheAIMapproach,we the cost of metamorphic security testing. couldnotconsiderfollow-upinputstodrivetheinputselection, sinceexecutingMRstogeneratethesefollow-upinputswould ACKNOWLEDGMENT defeat our purpose of reducing MR execution time. Finally, while studies on MT usually focus either on the This work is supported by the H2020 COSMOS European |
identification of effective MRs or on input generation/selec- project, grant agreement No. 957254, the Science Foundation tion, a recent study proposed feedback-directed metamorphic Irelandgrant13/RC/2094-2,andNSERCofCanadaunderthe testing (FDMT) [76] to determine the next test to perform Discovery and CRC programs. (both in terms of source input and MR), based on previous test results. They proposed adaptive partition testing (APT) to dynamically select source inputs, based on input categories thatleadtofaultdetection,andadiversity-orientedstrategyfor MR selection (DOMR) to select an MR generating follow- up inputs that are as different as possible from the already obtained ones. While this approach is promising in general, it is not adapted to our case, where we consider a fixed set of MRs, MR selection being considered outside the scope of this paper. Moreover, since we aim to reduce MR executionIEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 25 APPENDIX input is redundant in an input set, then it can be removed without reducing the coverage of the input set: We prove in this appendix theoretical results mentioned in the body of the paper and which demonstrates the correctness Proposition 1 (Redundancy Soundness). Let I ⊆ I be init oftheAIMapproach.ThisincludesTheorem1(AppendixC), an input set and in ∈ I. If in ∈ Redundant(I), then presented in Section III-C on the objective functions, Propo- Cover(I \{in})=Cover(I). sition 3 and Theorem 2 (Appendix D) presented in Section Proof. Cover(I) = Cover(I \{in}) ∪ Cover(in) (§VI-C). VII-E on local dominance, Proposition 8 and Theorem 3 (Ap- We assume that in is redundant in I. So, according to pendixE)presentedinSectionVII-Fondividingtheproblem, Lemma 3, we have redundancy(in,I) > 0 (§III-C). Thus, and desirable properties of roofer (Theorem 4) and miser for each bl ∈ Cover(in), we have that superpos(bl,I) ≥ 2. 
(Theorem 5) populations (Appendix F) presented in Section Hence, according to our definition of superposition (§III-C), VIII-F on the MOCCO population update. Appendices A there are at least two inputs in and in in I which also and B present intermediate results. 1 2 belong to Inputs(bl). Because we have bl ∈ Cover(in), we have that in ∈ Inputs(bl) (§III-A). So, in is one of A. Input Coverage and Cost these inputs. We assume this is the first one i.e., in = 1 Lemma 1 (Non-Empty Coverage). For every input in, we in. Therefore, for each bl ∈ Cover(in), there exists an have Cover(in)̸=∅. input in ∈ Inputs(bl) ∩ I which is not in. We denote 2 Proof. We consider only inputs containing at least one action in 2 = in bl this input. For each bl ∈ Cover(in), we have (§VI-B). Let act 1 = action(in,1) be the first action of in. in bl ∈ Inputs(bl). So, we have bl ∈ Cover(in bl) (§III-A). act 1 has an output in outCl 1 = OutputClass(in,1) (§VI- Moreover,in bl ∈I andisnotin,soisinI\{in}.Thus(§VI- B).act is in the action set actSet = ActionSet(outCl ) C) we have bl ∈ Cover(I \{in}). Therefore, Cover(in) ⊆ 1 1 1 (§VI-C). After clustering of this action set, let bl = Cover(I \{in}). Finally, Cover(I) = Cover(I \{in}) ∪ 1 Subclass(in,1) = ActionSubclass(act ,actSet ) be the in- Cover(in)=Cover(I \{in}). 1 1 putblockoftheactionact executedinin (§VI-C).Therefore, 1 Finally, removing an input in an input set may update the we have bl ∈Cover(in). 1 redundancy of other inputs: Lemma 2 (Cost is Positive). For every input in, we have Lemma 4 (Removing an Input). Let I be an input set and cost(in) > 0. For every input set I, we have cost(I) ≥ 0, in ,in ∈I be two inputs in this input set. and cost(I)=0 if and only if I =∅. 2 1 redundancy(in ,I)−1≤redundancy(in ,I \{in }) 1 1 2 Proof. Since we removed inputs with no cost (§III-A), for ≤redundancy(in ,I) 1 eachinputin,cost(in)>0.Thecostofaninputsetisthesum Proof. 
Letbl ∈Cover(in ).Wedenotec =|Inputs(bl)∩I| of the cost of its inputs, hence cost(I) ≥ 0. In particular, if 1 1 I =∅,thencost(I)=0.Finally,becausethecostispositive, andc 2 =|Inputs(bl)∩(I \{in 2})|.Ifin 2 ∈Inputs(bl)then the only case where cost(I)=0 is when I =∅. c 2 = c 1 −1. Otherwise, c 2 = c 1. Therefore, for every bl ∈ Cover(in ) we have c − 1 ≤ c ≤ c . This includes the 1 1 2 1 input blocks minimizing the cardinality in the definition of B. Redundancy redundancy (§III-C). Hence the result. In this section, we prove that our characterization of re- dundancy is sound regarding the coverage of an input set C. Valid Orders of Removal Steps (Proposition 1), then we introduce Lemma 4 which is useful In this section, we first prove Lemma 5 that establishes to prove Lemma 6 in Appendix C and Proposition 8 in that orders of removal steps contain inputs without repetition. Appendix E. Then, we introduce the sublists and prove Lemma 7, used in First, if an input in is in the considered input set I, then its the rest of the section and in proofs of Appendix E. Finally, redundancyisnotnegative.Indeed,thereisalwaysatleastone we prove Theorem 1 presented in §III-C. input in I that covers the input blocks covered by in, which is in itself. Lemma 5 (Order without Repetition). Let I be an input set. If [in ,...,in ] ∈ ValidOrders(I), then in ,...,in are Lemma 3 (Redundancy is Non-negative). Let I be an input 1 n 1 n distinct. |
setandin beaninput.Ifin ∈I,thenredundancy(in,I)≥0. Proof. The proof is done by induction on n. Proof. Let in be an input in I. Let bl ∈ Cover(in) be any If n=0, then [in ,...,in ]=[] is empty, hence there are block covered by in. Because bl ∈ Cover(in), we have 1 n no two identical inputs. that in ∈ Inputs(bl) (§III-A). Moreover, in ∈ I, so in ∈ We now consider [in ,...,in ,in ]∈ValidOrders(I). Inputs(bl)∩I, thus we have superpos(bl,I) ≥ 1 (§III-C). 1 n n+1 By induction hypothesis, in ,...,in are distinct. Ac- Therefore,foranybl ∈Cover(in)wehavesuperpos(bl,I)≥ 1 n cording to the definition of valid order of removal steps 1. So, min{superpos(bl,I) | bl ∈Cover(in)} ≥ 1. By (§III-C), we have [in ,...,in ,in ] only if in ∈ subtracting 1, we have redundancy(in,I)≥0 (§III-C). 1 n n+1 n+1 Redundant(I \{in ,...,in }). 1 n We prove that our characterization of redundancy is sound SinceRedundant(I)={in ∈I |redundancy(in,I)>0}, regarding the coverage of an input set. In other words, if an we have in ∈I \{in ,...,in }. n+1 1 nIEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 26 Therefore, in is distinct from the previous inputs Finally,weproveinProposition2thatinputsinavalidorder n+1 in ,...,in . Hence the result, which concludes the induction of removal steps can be rearranged in any different order and 1 n step. the rearranged order of removal steps is also valid. Lemma 8 (TranspositionofaValid Order). Let I beaninput Lemma 6 (Redundant Inputs After Reduction). Let I be an set. input set. For each subset of inputs {in ,...,in } ⊆ I, we 1 n have Redundant(I \{in 1,...,in n})⊆Redundant(I). If [in 1,...,in i,...,in j,...,in n]∈ValidOrders(I) and 1≤i<j ≤n, Proof. The proof is done by induction on n. then [in ,...,in ,...,in ,...,in ]∈ValidOrders(I) If n=0 then I \{in ,...,in }=I, hence the result. 1 j i n 1 n Otherwise, we assume by induction that where in i and in j were exchanged and the other inputs are Redundant(I \{in ,...,in })⊆Redundant(I). left unchanged. 
1 n According to Lemma 4, redundancies can only decrease Proof. The proof is done in four steps. when performing a removal step. So, for every input in 0 ∈ 1) [in 1,...,in i−1,in j] is a sublist of Redundant(I \{in 1,...,in n,in n+1}), we have: [in 1,...,in i−1,in i,...,in j,...,in n] ∈ ValidOrders(I). So, according to Lemma 7, we have [in ,...,in ,in ] ∈ 0<redundancy(in ,I \{in ,...,in ,in }) 1 i−1 j 0 1 n n+1 ValidOrders(I). Note that the first step even applies to the ≤redundancy(in ,I \{in ,...,in }) 0 1 n case i=1. So, Redundant(I \{in ,...,in ,in }) ⊆ 2) If j = i+1, then one can go directly to the third step. 1 n n+1 Redundant(I \{in ,...,in })⊆Redundant(I). Otherwise, for each i<k <j, we denote: 1 n I =defI \{in ,...,in } i+1 1 i−1 Since orders of removal steps contain inputs without rep- I =defI \{in } k+1 k k etition (Lemma 5), the index index(in,ℓ) of an input in in a removal order ℓ containing in is well defined. We leverage and, for the sake of conciseness, I ki = I k \{in i} and I kj = this to define sublists. I k\{in j}. We now prove by induction on i≤k ≤j−1 that: Definition 1 (Sublists). Let [in ,...,in ] and 1 n [in′,...,in′ ] be two orders of removal steps. We [in 1,...,in i−1,in j,in i+1,...,in k]∈ValidOrders(I) 1 m say that [in′,...,in′ ] is a sublist of [in ,...,in ] 1 m 1 n For the initialization k =i, we have from the first step: if {in ∈[in′,...,in′ ]} ⊆ {in ∈[in ,...,in ]} 1 m 1 n and, for any two inputs in i,in j ∈ [in′ 1,...,in′ m], if [in 1,...,in i−1,in j]∈ValidOrders(I) index(in ,[in ,...,in ]) ≤ index(in ,[in ,...,in ]), then i 1 n j 1 n We now assume by induction hypothesis on i≤k <j−1 index(in ,[in′,...,in′ ]) ≤ index(in ,[in′,...,in′ ]), i 1 m j 1 m that: where index(in,ℓ) is the index of input in in ℓ. [in ,...,in ,in ,in ,...,in ]∈ValidOrders(I) 1 i−1 j i+1 k Lemma 7. Let I be an input set and [in ,...,in ],[in′,...,in′ ] be two orders of removal We first prove, for each bl ∈Cover(in ), that: 1 n 1 m k+1 steps in I. 
If [in ,...,in ] ∈ ValidOrders(I) and (cid:12) (cid:12) [in′ 1,...,in′ m] is 1 a sublin st of [in 1,...,in n], then (cid:12) (cid:12)Inputs(bl)∩I kj +1(cid:12) (cid:12)>1 [in′,...,in′ ]∈ValidOrders(I). 1 m According to Lemma 5, in 1,...,in i,...,in j,...,in n are distinct. We consider two cases. Proof. The proof is done by induction on m. If m=0 then [in′,...,in′ ]=[]∈ValidOrders(I). a) We assume bl ∈Cover(in j). |
Otherwise, we con1 sider [inm ′,...,in′ ,in′ ] and we as- Because I ji = I \ {in 1,...,in i−1,in i,in i+1,...,in j−1} sum Bee cb ay usi enduc [t ii no ′ 1n ,.th .a .t ,i[ nin ′ m′ 1 ,, i. n.1 ′ m., +in 1]′ m]∈m isValm i ad+ O1 rd sue brs li( sI t). of a I jn i∪d {I ikj n+ k1 += 1}I ⊆\{ I ki jn +1 1, ∪. {. i. n, jin }.i− T1 h, ui sn ,i f+ o1 r, e. a. c. h,i bn lk ∈,i Cn oj} v, erw (e inh ja )v ∩e [in 1,...,in n], there exists an index i such that in i =in′ m+1. Cover(in k+1): B iny ∈de Rfi en di uti no dn ano tf (Iv \al {id inr ,e .m .o .v ,a inl ste }p )s . (§III-C), we have (cid:12) (cid:12)Inputs(bl)∩I ji(cid:12) (cid:12)+1≤(cid:12) (cid:12) (cid:12)Inputs(bl)∩I kj +1(cid:12) (cid:12) (cid:12)+1 i 1 i−1 Moreover, [in′,...,in′ ,in′ ] is a sublist of Because [in ,...,in ,...,in ,...,in ] ∈ 1 m m+1 1 i j n [in 1,...,in i], so we have {in ∈[in′ 1,...,in′ m]} ⊆ ValidOrders(I), we have: {in ∈[in ,...,in ]}. Hence, according to Lemma 6 1 i−1 redundancy(in ,Ii)>0 we have Redundant(I \{in ,...,in }) ⊆ j j 1 i−1 Redundant(I \{in′ 1,...,in′ m}). Therefore, in′ m+1 = So, for each bl ∈Cover(in j), we have: in ∈Redundant(I \{in′,...,in′ }). i [in′,...,in′ ] ∈ V1 alidOrdm ers(I) and in′ ∈ (cid:12) (cid:12)Inputs(bl)∩I ji(cid:12) (cid:12)>1 1 m m+1 Redundant(I \{in′ 1,...,in′ m}), so, according to our defi- Thus, for each bl ∈Cover(in j)∩Cover(in k+1), we have: nition for valid orders of removal steps (§III-C), we have (cid:12) (cid:12) [in′,...,in′ ,in′ ]∈ValidOrders(I). (cid:12)Inputs(bl)∩Ij (cid:12)>1 1 m m+1 (cid:12) k+1(cid:12)IEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 27 b) We assume bl ∈/ Cover(in ). Thus, for each bl ∈Cover(in )∩Cover(in ), we have: j i j Foreachbl ∈Cover(in k+1)\(Cover(in i)∪Cover(in j)), (cid:12) (cid:12) we have: (cid:12) (cid:12)Inputs(bl)∩I jj(cid:12) (cid:12)>1 (cid:12) (cid:12)Inputs(bl)∩I ki +1(cid:12) (cid:12)=(cid:12) (cid:12) (cid:12)Inputs(bl)∩I kj +1(cid:12) (cid:12) (cid:12) b) We assume bl ∈/ Cover(in j). 
In that case, for each bl ∈ Cover(in )\Cover(in ), we have: i j Moreover, for each bl ∈Cover(in )\Cover(in ), we have: (cid:12) (cid:12)Inputs(bl)∩I ki +1(cid:12) (cid:12)+1=(cid:12) (cid:12) (cid:12)i Inputs(bl)∩j I kj +1(cid:12) (cid:12) (cid:12) (cid:12) (cid:12)Inputs(bl)∩I ji(cid:12) (cid:12)+1=(cid:12) (cid:12) (cid:12)Inputs(bl)∩I jj(cid:12) (cid:12) (cid:12) We consider two cases. So, for each bl ∈Cover(in )\Cover(in ), we have: k+1 j i)Thereexistsi+1≤k ≤j−1suchthatbl ∈Cover(in ). k (cid:12) (cid:12)Inputs(bl)∩I ki +1(cid:12) (cid:12)≤(cid:12) (cid:12) (cid:12)Inputs(bl)∩I kj +1(cid:12) (cid:12) (cid:12) In that case, let k max be the largest. So, we have: (cid:0) Inputs(bl)∩Ii(cid:1) ∪{in }=Inputs(bl)∩Ii Because [in 1,...,in i,...,in j,...,in n] ∈ ValidOrders(I), j kmax kmax we have: Hence: redundancy(in ,Ii )>0 k+1 k+1 (cid:12) (cid:12)Inputs(bl)∩Ii(cid:12) (cid:12)+1=(cid:12) (cid:12)Inputs(bl)∩Ii (cid:12) (cid:12) So, for each bl ∈Cover(in ), we have: j kmax k+1 (cid:12) (cid:12)Inputs(bl)∩Ii (cid:12) (cid:12)>1 Thus: Thus, for each bl ∈Cover(in k+1)k+ \1 Cover(in j), we have: (cid:12) (cid:12) (cid:12)Inputs(bl)∩I jj(cid:12) (cid:12) (cid:12)=(cid:12) (cid:12)Inputs(bl)∩I ki max(cid:12) (cid:12) (cid:12) (cid:12)Inputs(bl)∩Ij (cid:12) (cid:12)>1 Because [in 1,...,in i,...,in j,...,in n] ∈ ValidOrders(I), (cid:12) k+1(cid:12) we have: So, we proved by case, for each bl ∈Cover(in k+1), that: redundancy(in kmax,I ki max)>0 (cid:12) (cid:12) (cid:12) (cid:12)Inputs(bl)∩I kj +1(cid:12) (cid:12)>1 So, because bl ∈Cover(in kmax), we have: |
Thus, we have redundancy(in k+1,I kj +1)>0. (cid:12) (cid:12)Inputs(bl)∩I ki max(cid:12) (cid:12)>1 Moreover, according to Lemma 5, Finally: in 1,...,in i,...,in j,...,in n are distinct, so in k+1 ∈ I kj +1. (cid:12) (cid:12)Inputs(bl)∩Ij(cid:12) (cid:12)>1 Hence, in ∈ Redundant(Ij ). By induction hypothesis, (cid:12) j(cid:12) k+1 k+1 we have: ii) Otherwise, for each i+1 ≤ k ≤ j −1, we have bl ∈/ Cover(in ). Note that this is also the case if j = i+1. In [in ,...,in ,in ,in ,...,in ]∈ValidOrders(I) k 1 i−1 j i+1 k that case, we have: So, according to the definition of valid removal steps (§III- (cid:0) Inputs(bl)∩Ii(cid:1) ∪{in }=Inputs(bl)∩I C): j i i where I =I \{in ,...,in }. So: [in ,...,in ,in ,in ,...,in ,in ]∈ValidOrders(I) i 1 i−1 1 i−1 j i+1 k k+1 which concludes the induction step. (cid:12) (cid:12)Inputs(bl)∩I ji(cid:12) (cid:12)+1=|Inputs(bl)∩I i| Therefore, we proved by induction: Thus: (cid:12) (cid:12) [in ,...,in ,in ,in ,...,in ]∈ValidOrders(I) (cid:12)Inputs(bl)∩Ij(cid:12)=|Inputs(bl)∩I | 1 i−1 j i+1 j−1 (cid:12) j(cid:12) i 3) We now prove that, for each bl ∈Cover(in i), we have: Because [in 1,...,in i,...,in j,...,in n] ∈ ValidOrders(I), (cid:12) (cid:12) we have: (cid:12)Inputs(bl)∩Ij(cid:12)>1 (cid:12) j(cid:12) redundancy(in ,I )>0 i i where Ii and Ij are defined in step two. We consider two j j So, because bl ∈Cover(in i), we have: cases. a) We assume bl ∈ Cover(in j). Because I ji ∪ {in i} = |Inputs(bl)∩I i|>1 Ij∪{in }, for each bl ∈Cover(in )∩Cover(in ), we have: j j i j Finally: (cid:12) (cid:12)Inputs(bl)∩I ji(cid:12) (cid:12)+1=(cid:12) (cid:12) (cid:12)Inputs(bl)∩I jj(cid:12) (cid:12) (cid:12)+1 (cid:12) (cid:12) (cid:12)Inputs(bl)∩I jj(cid:12) (cid:12) (cid:12)>1 Because [in 1,...,in i,...,in j,...,in n] ∈ This concludes the proof by case for bl ∈ Cover(in i)\ ValidOrders(I), we have: Cover(in j). 
Therefore, we proved by case that, for each bl ∈ Cover(in_i), we have |Inputs(bl) ∩ I_j^j| > 1. Thus, we have redundancy(in_i, I_j^j) > 0. Moreover, according to Lemma 5, in_1, ..., in_i, ..., in_j, ..., in_n are distinct, so in_i ∈ I_j^j. Thus, in_i ∈ Redundant(I_j^j). Finally, we have from the second step (or the first one, if j = i + 1): [in_1, ..., in_{i−1}, in_j, in_{i+1}, ..., in_{j−1}] ∈ ValidOrders(I). Therefore, according to the definition of valid orders of removal steps (§III-C): [in_1, ..., in_{i−1}, in_j, in_{i+1}, ..., in_{j−1}, in_i] ∈ ValidOrders(I).

4) Finally, if j = n then the proof is complete. Otherwise, we prove by induction on j ≤ k ≤ n that [in_1, ..., in_j, ..., in_i, in_{j+1}, ..., in_k] ∈ ValidOrders(I). For the initialization k = j, we have from the third step: [in_1, ..., in_j, ..., in_i] ∈ ValidOrders(I). We now assume by induction hypothesis on j ≤ k < n that [in_1, ..., in_j, ..., in_i, in_{j+1}, ..., in_k] ∈ ValidOrders(I). Because [in_1, ..., in_i, ..., in_j, ..., in_n] ∈ ValidOrders(I), we have in_{k+1} ∈ Redundant(I \ {in_1, ..., in_i, ..., in_j, ..., in_k}). Moreover, we have {in_1, ..., in_i, ..., in_j, ..., in_k} = {in_1, ..., in_j, ..., in_i, ..., in_k}. So, in_{k+1} ∈ Redundant(I \ {in_1, ..., in_j, ..., in_i, ..., in_k}). Thus, according to the definition of valid orders of removal steps (§III-C): [in_1, ..., in_j, ..., in_i, in_{j+1}, ..., in_k, in_{k+1}] ∈ ValidOrders(I), which concludes the induction step. Therefore, we proved by induction the lemma: [in_1, ..., in_j, ..., in_i, in_{j+1}, ..., in_n] ∈ ValidOrders(I).

The issue with our definition of the gain (§III-C) is that, to compute gain(I) of an input set I, one has to try all the possible orders of removal steps to determine which ones are valid. If I contains n inputs and we consider orders of 0 ≤ k ≤ n removal steps, there are n × ⋯ × (n − k + 1) = n!/(n − k)! possibilities to investigate, which is usually large. Thus, we present an optimization to reduce the cost of computing the gain. First, we prove in Proposition 2 that, while the order of inputs matters to determine if an order of removal steps is valid, it does not matter anymore once we know the order is valid. In other words, inputs in a valid order of removal steps may be rearranged in any different order, and the rearranged order of removal steps would be valid. Formally, if [in_1, ..., in_n] is a valid order of removal steps, then [in_{σ(1)}, ..., in_{σ(n)}] is also a valid order, when σ denotes a permutation of the n inputs, i.e., the same inputs but (potentially) in a different order:

Proposition 2 (Permutation of a Valid Order). Let I be an input set. If [in_1, ..., in_n] ∈ ValidOrders(I) and σ is a permutation on n elements, then [in_{σ(1)}, ..., in_{σ(n)}] ∈ ValidOrders(I).

Proof. Every permutation of a finite set can be expressed as the product of transpositions [77] (p. 60). Let m be the number of such transpositions for σ. We prove the result by induction on m. If m = 0, then σ is the identity and we have by hypothesis [in_{σ(1)}, ..., in_{σ(n)}] ∈ ValidOrders(I). If σ = τ_{m+1} ∘ τ_m ∘ ⋯ ∘ τ_1, then we denote σ′ = τ_m ∘ ⋯ ∘ τ_1. By induction hypothesis, we have [in_{σ′(1)}, ..., in_{σ′(n)}] ∈ ValidOrders(I). Then, by applying Lemma 8 to the transposition (i j) = τ_{m+1}, we have [in_{τ_{m+1}∘σ′(1)}, ..., in_{τ_{m+1}∘σ′(n)}] ∈ ValidOrders(I). Finally, because σ = τ_{m+1} ∘ σ′, we conclude the induction step.

Since the order of valid removal steps does not matter, we introduce a canonical order on removal steps. We assume each input is numbered, and we focus on steps where inputs are always removed in increasing order. Formally, [in_{i_1}, ..., in_{i_m}] is in canonical order if 1 ≤ i_1 < ⋯ < i_m ≤ n. For instance, [in_2, in_4, in_7] is in canonical order, but not [in_4, in_2, in_7]. That way, to investigate which orders of removal steps are valid, we can focus on inputs in increasing order of index, instead of considering backtracking on previous inputs.

Theorem 1 (Canonical Order). Let I = {in_1, ..., in_n} be an input set. There exists [in_{i_1}, ..., in_{i_m}] ∈ ValidOrders(I) such that 1 ≤ i_1 < ⋯ < i_m ≤ n and:

  Σ_{1≤j≤m} cost(in_{i_j}) = gain(I)

Proof. Let [in_{i′_1}, ..., in_{i′_m}] ∈ ValidOrders(I) be a valid order of removal steps with a maximal cumulative cost, i.e., according to our definition of the gain (§III-C): Σ_{1≤j≤m} cost(in_{i′_j}) = gain(I). If m = 0, then [] satisfies the theorem for a gain = 0. Otherwise, we assume m > 0. According to Lemma 5, [in_{i′_1}, ..., in_{i′_m}] contains m distinct inputs. Let σ ∈ S_m be the permutation such that [in_{σ(i′_1)}, ..., in_{σ(i′_m)}] = [in_{i_1}, ..., in_{i_m}] with 1 ≤ i_1 < ⋯ < i_m ≤ n. According to Proposition 2, we have [in_{i_1}, ..., in_{i_m}] ∈ ValidOrders(I). Moreover, because a permutation of the elements of a sum does not change the value of the sum, we have Σ_{1≤j≤m} cost(in_{i_j}) = Σ_{1≤j≤m} cost(in_{i′_j}) = gain(I). Hence the theorem.

Theorem 1 allows us to focus on inputs in increasing order, instead of exploring all possible input orders. Thus, this optimization saves computation steps and makes computing the gain more tractable. More precisely, since there are k! ways of ordering k selected inputs, there are "only" n!/(k!(n − k)!) remaining possibilities to investigate, instead of n!/(n − k)!.

D. Local Dominance

In this section we prove, as presented in §VII-E, that the local dominance relation is local (Proposition 3), hence is faithful to its name, and that non locally-dominated inputs, as a whole, locally-dominate all the locally dominated inputs (Theorem 2). We start with the locality property.

Proposition 3 (Local Dominance is Local). Let in_1 ∈ I_search be a remaining input and S ⊆ I_search be a subset of the remaining inputs. If in_1 ⊑ S then in_1 ⊑ S ∩ {in_2 ∈ I_search | in_1 ⊓ in_2}.

Proof. We assume in_1 ⊑ S. So, by definition of local dominance (§VII-E), we have in_1 ∉ S, Cover(in_1) ⊆ Cover(S), and cost(in_1) ≥ cost(S). First, because in_1 ∉ S, we have in_1 ∉ S ∩ {in_2 ∈ I_search | in_1 ⊓ in_2}. Moreover, for every in_2 ∈ I_search, if there is no overlap between in_1 and in_2 then, according to the overlapping relation (§VII-E), we have Cover(in_1) ∩ Cover(in_2) = ∅. Thus, Cover(in_1) ∩ Cover(S ∩ {in_2 ∈ I_search | ¬ in_1 ⊓ in_2}) = ∅. Hence, because Cover(in_1) ⊆ Cover(S), we have:

  Cover(in_1) = Cover(in_1) ∩ Cover(S)
  = Cover(in_1) ∩ (Cover(S ∩ {in_2 ∈ I_search | in_1 ⊓ in_2}) ∪ Cover(S ∩ {in_2 ∈ I_search | ¬ in_1 ⊓ in_2}))
  = (Cover(in_1) ∩ Cover(S ∩ {in_2 ∈ I_search | in_1 ⊓ in_2})) ∪ (Cover(in_1) ∩ Cover(S ∩ {in_2 ∈ I_search | ¬ in_1 ⊓ in_2}))
  = Cover(in_1) ∩ Cover(S ∩ {in_2 ∈ I_search | in_1 ⊓ in_2})

So, Cover(in_1) ⊆ Cover(S ∩ {in_2 ∈ I_search | in_1 ⊓ in_2}). Finally, cost(S) ≥ cost(S ∩ {in_2 ∈ I_search | in_1 ⊓ in_2}). Thus, because cost(in_1) ≥ cost(S), we have cost(in_1) ≥ cost(S ∩ {in_2 ∈ I_search | in_1 ⊓ in_2}). Therefore, in_1 ⊑ S ∩ {in_2 ∈ I_search | in_1 ⊓ in_2}.

We now prove that the local dominance relation is asymmetric (Proposition 5). First, because an input always covers some objectives, it can be locally dominated only by a non-empty input subset.

Lemma 9 (Non-Empty Local Dominance). Let in ∈ I_search be an input and S ⊆ I_search be a subset of the remaining inputs. If in ⊑ S then S ≠ ∅.

Proof. The proof is done by contradiction. If S = ∅ then Cover(S) = ∅. Because in ⊑ S we have Cover(in) ⊆ Cover(S). So, Cover(in) = ∅, which contradicts Lemma 1.

Second, when two inputs locally dominate each other, they are equivalent in the sense of the equivalence relation from §VII-D (two inputs are equivalent if they have the same coverage and cost), that we denote ≡ in the following. But, because we removed duplicates in §VII-D, this case can happen only if the inputs are the same, which is excluded by the definition of local dominance (§VII-E).

Lemma 10 (Local Dominance for Singletons is Asymmetric). Let in_1, in_2 ∈ I_search be two inputs. If in_1 ⊑ {in_2}, then in_2 ⋢ {in_1}.

Proof. We assume by contradiction that in_1 ⊑ {in_2} and in_2 ⊑ {in_1}. Hence, we have: 1) Cover(in_1) ⊆ Cover(in_2) and Cover(in_2) ⊆ Cover(in_1), so Cover(in_1) = Cover(in_2); 2) cost(in_1) ≥ cost(in_2) and cost(in_2) ≥ cost(in_1), so cost(in_1) = cost(in_2). So, we have in_1 ≡ in_2. Because we removed duplicates in §VII-D, we have in_1 = in_2. So, in_1 ⊑ {in_1}, which contradicts, in the definition of local dominance (§VII-E), that an input cannot locally dominate itself.

Note that this result is also useful to prove transitivity (Proposition 7).

Third, because the cost of an input is positive, if an input is locally dominated by several inputs, then they have a strictly smaller cost.

Lemma 11 (Cost Hierarchy). Let in_1 ∈ I_search be an input and S ⊆ I_search be a subset of the remaining inputs. If in_1 ⊑ S and |S| ≥ 2 then for every in_2 ∈ S we have cost(in_2) < cost(in_1).

Proof. Because in_1 ⊑ S we have cost(in_1) ≥ cost(S). So, for every in_2 ∈ S we have cost(in_2) ≤ cost(in_1). If |S| ≥ 2, then the inequality is strict because, according to Lemma 2, the cost is positive: cost(in_2) > 0.

Finally, we prove as expected that the local dominance relation is asymmetric.

Proposition 4 (Local Dominance for Subsets is Asymmetric). Let in_1, in_2 ∈ I_search be two inputs and S_1, S_2 ⊆ I_search be two subsets of the remaining inputs. If in_1 ⊑ S_1, in_2 ∈ S_1, and in_2 ⊑ S_2, then in_1 ∉ S_2.

Proof. The proof is made by contradiction. We assume in_1 ∈ S_2 and prove a contradiction in different cases for |S_1| and |S_2|. |S_1| = 0 or |S_2| = 0 are not possible, because this would contradict Lemma 9. If |S_1| = 1 and |S_2| = 1, then S_1 = {in_2} and S_2 = {in_1}. Thus, in_1 ⊑ {in_2} and in_2 ⊑ {in_1}, which contradicts Lemma 10. If |S_1| = 1 then S_1 = {in_2}. Because in_1 ⊑ S_1, we have cost(in_1) ≥ cost(in_2). Moreover, because in_2 ⊑ S_2 and in_1 ∈ S_2, if |S_2| ≥ 2 then according to Lemma 11 we have cost(in_1) < cost(in_2), hence the contradiction. Because in_1 ⊑ S_1 and in_2 ∈ S_1, if |S_1| ≥ 2 then according to Lemma 11 we have cost(in_2) < cost(in_1). Because in_2 ⊑ S_2 and in_1 ∈ S_2, if |S_2| ≥ 2 then according to Lemma 11 we have cost(in_1) < cost(in_2). Hence the contradiction cost(in_1) < cost(in_2) < cost(in_1).

Based on our definition of local dominance for subsets (§VII-E), we introduce the corresponding definition for inputs, in order to state more easily Corollary 1, then prove Theorem 2.

Definition 2 (Local Dominance). The input in_1 ∈ I_search locally-dominates the input in_2 ∈ I_search, denoted in_1 ↪ in_2, if there exists a subset S ⊆ I_search such that in_1 ∈ S and in_2 ⊑ S.

Proposition 5 (Local Dominance for Inputs is Asymmetric). The ↪ relation is asymmetric, i.e., for every in_1, in_2 ∈ I_search, if in_1 ↪ in_2, then it is not the case that in_2 ↪ in_1.

Proof. If in_2 ↪ in_1 then there exists S_1 ⊆ I_search such that in_2 ∈ S_1 and in_1 ⊑ S_1. The proof is done by contradiction, assuming in_1 ↪ in_2. Hence, there exists S_2 ⊆ I_search such that in_1 ∈ S_2 and in_2 ⊑ S_2. According to Proposition 4, we have in_1 ∉ S_2, hence the contradiction.

We now prove that the local dominance relation is transitive (Proposition 7).

Proposition 6 (Local Dominance for Subsets is Transitive). Let in_1, in_2 ∈ I_search be two inputs and S_1, S_2 ⊆ I_search be two subsets of the remaining inputs. If in_1 ⊑ S_1, in_2 ∈ S_1, and in_2 ⊑ S_2, then in_1 ⊑ (S_1 \ {in_2}) ∪ S_2.

Proof. Because Cover(in_1) ⊆ Cover(S_1), in_2 ∈ S_1, and Cover(in_2) ⊆ Cover(S_2), we have:

  Cover(in_1) ⊆ Cover(S_1 \ {in_2}) ∪ Cover(S_2) = Cover((S_1 \ {in_2}) ∪ S_2)

Moreover, because cost(in_1) ≥ cost(S_1), in_2 ∈ S_1, and cost(in_2) ≥ cost(S_2), we have:

  cost(in_1) ≥ cost(S_1 \ {in_2}) + cost(S_2) ≥ cost((S_1 \ {in_2}) ∪ S_2)

Finally, because in_1 ⊑ S_1 we have in_1 ∉ S_1. We prove in_1 ∉ (S_1 \ {in_2}) ∪ S_2 by contradiction. If in_1 ∈ (S_1 \ {in_2}) ∪ S_2 then, because in_1 ∉ S_1, we have in_1 ∈ S_2. In that case, we have:

  cost(in_1) ≥ cost(S_1 \ {in_2}) + cost(S_2) = cost(S_1 \ {in_2}) + cost(S_2 \ {in_1}) + cost(in_1) ≥ cost(in_1)

Hence, by subtracting cost(in_1), we have cost(S_1 \ {in_2}) + cost(S_2 \ {in_1}) = 0. Thus, according to Lemma 2, we have S_1 = {in_2} and S_2 = {in_1}. Therefore in_1 ⊑ {in_2} and in_2 ⊑ {in_1}, which contradicts Lemma 10.

Proposition 7 (Local Dominance for Inputs is Transitive). The ↪ relation is transitive, i.e., for every in_1, in_2, in_3 ∈ I_search, if in_3 ↪ in_2 and in_2 ↪ in_1, then in_3 ↪ in_1.

Proof. If in_3 ↪ in_2 and in_2 ↪ in_1, then there exist S_1, S_2 ⊆ I_search such that in_2 ∈ S_1, in_1 ⊑ S_1, in_3 ∈ S_2, and in_2 ⊑ S_2. According to Proposition 6, we have in_1 ⊑ (S_1 \ {in_2}) ∪ S_2. Hence, because in_3 ∈ S_2 ⊆ (S_1 \ {in_2}) ∪ S_2, we have in_3 ↪ in_1.

We finally prove that the local dominance relation is acyclic.

Corollary 1 (Local Dominance is Acyclic). There exists no cycle in_1 ↪ ⋯ ↪ in_n ↪ in_1 such that in_1, ..., in_n ∈ I_search.

Proof. The proof is done by contradiction, by assuming the existence of such a cycle in_1 ↪ ⋯ ↪ in_n ↪ in_1. According to Proposition 7, we have by transitivity that in_1 ↪ in_1. Then, according to Proposition 5, we have by asymmetry that in_1 does not locally dominate in_1. Hence the contradiction.

Finally, we use the transitivity (Proposition 7 and Proposition 6) and the acyclicity (Corollary 1) of the local-dominance relation to prove that non locally-dominated inputs, as a whole, locally-dominate all the locally dominated inputs.

Theorem 2 (Local Dominance Hierarchy). For every locally-dominated input in ∈ I_search, there exists a subset S ⊆ I_search of not locally-dominated inputs such that in ⊑ S.

Proof. For each input in_0 ∈ I_search we denote LocDoms(in_0) =def {in ∈ I_search | in ↪ in_0}, the set of inputs which locally dominate in_0. Let in_0 ∈ I_search be an input. We prove by induction on LocDoms(in_0) that either in_0 is not locally dominated or there exists a subset S ⊆ I_search of not locally-dominated inputs such that in_0 ⊑ S. For the initialization, we consider LocDoms(in_0) = ∅. In that case, in_0 is not locally dominated. For the induction step, we consider LocDoms(in_0) ≠ ∅, so in_0 is a locally dominated input. Hence, there exists S_0 ⊆ I_search such that in_0 ⊑ S_0. We consider two cases. Either every input in S_0 is not locally dominated. In that case S = S_0 satisfies the inductive property. Or there exist n ≥ 1 inputs in_1, ..., in_n in S_0 which are locally dominated. The rest of the proof is done in two steps.

First, let 1 ≤ i ≤ n. We prove that LocDoms(in_i) ⊂ LocDoms(in_0), with a strict inclusion. Because in_i ∈ S_0 and in_0 ⊑ S_0, we have in_i ↪ in_0. Hence, in_i ∈ LocDoms(in_0). Let in ∈ LocDoms(in_i), so we have in ↪ in_i. So, according to Proposition 7, we have by transitivity in ↪ in_0, thus in ∈ LocDoms(in_0). Hence, LocDoms(in_i) ⊆ LocDoms(in_0). According to Corollary 1, there is no cycle, so in_i does not locally dominate in_i. Hence, in_i ∉ LocDoms(in_i). Finally, we have LocDoms(in_i) ⊆ LocDoms(in_0) and in_i ∈ LocDoms(in_0) \ LocDoms(in_i). Therefore LocDoms(in_i) ⊂ LocDoms(in_0). Because LocDoms(in_i) ⊂ LocDoms(in_0), we can apply the induction hypothesis on in_i, which is locally dominated. Therefore, for each 1 ≤ i ≤ n, there exists a subset S_i ⊆ I_search of not locally-dominated inputs such that in_i ⊑ S_i.

Second, we use these S_i to define by induction on 0 ≤ m < n the following input sets:

  I_0 = S_0
  I_{m+1} = (I_m \ {in_{m+1}}) ∪ S_{m+1}

and we prove by induction on 0 ≤ m ≤ n that in_0 ⊑ I_m and that either m < n and in_{m+1}, ..., in_n are the only locally dominated inputs in I_m, or m = n and I_m contains no locally dominated input. For the initialization, we have I_0 = S_0 and the inductive property is satisfied because in_0 ⊑ S_0 and in_1, ..., in_n are the only locally dominated inputs in S_0. For the induction step m + 1 ≤ n, we assume that in_0 ⊑ I_m

for each input in_0 ∈ C, and for each order of removal steps [in_1, ..., in_n], if in_1, ..., in_n ∉ C, then redundancy(in_0, I \ {in_1, ..., in_n}) = redundancy(in_0, I).

Proof. Comps(I) are the connected components for the redundant inputs in I, hence they are disjoint. Therefore, for each in_0 ∈ C, because in_1, ..., in_n ∉ C, we have that in_0 does not overlap with any of the in_1, ..., in_n. Therefore, according to Lemma 12, we have redundancy(in_0, I) = redundancy(in_0, I \ {in_1}) = ⋯ = redundancy(in_0, I \ {in_1, ..., in_n}).

Proposition 8 (Independent Removal Steps). Let I be an input set and let C_1, ..., C_c ∈ Comps(I) denote c connected components. If, for each 0 ≤ i ≤ c, there exist inputs in_1^i, ..., in_{n_i}^i ∈ C_i such that [in_1^i, ..., in_{n_i}^i] ∈ ValidOrders(I), then:

  [in_1^1, ..., in_{n_1}^1] + ⋯ + [in_1^c, ..., in_{n_c}^c] ∈ ValidOrders(I)
andthatin m+1,...,in n aretheonlylocallydominatedinputs 1 n1 1 nc in I . m where + denotes list concatenation. BecauseI =(I \{in })∪S andS contains m+1 m m+1 m+1 m+1 no locally-dominated input, we have that either, m+1 < n Proof. The proof is done by induction on c. and in ,...,in are the only locally dominated inputs in If c=0, then [in1,...,in1 ]+···+[inc,...,inc ]=[]∈ m+2 n 1 n1 1 nc I , or m+1=n and I contains no locally-dominated ValidOrders(I). m+1 m+1 input. We assume by induction that [in1,...,in1 ] + ··· + Moreover, because in ⊑ I , in ∈ I , and in ⊑ [inc,...,inc ]∈ValidOrders(I). 1 n1 0 m m+1 m m+1 1 nc S , then according to Proposition 6, we have in ⊑(I \ We denote ℓ this order of removal steps m+1 0 m {in })∪S . Hence in ⊑ I , which concludes the and for the sake of simplicity we also denote m+1 m+1 0 m+1 induction step. S ={in1,...,in1 ,...,inc,...,inc }. 1 n1 1 nc Therefore, in 0 ⊑ I n and I n contains no locally-dominated By induction, let C c+1 ∈ Comps(I) be another connected input. Thus, S = I n satisfies the inductive property on component and let inc 1+1, ..., inc n+ c+1 1 ∈ C c+1 such that LocDoms(in ). Hence the claim for not locally-dominated [inc+1,...,inc+1 ]∈ValidOrders(I). 0 1 nc+1 inputs. We now prove that ℓ + [inc+1,...,inc+1 ] ∈ 1 nc+1 ValidOrders(I). Thisisdonebyprovingbyinductionon0≤j ≤n that: E. Dividing the Problem c+1 In this section, we prove that redundancy updates are ℓ+[inc 1+1,...,inc j+1]∈ValidOrders(I) local (Lemma 12), that reductions on different connected and that, for each in ∈C \S , we have: components can be performed independently (Proposition 8), 0 c+1 j and finally that the gain can be independently computed on redundancy(in ,I \(S∪S ))=redundancy(in ,I \S ) 0 j 0 j each connected component (Theorem 3). The main purpose of the section is to justify we can divide our problem into where S j ={inc 1+1,...,inc j+1}. subproblems (§VII-F). We start by the locality. 
If n = 0, then ℓ + [inc+1,...,inc+1 ] = ℓ ∈ c+1 1 nc+1 ValidOrders(I). Moreover, because C is disjoint with Lemma 12 (Redundancy Updates are Local). Let I be an c+1 C ,...,C ,accordingtoLemma13,performingtheℓremoval input set and let in ,in ∈I. 1 c 1 2 steps does not change the redundancies of inputs in C i.e., If redundancy(in ,I \{in })̸=redundancy(in ,I), then c+1 2 1 2 for each in ∈ C , we have redundancy(in ,I \S) = in ⊓in . 0 c+1 0 1 2 redundancy(in ,I). 0 Proof. The proof is done by contraposition. If in and in We now assume by induction that ℓ+[inc+1,...,inc+1]∈ 1 2 1 j do not overlap, then Cover(in ) ∩ Cover(in ) = ∅. So, ValidOrders(I) and for each in ∈ C \ S , we have 1 2 0 c+1 j for every bl ∈ Cover(in ), we have in ̸∈ Inputs(bl), and redundancy(in ,I \(S∪S ))=redundancy(in ,I \S ). 2 1 0 j 0 j thus Inputs(bl)∩(I \{in }) = Inputs(bl)∩I. Therefore, Because [inc+1,...,inc+1 ] ∈ ValidOrders(I), we have 1 1 nc+1 redundancy(in 2,I \{in 1})=redundancy(in 2,I). inc j+ +1 1 ∈Redundant(I \S j). Moreover, inc j+ +1 1 ∈C c+1 hence inc+1 ̸∈ S. Therefore, using the induction hypothesis with Then, we prove the independence of removal steps per- j+1 in = inc+1, we have inc+1 ∈ Redundant(I \(S∪S )). formed on different components. 0 j+1 j+1 j Therefore, according to our definition of valid removal steps Lemma 13 (Independent Redundancies). Let I be an in- (§III-C), ℓ+[inc+1,...,inc+1]∈ValidOrders(I). 1 j+1 put set. For each connected component C ∈ Comps(I), We denote S =S ∪{inc+1}. Let in ∈C \S . j+1 j j+1 0 c+1 j+1IEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 32 To complete the induction step, we prove that: Wenowprovethatthegaincanbeindependentlycomputed oneachconnectedcomponent(Theorem3).Todoso,wehave redundancy(in ,I \(S∪S )) 0 j+1 to prove intermediate results, including Proposition 9. =redundancy(in ,I \S ) 0 j+1 Lemma 14 (Redundancy within a connected component). We remind that, for each input set X, we have: Let I be an input set. 
For each connected component C ∈ redundancy(in 0,X) Comps(I), for each input in 0 ∈ C, and for each order of =min{|Inputs(bl)∩X| | bl ∈Cover(in 0)}−1 removal steps [in 1,...,in n], if in 1,...,in n ∈C, then: We denote as critical the objectives contributing to the redun- redundancy(in ,C \{in ,...,in }) 0 1 n dancy: =redundancy(in ,I \{in ,...,in }) 0 1 n CritSubCls(in ,X) 0 Proof. We remind that, for each input set X, we have: =defargmin{|Inputs(bl)∩X| | bl ∈Cover(in )} 0 redundancy(in ,X) 0 Because in ∈ C , we know that in does not overlap 0 c+1 0 =min{|Inputs(bl)∩X| | bl ∈Cover(in )}−1 0 with inputs in S so, for each bl ∈ Cover(in ), we have 0 |
Inputs(bl)∩(I \S) = Inputs(bl)∩I. Thus, by removing For the sake of simplicity, we denote X C = C \ inputs of S j from both sides, we have Inputs(bl)∩(I \(S∪ {in 1,...,in n} and X I = I \{in 1,...,in n}. Let in 0 ∈ C S j))=Inputs(bl)∩(I \S j). Therefore: and bl ∈Cover(in 0). Because C ∈ Comps(I), we have C ⊆ I, hence X ⊆ C CritSubCls(in ,I \(S∪S ))=CritSubCls(in ,I \S ) 0 j 0 j X . So Inputs(bl)∩X ⊆ Inputs(bl)∩X . Moreover, for I C I each in ∈ Inputs(bl) ∩ X we have bl ∈ Cover(in ) ∩ We consider two cases: I 0 Cover(in), so in ⊓in, and thus in ∈X . 1) Either there exists bl ∈ CritSubCls(in ,I \S ) such 0 C 0 j that inc+1 ∈Inputs(bl). In that case: Therefore, Inputs(bl)∩X C =Inputs(bl)∩X I. j+1 Hence the result redundancy(in ,X ) = 0 C |Inputs(bl)∩(I \(S∪S j+1))| redundancy(in 0,X I). =|Inputs(bl)∩(I \(S∪S ))|−1 j Corollary 2 (Component Validity). Let I be an input set. |Inputs(bl)∩(I \S j+1)|=|Inputs(bl)∩(I \S j)|−1 For each connected component C ∈ Comps(I) and for eachorderofremovalsteps[in ,...,in ],if[in ,...,in ]∈ Because bl is critical and a redundancy can decrease at most 1 n 1 n ValidOrders(C), then [in ,...,in ]∈ValidOrders(I). by1afterareduction(Lemma4)sothecardinalitiesforother 1 n objectives cannot decrease below the previous redundancy Proof. The proof is done by induction on n. minus one, we have: For the initialization n=0, we have [in ,...,in ]=[]∈ 1 n redundancy(in ,I \(S∪S )) ValidOrders(I). 0 j+1 =redundancy(in 0,I \(S∪S j))−1 For the induction step, we consider [in 1,...,in n,in n+1]∈ ValidOrders(C) and we assume by induction that redundancy(in ,I \S )=redundancy(in ,I \S )−1 0 j+1 0 j [in ,...,in ]∈ValidOrders(I). 1 n Because [in ,...,in ,in ] ∈ ValidOrders(C), accord- 2) Or for each bl ∈ CritSubCls(in ,I \S ), we have 1 n n+1 0 j inc+1 ̸∈Inputs(bl). In that case: ing to our definition of redundant inputs and removal steps j+1 (§III-C), we have in ,...,in ∈ C. 
Hence, according to 1 n |Inputs(bl)∩(I \(S∪S j+1))| Lemma 14, we have for each in 0 ∈C: =|Inputs(bl)∩(I \(S∪S ))| j redundancy(in ,C \{in ,...,in }) 0 1 n |Inputs(bl)∩(I \S )|=|Inputs(bl)∩(I \S )| =redundancy(in ,I \{in ,...,in }) j+1 j 0 1 n Because bl is critical and a redundancy can decrease at most Thus, because C ⊆ I, according to the definition of by 1 after a reduction (Lemma 4) so the cardinalities for redundancy (§III-C) we have: non-critical objectives cannot decrease below the previous Redundant(C \{in ,...,in }) 1 n redundancy we have: ⊆Redundant(I \{in ,...,in }) 1 n redundancy(in ,I \(S∪S )) 0 j+1 Because [in ,...,in ,in ] ∈ ValidOrders(C), =redundancy(in ,I \(S∪S )) 1 n n+1 0 j we have in ∈ Redundant(C \{in ,...,in }) n+1 1 n redundancy(in ,I \S )=redundancy(in ,I \S ) by definition of removal steps (§III-C). Thus, 0 j+1 0 j in ∈Redundant(I \{in ,...,in }). n+1 1 n In both cases, by induction hypothesis By induction hypothesis [in ,...,in ]∈ValidOrders(I). redundancy(in ,I \(S∪S )) = redundancy(in ,I \S ), 1 n 0 j 0 j Therefore [in ,...,in ,in ] ∈ ValidOrders(I), which hence we have redundancy(in ,I \(S∪S )) = 1 n n+1 0 j+1 completes the induction step. redundancy(in ,I \S ). 0 j+1 This completes the induction on 0≤j ≤n . In particu- Proposition 9 (Reductions withing a connected component). c+1 lar,weprovedthatℓ+[inc+1,...,inc+1 ]∈ValidOrders(I), Let I be an input set. For each connected component C ∈ 1 nc+1 which completes the induction on c, hence the result. Comps(I)andforeachorderofremovalsteps[in ,...,in ], 1 nIEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 33 if [in ,...,in ] ∈ ValidOrders(I) and in ,...,in ∈ C, Second, we prove by contradiction that [in ,...,in ] 1 n 1 n 1 n Cj1 then [in ,...,in ]∈ValidOrders(C). has a maximal cumulative cost in C . We assume by con- 1 n j1 tradiction that there exists a valid order of removal steps Proof. The proof is done by induction on n. 
[in′,...,in′ ]∈ValidOrders(C ) such that: If n=0 then [in ,...,in ]=[]∈ValidOrders(C). 1 n′ j1 1 n (cid:88) (cid:88) Otherwise, we consider [in ,...,in ,in ] with cost(in)> cost(in) 1 n n+1 [i in n1, ,. .. .. ., ,i in nn, ]in ∈n V+ a1 li∈ dOC rdea rn sd (Cw )e . assume by induction that in∈[in′ 1,...,in′ n′] in∈[in1,...,inn]Cj1 1 n We assume [in 1,...,in n,in n+1] ∈ ValidOrders(I), so Because [in′ 1,...,in′ n′] ∈ ValidOrders(C j1), according to in n+1 |
∈Redundant(I \{in 1,...,in n}). Corollary 2 we have [in′ 1,...,in′ n′] ∈ ValidOrders(I) as well. So, according to Proposition 8, we have: According to Lemma 14 applied to inputs in ∈ C \ 0 {in ,...,in } with redundancy >0, we have: 1 n [in ,...,in ] +···+[in ,...,in ] 1 n C1 1 n Cj1−1 =Re Rdu en dud nan dat( nC t(I\{ \i {n i1 n,. ,. .. ., .i ,n in n}) }) +[in′ 1,...,in′ n′]+[in 1,...,in n] Cj1+1 +... 1 n +[in ,...,in ] ∈ValidOrders(I) 1 n Cc Hence, in ∈Redundant(C \{in ,...,in }). n+1 1 n The cumulative cost of this valid order of removal steps is Thus,because[in ,...,in ]∈ValidOrders(C),according 1 n thuslargerthanthecumulativecostof[in ,...,in ] +···+ to our definition of valid removal steps (§III-C), we have the 1 n C1 [in ,...,in ] : result [in 1,...,in n,in n+1]∈ValidOrders(C). 1 n Cc (cid:88) (cid:88) (cid:88) cost(in)+ cost(in) Finally, we conclude the section by proving the claim on Theorem 3, which is used to divide our problem into in∈[in′ 1,...,in′ n′] 1≤j≤c∧j̸=j1in∈[in1,...,inn]Cj subproblems (§VII-F). (cid:88) (cid:88) > cost(in)=gain(I) Theorem 3 (Divide the Gain). For each input set I: 1≤j≤cin∈[in1,...,inn]Cj (cid:88) which contradicts the maximality of gain(I). gain(I)= gain(C) Hence, [in ,...,in ] ∈ValidOrders(C ) has a maxi- C∈Comps(I) 1 n Cj1 j1 mal cumulative cost in C . So, according to the definition of j1 Proof. Let [in ,...,in ] ∈ ValidOrders(I) be a valid the gain (§III-C), we have for any connected component C : 1 n j1 order of removal steps such that its cumulative cost (cid:88) (cid:80) cost(in)=gain(C ) 1≤i≤ncost(in i) = gain(I) is maximal (§III-C), and let j1 Comps(I)={C 1,...,C c}denotetheconnectedcomponents, in∈[in1,...,inn]Cj1 without a particular order. 
Therefore, we have the result: Wedenote[in ,...,in ] thelargestsublist(Definition1) 1 n Cj (cid:88) (cid:88) (cid:88) of[in 1,...,in n]containingonlyinputsintheconnectedcom- gain(I)= cost(in)= gain(C j) ponentC j.Because[in 1,...,in n]∈ValidOrders(I),accord- 1≤j≤cin∈[in1,...,inn]Cj 1≤j≤c ing to Lemma 7 we have [in ,...,in ] ∈ValidOrders(I) 1 n Cj as well. So, according to Proposition 8: [in ,...,in ] +···+[in ,...,in ] ∈ValidOrders(I) F. Genetic Search 1 n C1 1 n Cc In this section, we prove desirable properties satisfied for Because the connected components form a partition of the each generation by roofers (Theorem 4) and misers (Theo- redundant inputs, the cumulative cost of this order of removal rem 5) during the genetic search (Section VIII). We start by steps is the same as [in ,...,in ]: 1 n roofers. (cid:88) (cid:88) (cid:88) cost(in)= cost(in i)=gain(I) Theorem 4 (Invariant of the Roofers). For every generation 1≤j≤cin∈[in1,...,inn]Cj 1≤i≤n n, we have: |Roofers(n)|=n size We now consider any connected component C and we j1 prove that: min{cost(I) | I ∈Roofers(n)} (cid:83) =min{cost(I) | I ∈ Roofers(m)} (cid:88) cost(in)=gain(C ) 0≤m≤n j1 in∈[in1,...,inn]Cj1 and for every I ∈Roofers(n), we have: • I is reduced (in the sense of §III-C) The proof is done in two steps. First, note that because [in ,...,in ] ∈ • Cover(I)=Coverage obj(C) 1 n Cj1 ValidOrders(I)andcontainsonlyinputsinC ,accordingto Proof. There are n individuals in the initial roofer pop- j1 size Proposition 9 we have [in ,...,in ] ∈ValidOrders(C ) ulation (§VIII-B). Moreover, the procedure detailed above to 1 n Cj1 j1 as well. updatetherooferpopulationensuresthatanoffspringcanonlyIEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 34 take the place of an existing roofer. Hence, the number of candidate. Therefore, I is present in the next generation and 4 roofers does not change over generations. I =I satisfies the lemma. 
4 In the initial population, a roofer I is always replaced Corollary 3 (Dominance across Generations). For every gen- by its reduced counterpart reduce(I) after adding an input eration n and for every I ∈ Misers(n), if there exists (§VIII-B). Moreover, after mutation (§VIII-E) each offspring 1 0 ≤ m ≤ n and I ∈ Misers(m) such that I ≻I , then I is replaced by its reduced counterpart reduce(I) before 2 2 1 there exists I ∈Misers(n) such that I ≻I . determining if it is accepted in the roofer population or 3 3 1 rejected. Hence, for each generation, each roofer is reduced. Proof. The proof is done by induction on n−m. Roofers in the initial population are built so that they cover If I ∈Misers(n) then I =I satisfies the corollary. 2 2 3 alltheobjectives(§VIII-B).Moreover,intheaboveprocedure, Otherwise, there exists a generation m < g ≤ n where an offspring can be added to the roofer population only if I was removed. In that case, according to Lemma 16, there 2 it covers all the objectives. Hence, for each generation, each exists I ∈ Misers(g) such that I ≻I . Hence, according 3 3 2 |
roofer covers all the objectives. to Lemma 15, we have by transitivity that I ≻I . Finally, 3 1 Finally, in the above procedure one can remove an indi- becausen−g <n−m,wehavetheresultusingtheinduction vidual from the roofer population only if a less or equally hypothesis on I 3. costly roofer is found. Hence, the minimal cost amongst the We finally prove, as expected, the properties satisfied by rooferscanonlyremainthesameordecreaseovergenerations. misers on each generation. Therefore, the minimal cost in the last generation is the minimal cost encountered so far during the search. Theorem 5 (InvariantoftheMisers). Foreverygenerationn, for every I ∈Misers(n), we have: 1 To ease the proof for misers (Theorem 5), we first prove in Lemma16thatamiserI canberemovedbetweengeneration • I 1 is reduced (in the sense of §III-C) n and generation n+1 o0 nly by a dominating miser I. Then, • Cover(I)⊂Coverage (cid:83)obj(C) (the inclusion is strict) we prove in Corollary 3 that being dominated is carried from • ThereexistsnoI 2 ∈ Misers(m)suchthatI 2≻I 1. 0≤m≤n generation to generation. Proof. The initial miser population is empty (§VIII-B), hence Lemma 15 (Transitivity of Pareto Dominance). The ≻ rela- the properties trivially hold. tion (§II-E) is transitive i.e., for every input sets I ,I ,I , if After mutation (§VIII-E) each offspring I is replaced by 1 2 3 I ≻I and I ≻I , then I ≻I . its reduced counterpart reduce(I) before determining if it is 1 2 2 3 1 3 accepted in the miser population or rejected. Hence, for each Proof. For each 0 ≤ i ≤ n, we have f (I ) ≤ f (I ) and i 1 i 2 generation, each miser is reduced. f (I ) ≤ f (I ), so f (I ) ≤ f (I ). Moreover, there exists i 2 i 3 i 1 i 3 In the miser population update (§VIII-F), an offspring can 0 ≤ i ≤ n such that f (I ) < f (I ) ≤ f (I ) and there 1 i1 1 i1 2 i1 3 beacandidatetothemiserpopulationonlyifitdoesnotcover exists 0 ≤ i ≤ n such that f (I ) ≤ f (I ) < f (I ). 2 i2 1 i2 2 i2 3 all the objectives. 
i and i can be the same or distinct. In any case, we have 1 2 Finally, the last property is proved for a miser I ∈ 1 f (I )<f (I ) and f (I )<f (I ). i1 1 i1 3 i2 1 i2 3 Misers(n). Let n′ be the first generation when I 1 was ac- Lemma 16. For every generation n, if I 0 ∈ Misers(n) \ cepted, so we have n′ ≤n and I 1 ∈Misers(n′). Misers(n+1),thenthereexistsI ∈Misers(n+1)suchthat The proof is done by contradiction, assuming that there I ≻I 0, where ≻ is the domination relation from (§II-E). existed a generation 0≤m≤n and a miser I 2 ∈Misers(m) such that I ≻I . Let m′ be the first generation when I was 2 1 2 Proof. In the miser population update (§VIII-F), I 0 can be accepted, so we have m′ ≤ m and I 2 ∈ Misers(m′). We removed from the miser population only if there exists a consider two cases. miser candidate I 1 such that I 1≻I 0. If I 1 is accepted in the If m′ ≤ n′ then, according to Corollary 3, there exists population then I =I 1 satisfies the lemma. I 3 ∈Misers(n′) such that I 3≻I 1. This contradicts the above Otherwise, I 1 is rejected only because there exists a miser procedure,becauseifI 1wasdominateditwouldnothavebeen I 2 such that I 2≻I 1. Hence, according to Lemma 15, we have accepted in Misers(n′). by transitivity that I 2≻I 0. Note that there is at most two If m′ > n′ then, because I 2≻I 1, according to the above miser candidates per generation. If either I 1 is the only miser procedure I 1 is removed so I 1 ̸∈Misers(m′). But m′ ≤m≤ candidate or there exists a second miser candidate I 3 which n and I 1 ∈ Misers(n), so I 1 was added between m′ and n. doesnotdominateI 2,thenI 2 ispresentinthenextgeneration Let g be the first generation when this occurred. and I =I 2 satisfies the lemma. In that case we have I 1 ∈ Misers(g), 0 ≤ m′ ≤ g, I 2 ∈ Otherwise, there exists a second candidate I 3 such that Misers(m′), and I 2≻I 1. So, according to Corollary 3, there I 3≻I 2. 
Hence, according to Lemma 15, we have by tran- exists I 3 ∈Misers(g) such that I 3≻I 1, which contradicts the sitivity that I 3≻I 0. If I 3 is accepted in the population then fact that I 1 was accepted in Misers(g). I =I satisfies the lemma. 3 Otherwise, I is rejected only because there exists a miser 3 I such that I ≻I . Hence, according to Lemma 15, we have 4 4 3 bytransitivitythatI ≻I .Becausethereisatmosttwomiser 4 0 candidates per generation, I cannot be removed by another 4IEEETRANSACTIONSONSOFTWAREENGINEERING,VOL.XX,NO.XX,MONTH2024 35 REFERENCES [22] M. Zhang, S. Ali, and T. Yue, “Uncertainty-wise test case generation and minimization for cyber-physical systems,” Journal of Systems [1] P. X. Mai, F. Pastore, A. Goknil, and L. C. Briand, “A natural and Software, vol. 153, pp. 1–21, 2019. [Online]. Available: language programming approach for requirements-based security https://www.sciencedirect.com/science/article/pii/S0164121219300561 testing,” 2018 IEEE 29th International Symposium on Software [23] B. Li, J. Li, K. Tang, and X. Yao, “Many-objective evolutionary Reliability Engineering (ISSRE), pp. 58–69, 2018. [Online]. Available: algorithms: A survey,” ACM Comput. Surv., vol. 48, no. 1, sep 2015. https://api.semanticscholar.org/CorpusID:53711718 [Online].Available:https://doi.org/10.1145/2792984 |
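The proofs above rely on three facts about the subset-dominance relation in ⊑ S: S covers everything in covers (Cover(in) ⊆ Cover(S)), S costs no more (cost(in) ≥ cost(S)), and in ∉ S (Proposition 4). The sketch below encodes exactly those three facts in Python; it is our own illustrative encoding, not a definition taken from the paper, and all names (`set_dominates`, `cover`, `cost`) are ours.

```python
def set_dominates(inp, S, cover, cost):
    """in ⊑ S, encoded from the three facts used in the proofs above:
    S covers Cover(in), S costs no more than in, and in is not in S."""
    covered = set().union(*(cover[i] for i in S)) if S else set()
    return (inp not in S
            and cover[inp] <= covered
            and cost[inp] >= sum(cost[i] for i in S))

# Toy data: input "a" is expensive and fully covered by the cheap pair {b, c}.
cover = {"a": {"bl1", "bl2"}, "b": {"bl1"}, "c": {"bl2"}}
cost = {"a": 5, "b": 1, "c": 1}

assert set_dominates("a", {"b", "c"}, cover, cost)   # a ⊑ {b, c}
assert not set_dominates("b", {"a"}, cover, cost)    # {a} is costlier than b
```

Under Definition 2, in_1 then locally-dominates in_2 whenever some S containing in_1 makes `set_dominates(in_2, S, ...)` true.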
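The redundancy measure used throughout Section E, redundancy(in, X) = min{|Inputs(bl) ∩ X| | bl ∈ Cover(in)} − 1, and the locality property of Lemma 12 can be checked on a toy example. This is a minimal sketch of ours, not the paper's implementation; `cover` maps each input to the objectives it covers, and Inputs(bl) is recovered by scanning `cover`.

```python
def redundancy(inp, X, cover):
    # min over bl in Cover(inp) of |Inputs(bl) ∩ X|, minus one
    return min(sum(1 for i in X if bl in cover[i]) for bl in cover[inp]) - 1

def overlaps(a, b, cover):
    # a ⊓ b: the two inputs share at least one covered objective
    return bool(cover[a] & cover[b])

cover = {"in1": {"bl1", "bl2"}, "in2": {"bl2"}, "in3": {"bl3"}}
I = set(cover)

# Lemma 12, contrapositive form: removing in3, which does not overlap in1,
# leaves the redundancy of in1 unchanged.
assert not overlaps("in1", "in3", cover)
assert redundancy("in1", I - {"in3"}, cover) == redundancy("in1", I, cover)
```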
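Theorem 3 legitimizes solving each connected component separately. A minimal sketch of computing Comps(I) as the connected components of the overlap graph (two inputs are linked when their covered objectives intersect); this is our own illustrative code with hypothetical names, not the AIM prototype.

```python
def components(inputs, cover):
    """Partition `inputs` into connected components of the overlap graph."""
    remaining = set(inputs)
    comps = []
    while remaining:
        seed = remaining.pop()
        comp, frontier = {seed}, [seed]
        while frontier:
            cur = frontier.pop()
            # inputs not yet assigned that share an objective with `cur`
            linked = {i for i in remaining if cover[i] & cover[cur]}
            remaining -= linked
            comp |= linked
            frontier.extend(linked)
        comps.append(comp)
    return comps

cover = {"in1": {"bl1", "bl2"}, "in2": {"bl2"}, "in3": {"bl3"}}
comps = components(cover, cover)
assert sorted(sorted(c) for c in comps) == [["in1", "in2"], ["in3"]]
```

Per Theorem 3, the overall gain is the sum of the gains computed per component, so a minimizer can process each component independently and concatenate the resulting removal orders (Proposition 8).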
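Lemma 15 concerns the usual Pareto dominance on objective vectors under minimization: I_1 ≻ I_2 when no objective of I_1 is worse and at least one is strictly better. A self-contained sketch (ours, not the paper's code) with a concrete transitivity instance:

```python
def dominates(f_a, f_b):
    """f_a ≻ f_b: no objective worse, at least one strictly better
    (all objectives are minimized)."""
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))

f1, f2, f3 = (1, 2), (2, 2), (3, 3)
assert dominates(f1, f2) and dominates(f2, f3)
assert dominates(f1, f3)       # transitivity instance (Lemma 15)
assert not dominates(f2, f1)   # dominance is also asymmetric
```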
1397–1421,Dec2021. Available:https://doi-org.proxy.bnl.lu/10.1145/2025113.2025179 [59] A. C. Barus, T. Y. Chen, F.-C. Kuo, H. Liu, and H. W. Schmidt, [75] C.-a.Sun,B.Liu,A.Fu,Y.Liu,andH.Liu,“Path-directedsourcetest “The impact of source test case selection on the effectiveness case generation and prioritization in metamorphic testing,” Journal of of metamorphic testing,” in Proceedings of the 1st International Systems and Software, vol. 183, p. 111091, 2022. [Online]. Available: Workshop on Metamorphic Testing, ser. MET ’16. New York, NY, https://www.sciencedirect.com/science/article/pii/S0164121221001886 USA: Association for Computing Machinery, 2016, p. 5–11. [Online]. [76] C.-A. Sun, H. Dai, H. Liu, and T. Y. Chen, “Feedback-directed Available:https://doi.org/10.1145/2896971.2896977 metamorphic testing,” ACM Trans. Softw. Eng. Methodol., vol. 32, [60] A. Arcuri and L. Briand, “A hitchhiker’s guide to statistical tests no.1,feb2023.[Online].Available:https://doi.org/10.1145/3533314 for assessing randomized algorithms in software engineering,” Softw. [77] M.J.Hall,TheTheoryofGroups. USA:MacMillan,1959. Test. Verif. Reliab., vol. 24, no. 3, p. 219–250, may 2014. [Online]. Available:https://doi-org.proxy.bnl.lu/10.1002/stvr.1486 [61] A. Vargha and H. D. Delaney, “A critique and improvement of the cl common language effect size statistics of mcgraw and wong,” Journal of Educational and Behavioral Statistics, vol. 25, no. 2, pp. 101–132, 2000.[Online].Available:https://doi.org/10.3102/10769986025002101 [62] B. Kitchenham, L. Madeyski, D. Budgen, J. Keung, P. Brereton, S. Charters, S. Gibbs, and A. Pohthong, “Robust statistical methods for empirical software engineering,” Empirical Softw. Engg., vol. 22, no. 2, p. 579–630, apr 2017. [Online]. Available: https://doi.org/10. 1007/s10664-016-9437-5 [63] C. Wohlin, P. Runeson, M. Hst, M. C. Ohlsson, B. Regnell, and A. Wessln, Experimentation in Software Engineering. 
Heidelberg, Germany:SpringerPublishingCompany,Incorporated,2012. [64] T. Y. Chen, P.-L. Poon, and X. Xie, “Metric: Metamorphic relation identification based on the category-choice framework,” Journal of SystemsandSoftware,vol.116,pp.177–190,2016.[Online].Available: https://www.sciencedirect.com/science/article/pii/S0164121215001624 |
arXiv:2402.12023

Evaluation of ChatGPT's Smart Contract Auditing Capabilities Based on Chain of Thought

Yuying Du (Salus Security)    Xueyan Tang (Salus Security)∗

February 20, 2024

Abstract

Smart contracts, as a key component of blockchain technology, play a crucial role in ensuring the automation of transactions and adherence to protocol rules. However, smart contracts are susceptible to security vulnerabilities, which, if exploited, can lead to significant asset losses. In recent years, numerous incidents of smart contract attacks have highlighted the importance of smart contract security audits. This study explores the potential of enhancing smart contract security audits using the GPT-4 model. We utilized a dataset of 35 smart contracts from the SolidiFI-benchmark vulnerability library, containing 732 vulnerabilities, and compared GPT-4 with five other vulnerability detection tools to evaluate its ability to identify seven common types of vulnerabilities. Moreover, we assessed GPT-4's performance in code parsing and vulnerability capture by simulating a professional auditor's auditing process, using CoT (Chain of Thought) prompts based on the audit reports of eight groups of smart contracts. We also evaluated GPT-4's ability to write Solidity Proofs of Concept (PoCs). Through experimentation, we found that GPT-4 performed poorly in detecting smart contract vulnerabilities: its Precision was high at 96.6%, but its Recall was a low 37.8%, with an F1-score of 41.1%, indicating a tendency to miss vulnerabilities during detection. Meanwhile, it demonstrated good contract code parsing capabilities, with an average comprehensive score of 6.5, capable of identifying the background information and functional relationships of smart contracts; in 60% of the cases, it could write usable PoCs, suggesting GPT-4 has significant potential application in PoC writing.
These experimental results indicate that GPT-4 lacks the ability to detect smart contract vulnerabilities effectively, but its performance in contract code parsing and PoC writing demonstrates its significant potential as an auxiliary tool for enhancing the efficiency and effectiveness of smart contract security audits.

Keywords: Blockchain, Smart Contract, ChatGPT, Vulnerabilities

∗Corresponding author. Email: 77728@gmail.com

1 Introduction

Smart contracts are automated programs designed to execute the actions stipulated in agreements or contracts automatically. Once completed, transactions are traceable and irreversible. They focus on setting and deploying rules on the network to complete transactions. Ethereum is a blockchain platform that enables developers to create and deploy smart contracts Wood et al. (2014). Essentially, smart contracts are built on algorithmic and programming choices to ensure correct execution He et al. (2020). Due to their code-based and immutable deployment nature, errors in the code can lead to vulnerabilities in smart contracts, making them susceptible to attacks and nearly impossible to modify or remove after deployment Sayeed et al. (2020). Vulnerable smart contracts can often lead to more severe issues. For instance, in February 2022, Wormhole was robbed of $325 million when a hacker exploited the smart contract on the SOL-ETH bridge to cash out without depositing any collateral sabocrypto (2022). In March 2023, Euler Finance lost $197 million due to a smart contract vulnerability BUTT (2023). On July 30, 2023, Curve suffered a loss of about $70 million due to vulnerabilities in the Vyper language it used, highlighting the necessity of comprehensive smart contract audits TEAM (2023). These incidents of smart contract attacks demonstrate the unpredictable financial losses that can occur if smart contracts, which control a significant amount of cryptocurrency and financial assets, are targeted and compromised.
Smart contract security auditing De Andrés and Lorca (2021) is the process of assessing the security and reliability of smart contracts. Smart contract audits involve a detailed analysis of the contract code to identify security issues and incorrect or inefficient coding, and to determine solutions for these problems. Conducting thorough audits helps minimize the risks and potential losses that could result from security vulnerabilities after deploying smart contracts. In practice, smart contracts often present complex transaction scenarios or very intricate execution logic, significantly increasing the burden of auditing work Feng et al. (2019). As of 2022, approximately 86 smart contract vulnerability analysis tools, including both static and dynamic analysis tools, had been developed Kushwaha et al. (2022), and the number of related analysis tools continues to grow.

In recent years, Large Language Models (LLMs) Chang et al. (2023) have been in a phase of rapid iteration and development, and their outstanding performance in generation and analysis tasks has made them a transformative force in various fields. The current state-of-the-art LLM, the Generative Pre-trained Transformer (GPT) Wu et al. (2023a), has evolved to GPT-4 Egli (2023), showcasing exceptional performance in text analysis and generation. In the context of smart contract audits, GPT-4 has the potential to serve as a powerful analytical tool. To better leverage GPT-4 in smart contract auditing tasks, it is worth assessing its capability in analyzing smart contracts and detecting vulnerabilities.

This paper evaluates the capacity of GPT-4 in the domain of smart contract audits; the knowledge cut-off of the GPT-4 used in this experiment is April 2023. Our main contributions are as follows:

1. To assess GPT-4's capability in detecting smart contract vulnerabilities, we chose the SolidiFI-
benchmark Ghaleb and Pattabiraman (2020) vulnerability library as our dataset, selecting 35 smart contracts for testing, which collectively contain 732 injected vulnerabilities, to evaluate GPT-4's detection performance across 7 common types of vulnerabilities.

2. To simulate real-world audit scenarios, we conducted experiments based on audit reports of 8 sets of smart contracts, which revealed a total of 60 vulnerabilities covering 18 types of audit vulnerabilities. We designed a prompt based on CoT (Chain of Thought) reasoning Wei et al. (2022). The goal of this prompt is to require GPT-4 to mimic the way a professional auditor would audit a smart contract, to assess GPT-4's code parsing capabilities and its ability to identify vulnerabilities.

3. To evaluate GPT-4's capability in identifying and verifying security vulnerabilities, we also selected 10 smart contracts and adopted a two-prompt experimental design to test GPT-4's ability to write PoCs (Proofs of Concept) in Solidity.

This paper primarily focuses on GPT-4's ability in vulnerability detection, code parsing, and PoC writing for smart contracts. The details of our focus are summarized in Table 1 below:

Evaluation Aspect         Description
Vulnerability Detection   Evaluate the efficiency and accuracy of GPT-4 in detecting vulnerabilities in smart contracts. This involves analyzing its capability to identify security issues such as reentrancy attacks, overflow, etc.
Code Parsing Capability   Test whether GPT-4 can accurately understand and analyze the business logic, context, and code structure of smart contracts. This includes understanding the functionality of the contract, the relationships between function calls, etc.
PoC Writing Ability       Examine GPT-4's ability to write PoCs, assessing its capability to recognize and validate potential security vulnerabilities.
Table 1: GPT-4 evaluation aspects

Through these experiments, we can more comprehensively assess the role of GPT-4 in auditing tasks, which helps us better utilize GPT-4 to assist auditors in their work and improve efficiency. This not only has the potential to reduce manpower costs in the auditing process but also to enhance the quality and accuracy of audits. More importantly, such evaluations help us understand and develop the application of artificial intelligence in the field of smart contract security, providing references for technological development and security strategy formulation in the field of cybersecurity. For your convenience, our experimental data is available here: https://github.com/Mirror-Tang/Evaluation-of-ChatGPT-s-Smart-Contract-Auditing-Capabilities-Based-on-Chain-of-Thought/tree/master

2 Related Work

In this section, we introduce related work in smart contract audits, mainly focusing on vulnerability detection methods, smart contract audit work based on GPT-4, and Proofs of Concept.

2.1 Vulnerability detection methods

Smart contract auditing De Andrés and Lorca (2021) involves a detailed analysis of a protocol's smart contract code to identify security vulnerabilities, poor coding practices, and inefficient code, followed by proposing solutions to these issues. During the smart contract auditing process, automated tools are also used. Several static auditing tools based on Solidity code have achieved significant performance in detecting vulnerabilities in smart contracts. Slither Feist et al. (2019) uses the Abstract Syntax Tree (AST) to analyze Solidity code, thereby detecting vulnerabilities and code flaws in smart contracts. SmartCheck Tikhomirov et al. (2018) improves contract security by setting various rules to detect different types of vulnerabilities. Securify is a security scanner for Ethereum smart contracts supported by the Ethereum Foundation and ChainSecurity, supporting 37 types of vulnerabilities Securify (2023).
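The rule-based style of static checking used by tools such as SmartCheck can be illustrated with a toy example. The sketch below is our own simplification, not any tool's actual rule set: the rule names and regular-expression patterns are illustrative assumptions. It scans Solidity source text line by line for patterns associated with three of the vulnerability classes discussed in this paper.

```python
import re

# Toy rules mapping a vulnerability class to a textual pattern
# (illustrative assumptions, not SmartCheck's real rules).
RULES = {
    "tx.origin": re.compile(r"\btx\.origin\b"),                  # authentication via tx.origin
    "timestamp-dependency": re.compile(r"\bblock\.timestamp\b"),  # logic depending on block time
    "unchecked-send": re.compile(r"\.send\("),                    # send() whose result may be ignored
}

def scan(source: str):
    """Return (line number, rule name) pairs for every pattern match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for bug, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, bug))
    return findings
```

Real static analyzers work on the AST or on bytecode rather than raw text, which is what allows them to suppress false positives such as matches inside comments; purely textual matching like the above is why rule-based tools tend toward high false-positive rates.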
Symbolic execution tools have also demonstrated significant performance in smart contract auditing. Mythril Ruskin (1980) combines symbolic execution, taint analysis, and control flow analysis to review the security of Ethereum smart contracts. This approach allows Mythril to analyze not only Solidity source code but also compiled EVM bytecode. Manticore Manticore (2023) is a symbolic execution tool used for analyzing smart contracts and binary files. Oyente Yu (2023) is a symbolic-execution-based smart contract analysis tool that supports the detection of reentrancy, unhandled exceptions, transaction-ordering dependencies, and timestamp dependencies.

Although static analysis tools and symbolic execution tools provide some assistance in smart contract auditing work, this help is limited Zhang et al. (2023). These tools may not accurately identify all potential vulnerabilities and security risks, especially those involving complex business logic or specific blockchain environments. Static analysis tools can comprehensively check the code but often produce a high false-positive rate and struggle to fully inspect complex business logic Hicken (2023). Symbolic execution tools, while testing contract behavior through program analysis, may not cover all execution paths and are resource-intensive, especially when dealing with complex and advanced security issues Kim and Ryu (2020).

2.2 GPT-4-based Smart Contract Audit

ChatGPT is a chatbot based on a large language model, developed by the OpenAI team (2023). The core of ChatGPT is Artificial Intelligence Generated Content (AIGC) tasks, and the critical technologies underlying ChatGPT are pre-trained language models, in-context learning, and Reinforcement Learning from Human Feedback Wu et al. (2023b). ChatGPT is an extension of the GPT series, which mainly includes
GPT-1, GPT-2, GPT-3, GPT-3.5, and GPT-4. The most widely used model currently is GPT-4. Pre-training data size and learning targets vary for each GPT model. In terms of pre-training data size, GPT-1 was trained on around 5 GB of data, and GPT-2 on 40 GB. GPT-3 reached 45 TB of data in the pre-training phase, while the data size used to train GPT-4 has not been published. As for learning targets, unsupervised learning, multi-task learning, in-context learning, and multi-modal learning are implemented respectively in GPT-1, GPT-2, GPT-3, and GPT-4 Wu et al. (2023b). Advances in learning objectives and increasing amounts of training data make the newly released GPT model, GPT-4, powerful.

The latest GPT model, GPT-4, performs well on text analysis and generation tasks, showing promising potential for smart contract auditing. Multiple researchers have evaluated the capabilities of GPT in smart contract auditing. David et al. (2023) investigate the accuracy of GPT-4 in detecting vulnerabilities, achieving a 78.7% true positive rate. Sun et al. (2023) propose GPTScan for smart contract logic vulnerability detection, with precision over 90%. Hu et al. (2023) provide a systematic analysis of GPT-4 and propose an adversarial framework called GPTLens for detecting a broad spectrum of vulnerabilities. Chen et al. (2023) present an empirical study of ChatGPT's performance in vulnerability detection. In our experiments, we expanded the scope of evaluation, mainly assessing GPT-4's (with a knowledge cut-off of April 2023) ability to detect vulnerabilities in smart contracts, parse code, and write PoCs. This evaluation framework is more representative and comprehensive.

2.3 Proof of Concept

PoC vulnerability exploitation is a harmless attack on a computer or network.
A PoC demonstrates the feasibility and viability of an idea; in the context of smart contract auditing, it is used to verify the effectiveness of vulnerabilities Defihacklabs (2023b). Writing PoCs for smart contracts is a key step in verifying their functionality, security, and performance. Through PoCs, developers can identify and verify potential security vulnerabilities in a controlled environment, and evaluate the performance of the contract in processing transactions, including gas fees and execution efficiency. PoCs are also used to verify the accuracy and consistency of contract logic, test interoperability with other contracts or external data sources, and more.

In terms of writing PoCs for smart contracts, Wei and other researchers have proposed a model called SemFuzz You et al. (2017), a semantics-based system for automatically generating PoC exploits. SemFuzz uses software error reports to automatically generate PoCs, which helps identify and fix security vulnerabilities more effectively. Tools like Foundry Foundry (2023b) and Hardhat hardhat (2023) are widely used. Foundry provides a guide for writing PoC tests Foundry (2023a), while Hardhat is also used for developing PoCs to verify vulnerabilities in smart contracts Immunefi (2021). The use of these tools demonstrates the importance and effectiveness of automation tools in the field of smart contract security. DefiHackLabs Defihacklabs (2023a) provides a set of guidelines for audit steps Defihacklabs (2023b), an important part of which concerns writing PoCs. This indicates that writing PoCs is not only a crucial step in identifying and verifying potential vulnerabilities during the smart contract audit process, but also a key aspect of understanding and analyzing contract security.

3 Methodology

3.1 Prompt Design

According to experiments on smart contracts conducted by Sun et al. (2023), Hu et al. (2023), and others using GPT, the effectiveness of prompts significantly affects GPT-4's responses.
Therefore, for the experiments in this paper, we designed clear and effective prompts, incorporating role definition, CoT, requirement specifications, and response format in our prompt design.

3.1.1 Vulnerability Detection in the SolidiFI-benchmark Dataset

This paper tests GPT-4's capability to identify vulnerabilities in smart contracts, selecting 35 smart contracts with artificially injected vulnerabilities from the SolidiFI-benchmark dataset, covering 7 common types of vulnerabilities: overflow/underflow, reentrancy, TOD, timestamp dependency, unchecked send, unhandled exceptions, and tx.origin vulnerabilities. We defined prompts as scenarios that fulfill specific roles and needs. As shown in Figure 1, we provided clear role definitions, prior knowledge, and response formats in our prompts. We tasked GPT-4 with acting as a smart contract auditor, identifying the seven types of vulnerabilities present in the contracts we provided, and specifying their exact locations.

Figure 1: Vulnerability detection prompt

3.1.2 Smart Contract Code Analysis

We examined vulnerabilities in 8 sets of audit reports, covering 18 types of smart contract vulnerabilities, including Business Logic, Access Control, Data Validation, Numerics, Reentrancy, Cryptography, Denial of Service, Upgradeable, and others.

To better assess GPT-4's capabilities in detecting vulnerabilities and parsing code in smart contracts, we evaluated its detection results against audit reports as a benchmark. We designed a prompt based on the CoT approach, which requires GPT-4 to mimic the manner in which auditors audit smart contracts. The entire audit process involves thoroughly understanding the background information and objectives of the smart contract, as well as conducting a comprehensive analysis of the logic behind its
functions and call relationships. Based on this procedure, we further requested GPT-4 to identify potential vulnerabilities.

Figure 2: Smart contract code detection prompt

The prompt contains three critical thinking steps, corresponding to the critical procedures of a manual audit described above. We combine role definition, chain of thought, requirement specification, and response format in the prompt design (Figure 2). We also excluded certain functions that did not require specific explanations, such as ERC20, ERC721, and ERC777 functions, as well as information on function variable naming conventions, because such content is assumed to be well known to auditors and does not require explanation.

3.1.3 PoC Quality Assessment

To evaluate GPT-4's ability to write PoC code, we designed two types of prompts. The first type (Figure 3) asks GPT-4 to first detect vulnerabilities and then write a PoC for the given smart contract. The second type (Figure 4) provides a vulnerability hint and requests GPT-4 to write a PoC to verify the vulnerability. This was done to test the usability of PoCs produced by GPT-4 under two different scenarios: on one hand, to assess GPT-4's vulnerability detection capabilities, and on the other hand, to compare whether providing sufficient hints leads to more effective PoC writing, thereby assisting auditors in their work.

Figure 3: The first type of PoC prompt

Figure 4: The second type of PoC prompt

We assess GPT-4's responses to the two types of prompts along two dimensions: the accuracy of vulnerability detection and the usability of the PoCs. The usability of PoCs primarily concerns whether the logic of the PoC code can verify the discovered vulnerabilities, and whether the generated PoC is understandable, implementable, and runnable in an experimental environment. This includes examining the quality of the PoC code, the presence of any potentially misleading elements, the structure of the PoC code, the quality of comments, and the ease of implementation.
3.2 Evaluation System Design

To provide a substantive evaluation of overall capabilities, we created an assessment system. This system includes evaluation mechanisms along several dimensions, aimed at assessing GPT-4's performance in different aspects of smart contract auditing: vulnerability detection, code parsing ability, and the capability to write PoCs.

We used Precision, Recall, and Accuracy to evaluate GPT-4's vulnerability detection capabilities. The formulas are as follows:

Precision, also known as the positive predictive value, measures the proportion of correctly predicted true positives (TP) among all instances predicted as positive. It focuses on the accuracy of positive predictions.

    Precision = TP / (TP + FP)    (1)

Recall, also known as sensitivity or the true positive rate, calculates the proportion of correctly predicted TP out of all actual positive instances. It focuses on the model's ability to identify all positive instances.

    Recall = TP / (TP + FN)    (2)

Accuracy: for a given test dataset, the ratio of the number of samples correctly classified by the classifier to the total number of samples.

    Accuracy = (TP + TN) / (TP + FP + FN + TN)    (3)

F1-score: the harmonic mean of precision and recall.

    F1-score = (2 × Precision × Recall) / (Precision + Recall)    (4)

For evaluating GPT-4's code parsing capabilities for smart contracts, we designed an assessment system with 3 metrics. Based on the content of the audit reports combined with the detection results, we assigned a score (0-10) to each metric. As illustrated in Figure 5, Metric1 assesses GPT-4's ability to understand the background of smart contracts, Metric2 evaluates its understanding of the relationships between smart contract functions, and Metric3 represents the comprehensive understanding ability, computed as the average of Metric1 and Metric2.
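Formulas (1)-(4) can be written directly as code. The following helper functions are our own sketch (not part of the paper's tooling) and compute the four metrics from raw confusion-matrix counts:

```python
def precision(tp: int, fp: int) -> float:
    # Formula (1): share of predicted positives that are correct
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    # Formula (2): share of actual positives that were found
    return tp / (tp + fn)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    # Formula (3): share of all samples classified correctly
    return (tp + tn) / (tp + fp + fn + tn)

def f1_score(p: float, r: float) -> float:
    # Formula (4): harmonic mean of precision and recall
    return 2 * p * r / (p + r)
```

For instance, a hypothetical detector reporting 30 true positives, 1 false positive, and 49 false negatives has precision 30/31 ≈ 0.97 but recall 30/79 ≈ 0.38: the same high-precision, low-recall pattern reported for GPT-4 in Section 4.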
Figure 5: Metrics specification

4 Results

4.1 Vulnerability Detection in the SolidiFI-benchmark Dataset

For the test results on the SolidiFI-benchmark dataset, we use Precision, Recall, and F1-score to evaluate the detection outcomes for each type of vulnerability, as illustrated in Figure 6.

Figure 6: Vulnerability detection results

The results in Figure 6 indicate that GPT-4 achieves a high level of precision in detecting vulnerabilities in the SolidiFI-benchmark dataset, with 100% precision for vulnerabilities such as TOD, Timestamp-dependency, Unhandled-Exceptions, and Tx.origin. However, the recall rates are relatively lower, with Overflow-underflow, Timestamp-dependency, and Unchecked-send having recall rates below 12%. Overall, the detection of the Tx.origin vulnerability is the most effective. The F1-scores vary significantly due to the large differences between Precision and Recall, ranging from as high as 100% to as low as 19.8%. This indicates that GPT-4's effectiveness in detecting smart contract vulnerabilities is inconsistent.

Metrics   Precision   Recall   F1-score
Score     96.6%       37.8%    41.1%

Table 2: Vulnerability detection results

Table 2 shows the overall detection results for the 7 types of vulnerabilities, with a total Precision of 96.6% and a Recall of 37.8%. The high overall Precision and low Recall imply that most of the positives identified by GPT-4 are correct, but many actual positives were not identified, indicating that some vulnerabilities could not be detected by GPT-4. The significant gap between Precision and Recall, possibly due to insufficient coverage of smart contract vulnerabilities in GPT-4's training, also results in a low F1-score.
Table 3 presents the detection results of several security vulnerability detection tools for the 7 types of vulnerabilities.

Table 3: Comparison of detection tools

Security bugs   Overflow-underflow   Re-entrancy   TOD   Timestamp-dependency   Unchecked-send   Unhandled-Exceptions   Tx.origin
Injected bugs   104                  104           105   109                    100              105                    105
Manticore       15                   25            NA    NA                     NA               NA                     NA
Mythril         27                   29            NA    59                     90               67                     95
Securify        NA                   94            95    NA                     90               64                     NA
Smartcheck      21                   0             NA    33                     NA               2                      9
Slither         NA                   104           NA    NA                     NA               72                     105
GPT-4           12                   47            39    12                     11               45                     105

The values in the table represent the number of vulnerabilities detected by each tool, while "NA" indicates that the vulnerability is beyond the detection scope of a particular tool. From these data, it is evident that GPT-4 excels in detecting tx.origin vulnerabilities, correctly identifying all 105 injected vulnerabilities in the contracts. However, in other categories, such as Overflow-underflow, TOD, Timestamp-dependency, and Unchecked-send vulnerabilities, GPT-4 detected the fewest correct vulnerabilities compared to other tools, with only 11 detected in the Unchecked-send category. These findings suggest that GPT-4 does not stand out in comparison to other vulnerability detection tools.

4.2 Smart Contract Code Analysis

For the smart contract source code in the audit reports, we employed the prompt shown in Figure 2. The level of understanding significantly impacts GPT-4's auditing capabilities. Therefore, to guide GPT-4 to fully comprehend the internal logic and background information of the smart contract's functionality, we pre-defined a fine-grained process in the prompt based on CoT. This process encourages GPT-4 to understand and analyze in the manner of a human auditor. With a thorough understanding of the smart contract's background and function calls, we further requested GPT-4 to detect vulnerabilities within the smart contract.
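To make the per-category comparison in Table 3 concrete, the snippet below (a small analysis script of our own, using the counts from Table 3) converts GPT-4's detection counts into detection rates relative to the number of injected bugs:

```python
# Injected bug counts and GPT-4 detections per category, taken from Table 3
injected = {"Overflow-underflow": 104, "Re-entrancy": 104, "TOD": 105,
            "Timestamp-dependency": 109, "Unchecked-send": 100,
            "Unhandled-Exceptions": 105, "Tx.origin": 105}
gpt4_detected = {"Overflow-underflow": 12, "Re-entrancy": 47, "TOD": 39,
                 "Timestamp-dependency": 12, "Unchecked-send": 11,
                 "Unhandled-Exceptions": 45, "Tx.origin": 105}

# Detection rate = detected / injected for each vulnerability category
rates = {cat: gpt4_detected[cat] / injected[cat] for cat in injected}
```

The rates span a wide range: Tx.origin is found perfectly (rate 1.0), while Unchecked-send sits at 0.11 and Overflow-underflow below 0.12, mirroring the inconsistency discussed above.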
Figure 7: Smart contract code parsing ability test results

We evaluated the results based on the 8 sets of smart contract audit reports, as shown in Figure 7, where the numbers represent the sets of contracts. From the results, we can see that GPT-4 has a good ability to recognize basic information such as the background and function calls of the eight sets of smart contracts. For Report 6, GPT-4 scored 9 points. Except for Report 2, all evaluation results were above 5 points, indicating that GPT-4 has a certain ability to recognize the background and functionality of smart contracts. The average comprehensive score for GPT-4's audits was 6.5 points. However, there were also cases of poor recognition: Report 2 scored only 1 point, because GPT-4 identified only the library calls without explaining the main content of the smart contract. More details can be found in our experimental data: https://github.com/Mirror-Tang/Evaluation-of-ChatGPT-s-Smart-Contract-Auditing-Capabilities-Based-on-Chain-of-Thought/tree/master. This situation also shows that GPT-4's ability to recognize smart contract detection requirements is inconsistent; in practical situations, GPT-4 could be further guided to provide explanations for the main contract.

Table 4 displays GPT-4's vulnerability detection results for the smart contracts. We tallied the number of vulnerabilities in each set of audit reports to obtain GPT-4's vulnerability detection outcomes. The table reveals that GPT-4's effectiveness in detecting vulnerabilities in smart contracts is not satisfactory. For the eight sets of smart contracts, three sets had no vulnerabilities detected, and among the remaining five sets, the highest Accuracy was 33%, with an average Accuracy of only 12.8%. This further illustrates that GPT-4's ability to recognize vulnerabilities in smart contracts is still lacking.
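The reported 12.8% average can be reproduced from the per-report counts in Table 4. The snippet below is our own check (the variable names are illustrative); the small difference from 12.8% comes from rounding the per-report percentages before averaging.

```python
# Per-report vulnerability counts and GPT-4 true positives, taken from Table 4
num_vulnerabilities = [8, 8, 4, 13, 7, 3, 4, 13]
true_positives      = [2, 0, 0,  2, 1, 1, 0,  2]

# Accuracy per report = TP / number of vulnerabilities in that report
per_report_accuracy = [tp / n for tp, n in zip(true_positives, num_vulnerabilities)]
average_accuracy = sum(per_report_accuracy) / len(per_report_accuracy)
# average_accuracy is about 0.129, matching the reported 12.8% up to rounding
```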
Table 4: Smart contract vulnerability detection results

Metrics                     Report 1   Report 2   Report 3   Report 4   Report 5   Report 6   Report 7   Report 8
Number of Vulnerabilities   8          8          4          13         7          3          4          13
TP                          2          0          0          2          1          1          0          2
Accuracy                    25%        0          0          15.4%      14.3%      33%        0          15.4%

4.3 PoC Quality Assessment

We further evaluated GPT-4's ability to generate PoCs. The prompts used in these experiments were open-ended. Of the two types of prompts (Figures 3 and 4), one asks GPT-4 to first detect vulnerabilities and then write PoCs, and the other provides hints about existing vulnerabilities in the contract. We selected 10 smart contract code snippets and asked GPT-4 to analyze them and write PoCs.

Writing PoCs after GPT-4 identifies vulnerabilities:

Table 5: GPT-4's results for detecting vulnerabilities in 10 contract code snippets

Vulnerability Detection Accuracy Rate   30%
Score                                   An incorrect identification is marked as unusable and scores 0 points; correctly identifying a single item scores 5 points.

According to Table 5, when GPT-4 fails to correctly identify a vulnerability, the generated PoC is unusable. This indicates a deficiency in GPT-4's ability to detect vulnerabilities in smart contracts. The figure and code below represent the PoC with the highest usability score from the table above. We provided a smart contract with a reentrancy vulnerability, where the issue lies in the balance deduction occurring after the transfer succeeds. In GPT-4's response, the vulnerability was identified and the logic of the analysis was correct. However, in the provided PoC, the critical attack call was not placed in the receive function, making the attack unsuccessful. Additional experimental data from this study can be viewed at: https://github.com/Mirror-Tang/Evaluation-of-ChatGPT-s-Smart-Contract-Auditing-Capabilities-Based-on-Chain-of-Thought/tree/master

Figure 8: Prompt demo
pragma solidity ^0.8.0;

/// @dev Contract managing the storage and redemption of ETH.
contract Vault {
    mapping(address => uint256) public balances;

    /// @dev Stores the ETH of the sender in the contract.
    function store() external payable {
        balances[msg.sender] += msg.value;
    }

    /// @dev Redeems the ETH of the sender in the contract.
    function redeem() external {
        (bool success,) = msg.sender.call{value: balances[msg.sender]}("");
        require(success, "Transfer failed");
        balances[msg.sender] = 0;
        // The deduction of the balance occurs after the transfer and there is
        // no reentrancy protection, hence there is a reentrancy vulnerability.
    }
}

In the code we provided above, the redeem function implements the withdrawal functionality, using a call to make the transfer. This method of transfer forwards all remaining gas for the current execution, and the balance is cleared after the transfer is completed. If an attacking contract re-enters this function between the transfer and the balance clearing, it could make multiple withdrawals, causing a reentrancy vulnerability.

Figure 9: GPT-4's Response (Part 1)

In GPT-4's response (Figure 9), it correctly identified the potential vulnerability in the contract, specifically the possibility of a reentrancy attack in the redeem() function. An attacker can exploit this by nesting calls to other contracts within the contract call, allowing them to re-enter the contract and execute its code multiple times.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Attack {
    Vault public vault;

    constructor(address _vaultAddress) {
        vault = Vault(_vaultAddress);
    }

    // Fallback function to receive ETH
    receive() external payable {}

    // Perform the attack
    function performAttack() external {
        // Deposit some ETH to trigger the vulnerable function
        vault.store{value: 1 ether}();

        // Call the vulnerable function to initiate the reentrancy attack
        vault.redeem();
    }

    // Function to be called by the vulnerable contract (Vault)
    function reentrantFunction() external {
        // Reenter the vulnerable function before the previous call is completed
        vault.redeem();
        // This function cannot execute a reentry attack, as the `vault.redeem()`
        // function is not placed inside the `receive` function.
    }

    // Function to withdraw the stolen ETH
    function withdrawStolenETH() external {
        // Transfer the stolen ETH to the attacker's address
        payable(msg.sender).transfer(address(this).balance);
    }
}

According to the code provided, GPT-4's PoC was unable to execute a reentrancy attack because it did not place the vault.redeem() function inside the receive function. A contract will automatically execute the receive function or fallback function upon receiving a transfer (if not through a payable function). By adding vault.redeem() there, with an added balance restriction, the attacking contract can automatically trigger another transfer upon receiving the first one and before the balance is cleared, leading to a reentrancy attack.

Figure 10: GPT-4's Response (Part 2)

Combining the explanation given by GPT-4: the reentrantFunction() function in the attacking contract implements reentry by calling the target contract's redeem() function again each time it is entered. However, in reality, this function cannot achieve a reentry attack.
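To make the failure mode concrete, the control flow can be modeled outside the EVM. The following is a minimal Python simulation (illustrative only; the Vault and Attacker classes are hypothetical stand-ins, not the paper's contracts): the vault pays out via a callback before zeroing the balance, and an attacker whose receive hook re-enters redeem — with the balance restriction mentioned above — withdraws more than its deposit:

```python
class Vault:
    def __init__(self, funds):
        self.eth = funds                 # ETH held for other depositors
        self.balances = {}

    def store(self, who, amount):
        self.eth += amount
        self.balances[who] = self.balances.get(who, 0) + amount

    def redeem(self, who):
        amount = self.balances.get(who, 0)
        if amount and self.eth >= amount:
            self.eth -= amount
            who.receive(amount)          # transfer first ...
            self.balances[who] = 0       # ... balance cleared only afterwards:
                                         # this is the reentrancy window

class Attacker:
    def __init__(self, vault):
        self.vault, self.stolen = vault, 0

    def attack(self):
        self.vault.store(self, 1)        # deposit 1 ETH
        self.vault.redeem(self)          # trigger the vulnerable function

    def receive(self, amount):           # the hook GPT-4's PoC failed to use
        self.stolen += amount
        if self.vault.eth >= 1:          # balance restriction stops the recursion
            self.vault.redeem(self)      # re-enter before the balance is cleared

vault = Vault(funds=3)                   # 3 ETH belonging to other users
attacker = Attacker(vault)
attacker.attack()
print(attacker.stolen)                   # 4: far more than the 1 ETH deposited
```

Because the balance is still nonzero during each nested call, every re-entry pays out again until the vault is empty, which is exactly the behavior a correctly written PoC would demonstrate on-chain.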
For this set of smart contracts, GPT-4 correctly analyzed the potential vulnerability within the contract but failed to execute a reentry attack, hence the usability score for this PoC is 5 points.

Providing vulnerability hints before asking to write a PoC, the table below shows the evaluation results of PoCs written by GPT-4:

Table 6: Usability scores for PoCs

PoCs   PoC1  PoC2  PoC3  PoC4  PoC5  PoC6  PoC7  PoC8  PoC9  PoC10
Score  10    10    10    10    0     10    5     10    8     7

Table 6 shows the results of PoCs written by GPT-4 when vulnerability hints are provided. It is evident that most PoCs written by GPT-4 have high usability scores, with all but the fifth PoC scoring at least 5 points. Among the 10 contracts, 6 PoCs scored full marks for usability, indicating that GPT-4 is capable of writing PoCs. However, its performance is inconsistent.

5 Conclusion

In this paper, we assessed the capabilities of GPT-4 (updated with knowledge as of April 2023) in auditing smart contracts. Our evaluation encompassed smart contract vulnerability detection, smart contract code parsing capabilities, and PoC writing abilities. In the experiments on smart contract vulnerability detection using the Solidifi-benchmark vulnerability set, GPT-4's results indicate high precision in detecting 7 types of vulnerabilities, all above 80%, but with low recall rates, the lowest being around 11%. This suggests that GPT-4 may miss some vulnerabilities during detection. The F1-scores were inconsistent, ranging from 100% to as low as 19.8%. Moreover, the vulnerability detection results for the 8 sets of smart contracts, compared with audit report outcomes, also indicate that GPT-4's vulnerability detection capabilities are lacking, with the highest accuracy being only 33%. In the PoC writing experiments, GPT-4 correctly identified only one vulnerability out of 10 smart contract sets, indicating a deficiency in GPT-4's ability to detect smart contract vulnerabilities.
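As a quick sanity check on Table 6, the figures quoted in the text (six full-mark PoCs; all but the fifth scoring at least 5 points) can be recomputed from the listed scores. This is a small illustrative script, not part of the original study:

```python
scores = [10, 10, 10, 10, 0, 10, 5, 10, 8, 7]   # Table 6: PoC1 .. PoC10

full_marks = sum(1 for s in scores if s == 10)  # PoCs with a perfect score
at_least_5 = sum(1 for s in scores if s >= 5)   # PoCs scoring 5 or more
mean_score = sum(scores) / len(scores)          # average usability

print(full_marks)   # 6
print(at_least_5)   # 9 (all but the fifth PoC)
print(mean_score)   # 8.0
```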
However, GPT-4 performed well in understanding smart contracts, accurately parsing some background and function relationships of smart contracts in 8 sets of experiments, though there were instances of instability and issues in analyzing the main functionalities of smart contracts.
In the experiments where GPT-4 wrote PoCs, it performed noticeably better when given vulnerability hints. These experimental results suggest that GPT-4 possesses certain capabilities in parsing smart contract code and writing PoCs, but still faces challenges in vulnerability detection. GPT-4 can serve as an auxiliary tool in smart contract auditing, but it does not imply that it can be helpful in all aspects related to smart contract auditing. In summary, GPT-4 can be a useful tool in assisting with smart contract auditing, especially in code parsing and providing vulnerability hints. However, given its limitations in vulnerability detection, it cannot fully replace professional auditing tools and experienced auditors at this time. When using GPT-4, it should be combined with other auditing methods and tools to enhance the overall accuracy and efficiency of the audit.

Author biographies

• Yuying Du received her Bachelor of Engineering degree in Software Engineering in 2020 and her Master of Electronic Information in 2023. She is currently a blockchain researcher at Salus Security. Her research interests include smart contract vulnerability detection based on deep learning, security research on Layer2 scaling schemes, and EIP security research. She recently published an eLetter on blockchain research articles in the journal Science.

• Xueyan Tang received a Bachelor's degree in Engineering from Beijing Jiaotong University in 2019, and a Master's degree in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2021. He is currently pursuing a Ph.D. in Computer Education at the University of Buenos Aires and conducting research at UC Berkeley. From 2019 to 2021, he served as the Head of Innovation Laboratory at Babel Finance, and in 2022, he co-founded GeekCartel Fund and Salus Security. He authored "Research on Big Data and Network Security", published by Northeast Forestry University Press in 2019, and holds 18 patents.
His communication comments have been selected multiple times by the editors of the journal Science.

Competing interests

The authors declare none.

References

Butt, M. (2023). Web3 industry faces $1.7 billion in losses, Salus reveals trends and vulnerabilities. https://blockchainreporter.net/web3-industry-faces-1-7-billion-in-losses-salus-reveals-trends-and-vulnerabilities/. Accessed: 2024-02-18.

Chang, Y., Wang, X., Wang, J., Wu, Y., Zhu, K., Chen, H., Yang, L., Yi, X., Wang, C., Wang, Y., et al. (2023). A survey on evaluation of large language models. arXiv preprint arXiv:2307.03109.

Chen, C., Su, J., Chen, J., Wang, Y., Bi, T., Wang, Y., Lin, X., Chen, T., and Zheng, Z. (2023). When ChatGPT meets smart contract vulnerability detection: How far are we? arXiv preprint arXiv:2309.05520.

David, I., Zhou, L., Qin, K., Song, D., Cavallaro, L., and Gervais, A. (2023). Do you still need a manual smart contract audit? arXiv preprint arXiv:2306.12338.

De Andrés, J. and Lorca, P. (2021). On the impact of smart contracts on auditing. International Journal of Digital Accounting Research, 21.

Defihacklabs (2023a). Let's make web3 more secure! https://defihacklabs.substack.com/. Accessed: 2024-02-18.

Defihacklabs (2023b). Solidity security - lesson 1: Smart contract audit methodology and tips. https://defihacklabs.substack.com/p/lesson-1-smart-contract-audit-methodology. Accessed: 2024-02-18.

Egli, A. (2023). ChatGPT, GPT-4, and other large language models: The next revolution for clinical microbiology? Clinical Infectious Diseases, 77(9):1322–1328.

Feist, J., Grieco, G., and Groce, A. (2019). Slither: a static analysis framework for smart contracts. In 2019 IEEE/ACM 2nd International Workshop on Emerging Trends in Software Engineering for Blockchain (WETSEB), pages 8–15. IEEE.

Feng, T., Yu, X., Chai, Y., and Liu, Y. (2019). Smart contract model for complex reality transaction. International Journal of Crowd Science, 3(2):184–197.

Foundry (2023a). Foundry book - tests.
https://book.getfoundry.sh/forge/tests. Accessed: 2024-02-18.

Foundry (2023b). Foundry is a blazing fast, portable and modular toolkit for Ethereum application development written in Rust. https://github.com/foundry-rs/foundry. Accessed: 2024-02-18.

Ghaleb, A. and Pattabiraman, K. (2020). How effective are smart contract analysis tools? Evaluating smart contract static analysis tools using bug injection. In Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis.

hardhat (2023). Ethereum development environment for professionals. https://hardhat.org/. Accessed: 2024-02-18.

He, D., Deng, Z., Zhang, Y., Chan, S., Cheng, Y., and Guizani, N. (2020). Smart contract vulnerability analysis and security audit. IEEE Network, 34(5):276–282.

Hicken, A. (2023). False positives in static code analysis. https://www.parasoft.com/blog/false-positives-in-static-code-analysis/. Accessed: 2024-02-18.

Hu, S., Huang, T., İlhan, F., Tekin, S. F., and Liu, L. (2023). Large language model-powered smart contract vulnerability detection: New perspectives.

Immunefi (2021). How to PoC your bug leads. https://medium.com/immunefi/how-to-PoC-your-bug-leads-5ec76abdc1d8. Accessed: 2024-02-18.

Kim, S. and Ryu, S. (2020). Analysis of blockchain smart contracts: Techniques and insights. In 2020 IEEE Secure Development (SecDev), pages 65–73.
Kushwaha, S. S., Joshi, S., Singh, D., Kaur, M., and Lee, H.-N. (2022). Ethereum smart contract analysis tools: A systematic review. IEEE Access, 10:57037–57062.

Manticore (2023). Project is in maintenance mode. https://github.com/trailofbits/manticore. Accessed: 2024-02-18.

Ruskin, L. (1980). Mythril #8. Mythril, 2(4):1.

sabocrypto (2022). Top 10 crypto hacks of 2022. https://cn.cointelegraph.com/news/10-of-the-biggest-hacks-in-2022-crypto. Accessed: 2024-02-18.

Sayeed, S., Marco-Gisbert, H., and Caira, T. (2020). Smart contract: Attacks and protections. IEEE Access, 8:24416–24427.

Securify (2023). Securify v2.0. https://github.com/eth-sri/securify2. Accessed: 2024-02-18.

Sun, Y., Wu, D., Xue, Y., Liu, H., Wang, H., Xu, Z., Xie, X., and Liu, Y. (2023). When GPT meets program analysis: Towards intelligent detection of smart contract logic vulnerabilities in GPTScan. arXiv preprint arXiv:2308.03314.

TEAM, C. (2023). Vulnerability in Curve Finance Vyper code leads to multi-million dollar hack affecting several liquidity pools. https://www.chainalysis.com/blog/curve-finance-liquidity-pool-hack/. Accessed: 2024-02-18.

team, O. (2023). OpenAI. https://openai.com/. Accessed: 2024-02-18.

Tikhomirov, S., Voskresenskaya, E., Ivanitskiy, I., Takhaviev, R., Marchenko, E., and Alexandrov, Y. (2018). SmartCheck: Static analysis of Ethereum smart contracts. In Proceedings of the 1st International Workshop on Emerging Trends in Software Engineering for Blockchain, pages 9–16.

Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., Zhou, D., et al. (2022). Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.

Wood, G. et al. (2014). Ethereum: A secure decentralised generalised transaction ledger. Ethereum Project Yellow Paper, 151(2014):1–32.

Wu, T., He, S., Liu, J., Sun, S., Liu, K., Han, Q.-L., and Tang, Y. (2023a). A brief overview of ChatGPT: The history, status quo and potential future development.
IEEE/CAA Journal of Automatica Sinica, 10(5):1122–1136.

Wu, T., He, S., Liu, J., Sun, S., Liu, K., Han, Q.-L., and Tang, Y. (2023b). A brief overview of ChatGPT: The history, status quo and potential future development. IEEE/CAA Journal of Automatica Sinica, 10(5):1122–1136.

You, W., Zong, P., Chen, K., Wang, X., Liao, X., Bian, P., and Liang, B. (2017). SemFuzz: Semantics-based automatic generation of proof-of-concept exploits. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS '17, pages 2139–2154, New York, NY, USA. Association for Computing Machinery.

Yu, X. L. (2023). Oyente - an analysis tool for smart contracts. https://github.com/enzymefinance/oyente. Accessed: 2024-02-18.

Zhang, Z., Zhang, B., Xu, W., and Lin, Z. (2023). Demystifying exploitable bugs in smart contracts. ICSE.
arXiv:2402.12222

CovRL: Fuzzing JavaScript Engines with Coverage-Guided Reinforcement Learning for LLM-based Mutation

Jueon Eom (Yonsei University), jueoneom@yonsei.ac.kr
Seyeon Jeong (Suresofttech Inc.), best6653@gmail.com
Taekyoung Kwon (Yonsei University), taekyoung@yonsei.ac.kr

ABSTRACT

Fuzzing is an effective bug-finding technique but it struggles with complex systems like JavaScript engines that demand precise grammatical input. Recently, researchers have adopted language models for context-aware mutation in fuzzing to address this problem. However, existing techniques are limited in utilizing coverage guidance for fuzzing, which is rather performed in a black-box manner.

This paper presents a novel technique called CovRL (Coverage-guided Reinforcement Learning) that combines Large Language Models (LLMs) with reinforcement learning from coverage feedback. Our fuzzer, CovRL-Fuzz, integrates coverage feedback directly into the LLM by leveraging the Term Frequency-Inverse Document Frequency (TF-IDF) method to construct a weighted coverage map. This map is key in calculating the fuzzing reward, which is then applied to the LLM-based mutator through reinforcement learning. CovRL-Fuzz, through this approach, enables the generation of test cases that are more likely to discover new coverage areas, thus improving vulnerability detection while minimizing syntax and semantic errors, all without needing extra post-processing. Our evaluation results indicate that CovRL-Fuzz outperforms the state-of-the-art fuzzers in terms of code coverage and bug-finding capabilities: CovRL-Fuzz identified 48 real-world security-related bugs in the latest JavaScript engines, including 39 previously unknown vulnerabilities and 11 CVEs.

Figure 1: Overview: CovRL is a pioneering approach in integrating an LLM-based mutator into coverage-guided fuzzing.

• We introduce CovRL, a novel technique that combines LLMs with reinforcement learning from coverage feedback. This is a unique approach that directly integrates coverage feedback into the LLM using TF-IDF for advanced coverage-guided fuzzing.
• We implement CovRL-Fuzz, a coverage-guided fuzzer employing the CovRL technique, focused on JS engine fuzzing. Our experiments show that CovRL-Fuzz outperforms existing fuzzers in terms of code coverage and bug-finding capabilities. This advancement underscores CovRL-Fuzz's efficiency in navigating the complexities of JS engine fuzzing.
• CovRL-Fuzz successfully identified 48 real-world security-related bugs, including 39 previously unknown bugs (11 CVEs) in the latest JS engines.
• To foster future research, we will open-source our work at publication time.

1 INTRODUCTION

JavaScript (JS) engines are complex software components for parsing, interpreting, compiling, and executing JavaScript code in modern web browsers. These engines are essential for accessing today's interactive web and embedded applications. According to a recent survey, as of January 2024, JavaScript is employed as a client-side programming language by 98.9% of web browsers [52]. Given their extensive use and Turing-complete nature, securing JS engines is a critical requirement. For instance, vulnerabilities in JS engines can lead to various attack patterns, encompassing threats such as information disclosure and the potential for bypassing web browser security measures [18, 35]. Considering the high stakes, the need for continuous and automated testing, such as fuzzing, is crucial for JS engines, despite challenges from their strict input grammar requirements. Fuzzing involves providing invalid, unexpected, or random inputs, e.g., by mutation, to a program to detect bugs. Coverage-guided fuzzing, e.g., AFL [36], stands out as an effective method by using code coverage to guide the fuzzing process, ensuring a more thorough examination of the code paths and thereby increasing the chances of uncovering hidden bugs [36].

Previous research on fuzzing JS engines can broadly be divided into two main approaches: grammar-level and token-level fuzzing to deal with strict grammar. Grammar-level fuzzing techniques focus on producing inputs that are grammatically accurate [2, 19, 20, 40, 41, 51, 53, 54], while token-level fuzzing offers a more flexible method. Token-level fuzzing transforms inputs into a sequence of tokens and then substitutes certain tokens without adhering strictly to grammar rules [44]. Coverage-guided fuzzing is also widely applied in fuzzing JS engines, encompassing both grammar-level and token-level fuzzing methods [20, 40, 44, 54].

However, due to the continuous evolution of the JavaScript language, the grammar in JS engines is also being consistently updated to match these changes. Consequently, grammar-level fuzzing faces the challenge of needing to add new grammar rules frequently. Token-level fuzzing offers more flexibility compared to grammar-level fuzzing. Nevertheless, as mutations evolve from the initial seed, maintaining syntactical correctness becomes challenging, often leading to syntax errors. This limitation hinders the ability to uncover deeper bugs without inducing errors. Therefore, fuzzing JS engines requires mutating highly structured inputs, a task that traditional heuristic mutations alone find difficult in producing well-formed inputs. To overcome this, recent advancements have introduced fuzzing techniques that utilize Code-LLMs, capable of generating well-formed inputs (i.e., those that are syntactically informed) for compilers, deep learning libraries, and JS engines [11, 12, 58, 60].

Among these developments, TitanFuzz [11] and FUZZ4ALL [58] are notable for their use of Code-LLMs in mutation processes. Pretrained Code-LLMs, already trained on extensive datasets across various programming languages, can be directly employed for LLM-based mutation without the need for further finetuning. Moreover, these models inherently understand the context of the language. This means they are capable of comprehending the grammar of the code and generating inputs that reflect both grammatical accuracy and contextual relevance. Their effectiveness is evident in their ability to generate seeds that are abundant in edge cases [12, 60].

While pretrained LLM-based mutators have proven to be effective for fuzzing [11, 58], it is important to note that all current LLM-based fuzzing approaches are categorized as black-box fuzzing and do not incorporate internal program information such as code coverage. In TitanFuzz [11], although a fitness function is used, it does not involve coverage-related information like in coverage-guided fuzzing. Instead, it utilizes static analysis information of the generated input, such as the number of unique function calls, depth, and iteration counts, as the fitness function's input. Therefore, TitanFuzz is not based on coverage-guided fuzzing, as it doesn't directly utilize execution code coverage information of the fuzzing target.

Differing from black-box fuzzing, coverage-guided fuzzing leverages internal program data. This method of fuzzing uses an evolutionary strategy to create interesting seeds that aim to enhance the target program's coverage. It considers the impact of mutated inputs on the program's coverage. According to Miller's research, a 1% increase in code coverage correlates to a 0.92% higher probability of discovering bugs [6]. Applying coverage data in coverage-guided fuzzing enhances coverage more effectively than black-box fuzzing, increasing the likelihood of discovering more bugs. However, as we describe below, this is surprisingly challenging.

2 BACKGROUND AND RELATED WORK
Problem.WhenAFL’sheuristicmutatorisreplacedwithapre- 2.1 JSEngineFuzzing trained LLM-based mutator in coverage-guided fuzzing, we ob- Fuzzingisapowerfulautomatedbug-findingtechniquethatgen- serve a reduced error rate. However, this improvement did not eratesandexecutesnumerousinputstoidentifyvulnerabilities, translateintoincreasedcodecoverage.Surprisingly,someperfor- crashes,orunexpectedbehaviorsinsoftware[38].Bothacademia mancepatternsfellbelowthatofrandomfuzzing.Thispatternwas andindustryhaverecognizeditseffectivenessinuncoveringsoft- confirmedinourexperiments,wherewereplacedAFL’smutator warebugs.However,fuzzingfaceschallengeswithJSenginesthat withanLLM-basedmutator.SomeoftheLLM-basedfuzzersob- requirestrictgrammarforinput.Whentheinputisnotsyntacti- tained12-16%lowercoverageinV8and4-13%lowercoveragein callycorrect,theJSenginereturnsasyntaxerror.Ontheother JerryScriptcomparedtothebaseline(randommutation).Formore hand,semanticinconsistencies(e.g.,errorswithreference,type, detailedresults,pleaserefertoTable6inSection5.3.Ourresults range,orURI)leadtosemanticerrors[19].Inbothcases,theJS alsoalignwithfindingsfromTitanFuzz’sexperiments.Wespec- engine’scorelogic,whichmaycontainhiddenvulnerabilities,isn’t ulatethatthisphenomenonarisesbecauseLLM-basedmutators executed. makeconstrainedpredictions.WhileAFL’sinterestingseedsare Toaddressthesechallenges,researchershaveproposedgrammar- basedonincreasedcoverage,randommutatorsindiscriminately levelandtoken-levelfuzzingapproaches.Thesestrategiesemploy mutatetokensfromthedictionary,regardlessofcontext.Incontrast, heuristicmethodstotackletheissue.Thegrammar-leveltechnique LLM-basedmutatorsfocusoncontextuallyrelevanttokenpredic- transformstheseedintoanIntermediateRepresentation(IR)to tions,whichreducesdiversity(duetoanoveremphasisoncontext, producegrammaticallyaccurateinputs[2,19,20,40,41,51,53,54]. 
theyoftenpredictcommonsentences,diminishingdiversity).Thus, Whilethisapproachreliablyproducessyntacticallycorrectinputs, incoverage-guidedfuzzing,LLM-basedmutators’context-aware itdoesn’tconsistentlyaccountforsemanticconstraints.Moreover, mutationsreduceerrorsbutalsolimitdiversity,renderingthem itoftendemandssubstantialmanualefforttocraftthenecessary lesseffectivethanrandomfuzzing. grammarrules.Thetoken-levelfuzzingapproach[44]offersamore OurApproach.Toaddresstheaforementionedproblem,wepro- flexiblemethod,freefromtheconstraintsofgrammarrules.This poseenhancingLLM-basedmutatorstoalignbetterwithcoverage- techniquetransformsinputsintoasequenceoftokensandsub- guidedfuzzing.Thisinvolvesanovelstrategyofprovidingdirect stitutescertainones.Althoughthismethodhasdemonstratedef- coveragefeedbacktotheLLM-basedmutatorformoreeffectiveJS fectivenessinbugdetection,itsstrategyofrandomlyreplacing enginefuzzingasillustratedinFigure1.Keytothisapproachisthe tokens—withoutaccountingforinter-tokenrelationships—places |
useofacoverage-basedweightmap,whereweightsareassigned asignificantdependencyonthequalityoftheinitialseed.Conse- accordingtotheinversefrequencyofeachcoverageoccurrence.By quently,theapproachoftenresultsininputsthatarenotsyntacti- leveragingTermFrequency-InverseDocumentFrequency(TF-IDF) callycorrect. forweightingthecoveragemap,thecoverage-weightedrewardis Recently,therehasbeenagrowinginterestamongresearchers directlyappliedinreinforcementlearning,enablingtheLLM-based inutilizingdeeplearning-basedLanguageModels(LM)infuzzing, mutatortogeneratetestcasesthatcanachievenewcoverage.This aimingtoovercomethelimitationsoftraditionalfuzzingmeth- methodenhancesthemodel’sabilitytodiscoverunknownvulnera- ods.EarlyendeavorsleveragedRNN-basedLanguageModelsto bilitiesandreducessyntaxandsemanticerrorswithouttheneedfor mutateportionsofinputs[10,17,26,32].Morerecently,there’s additionalpost-processing.WecallthisapproachCovRL-fuzz,and beenadiscernibletrendtowardstheadoptionofLargeLanguage unlikeotherLLM-basedfuzzingtechniques[11,12,58,60],CovRL Models(LLMs)forseedgenerationandmutation[11,12,58,60]. isthefirstmethodthatproperlyintegratesanLLM-basedmutator TitanFuzz[11]isablack-boxfuzzingtechniquetouseLLMsfor tocoverage-guidedfuzzing. mutation,demonstratingtheeffectivenessofpretrainedLLMsnot Tosumup,thispapermakesthefollowingcontributions: onlyforseedgenerationbutalsoformutation.WhileTitanFuzzCovRL:FuzzingJavaScriptEngineswithCoverage-GuidedReinforcementLearningforLLM-basedMutation didn’tuseinternaltargetinformationlikecoverageinfuzzing,it Algorithm1:Coverage-RateRewarding(CRR) didutilizestaticanalysismetricssuchasthenumberofunique Input:testcase𝑇 functioncalls,depth,anditerationcounts.Thesemetrics,gathered Output:reward𝑅 𝑐𝑜𝑣 fromstaticallyanalyzingmutatedinputs,helpedselectinteresting 𝑇𝑜𝑡𝑎𝑙 :TotalCoverageMap(Accumulated) 1 𝑐𝑜𝑣 seedsforfuzzingprocesses.Theysubstitutedportionsofinputs withamasktokenforthemutation(wereferto“MaskMutation”). 
2 FunctionGetReward(𝑇): Conversely,FUZZ4ALL[58]introducedmutationbyaddingmu- 3 ifJS_Engine(𝑇)is𝑆𝑦𝑛𝑡𝑎𝑥𝐸𝑟𝑟𝑜𝑟 then tationprompts,suchas‘Please create a mutated program’ 4 return-1.0 insteadofusingmasktokens. 5 elseifJS_Engine(𝑇)is𝑆𝑒𝑚𝑎𝑛𝑡𝑖𝑐𝐸𝑟𝑟𝑜𝑟 then Coverage-guidedFuzzing.Byleveragingcoveragefeedbacktoex- 6 return-0.5 plorediversecodepaths,coverage-guidedfuzzinghasconsistently 7 else outperformedtraditionalblack-boxfuzzinginitsabilitytodiscover 8 /*JS_Engine(𝑇)isPassed*/ softwarebugs[6].ToolsliketheAmericanFuzzyLop(AFL)have 9 𝑇𝑐𝑜𝑣=GetCov(𝐼𝑛𝑝𝑢𝑡) notablyshiftedtestingparadigmsbyfocusingonmaximizingcode 10 𝑇𝑜𝑡𝑎𝑙𝑐𝑜𝑣+=𝑇𝑐𝑜𝑣 ⊕𝑇𝑜𝑡𝑎𝑙𝑐𝑜𝑣 coverage,mutating,andgeneratinginputsequences.Suchtools 11 returnCount(𝑇𝑐𝑜𝑣)/Count(𝑇𝑜𝑡𝑎𝑙𝑐𝑜𝑣) havebeenremarkablyeffective,uncoveringaplethoraofsecurity issuesandtherebyvalidatingtheirabilitytodetectvulnerabilities 12 FunctionRewardModelingProcess(𝑇): acrossawiderangeofsoftware[36].Ithasalsobeenextensively 13 𝑅 𝑐𝑜𝑣 =GetReward(𝑇) appliedinJSenginefuzzing,suchasgrammar-levelandtoken-level 14 𝑂𝑢𝑡𝑝𝑢𝑡 ←𝑅 𝑐𝑜𝑣 fuzzing[20,40,44,54]. 15 return𝑂𝑢𝑡𝑝𝑢𝑡 ItisillustratedinFigure1.Unlikeblack-boxorwhite-boxfuzzing, coverage-guidedfuzzingutilizescodecoverageinformationofthe targetsoftwaretoexplorevariedcodepaths.Thismethodinitially requirestheinstrumentationofthesoftware,leadingtothecreation approachhasinherentchallenge,notablyitsfailuretodifferenti- ofacoveragemap—amatrixthattracksthefrequencyofspecific atebetweennewlydiscoveredandpre-existingcoverage,which codepathsbeingaccessed. meansitawardshighscoresforMANYCOVERED,includingthose Followingthissetup,thefuzzingprocedurebeginswiththese- previouslycovered.Inotherwords,evenifatestcasedoesnot lectionofaseedfromtheseedqueueforthemutation(1).This findnewcoverage,itcanstillreceiveahighscoreifitcoversa selectedseedisthenmutatedtogenerateanewtestcase(2).Sub- substantialamountofexistingcoverage.Inthiswork,wepropose sequently,theexecutorrunsthetargetsoftwarewiththistestcase, anewrewardingapproachthataddressesthisproblem. 
measuringitsassociatedcodecoverage(3).Ifcoverageisnotal- readyrecordedinthecoveragemap,itisdeemedasnewcoverage. 2.2 FinetuningLLMsforcode Identifyingsuchnewcoverageelevatesthemutatedtestcasetothe FollowingthesuccessofLLMsinNaturalLanguageProcessing statusofan‘interestingseed’(4),whichisthenqueuedbackinto (NLP) tasks [5, 8, 9], the field of programming languages is ad- theseedpool(5).Bycontinuouslyreintroducingsuchinteresting vancingwithsignificantcontributionsfromLargeLanguageMod- seedsandencouragingthediscoveryofnovelcoveragesthrough elsforcode(Code-LLMs)suchasPLBART[1],CodeT5[55,56], iterativemutations,coverage-guidedfuzzingeffectivelygenerates Codex[7]andInCoder[16].Theseadvancementsarefacilitating awidearrayofdiversetestcases,eachtargetingtheexplorationof variousdownstreamtasks,includingcodecompletion[61],pro- unexploredcodeareasofthetargetsoftware. gramsynthesis[3,24,31,47],programrepair[15,59],andmany others. RL-based Fuzzing. Unlike coverage-guided fuzzing, RL-based |
MethodsforfinetuningLLMs,includingCode-LLMsarecatego- fuzzingapproaches[4,29,30]seektoimproveperformancenot rizedintosupervisedfinetuning(SFT),instructionfinetuning[57], byutilizingcoverageforseedselection,butbyincorporatingcode andRL-basedfinetuning[25,39,43].Whilepromptengineering coveragefeedbackintodeeplearningmodels,suchasdeepneural controlstheoutputofLLMsatinferencetimethroughinputmanip- networks (DNNs) and recurrent neural networks (RNNs). They ulationalone,SFT,instructionfinetuning,andRL-basedfinetuning providefeedbackusingeachcodecoverageasareward,andfor aimtosteerthemodelduringtrainingtimebylearningfromspe- thispurpose,theyprocessthecodecoverageintoaquantifiedre- cific datasets tailored to particular tasks. Particularly, RL-based ward,whichistheratioofthecurrentcoveragerelativetothetotal finetuninghasbeenproveneffectiveinguidingLLMsusingfeed- cumulativecoverage.WerefertothisasCoverage-RateRewarding backtooptimizefactualconsistencyandreducethetoxicgenera- (CRR). tion[25,39,43].Recently,therehasalsobeenproposedforapplying Thedetailsofthecoverage-raterewardingprocedureareshown RL-basedfinetuningtoCode-LLMsaimedatgeneratingunittests inAlgorithm1.Inthecaseofasyntaxerrororsemanticerrorin thatarenotonlygrammaticallycorrectbutalsocapableofsolving theJSengine,afixedpenaltyisgiven(Lines3-6).Thesepenalty complexcodingtasks[24,31,47]. approachisalsocommonlyseeninotherRLmethodstargeting Code-LLM[24,31,47].WhenpassedthroughtheJSengine,aCRR RL-based finetuning. RL-based finetuning consists of the fol- iscalculated(Line7-11).TheCRRiscalculatedasaratioofthe lowingphases:rewardmodeling,andreinforcementlearning.In currentcoveragerelativetothetotalcumulativecoverage.This rewardmodeling,anLLM-basedrewarderistrainedtoevaluateJueonEom,SeyeonJeong,andTaekyoungKwon Figure2:WorkflowofCovRL-Fuzz:Thegray-shadedareaillustratestheoperationofCovRL. 
thesuitabilityofoutputresultswheninputisprovidedtotheLLM frommakingsyntaxorsemanticerrorsandinducespredictionto created in the previous phase. There are various approaches to findnewcoverage(3). feedbackdependingonhowtherewarderistrained:utilizingan Notethatwedonotperformanyheuristicpost-processingonthe oracle[24,31,47],usingdeeplearningmodels[25],andusinghu- LLM-basedmutator,saveforCovRL-basedfinetuning.Wedemon- manfeedback[39].Wealsoadoptthestrategyofemployingthe stratedaminimalerrorrateinusingsolelyCovRLthatiscompara- JSengineasafeedbackoracle.Inreinforcementlearning,train- bletootherlatestJSenginefuzzingtechniquesonSection5.2. ingcommonlyemploysKullback-Leibler(KL)divergence-based optimization.Thismethodisdesignedtooptimizethebalancebe- 3.1 Phase1.MaskMutation tween maximizing rewards and minimizing deviation from the WeusethemaskmutationasabasictypeofLLM-basedmutation initialtrainingdistribution. thatcanbedonewithoutanyfurtherprompts. Masking.Tomutatetheselectedseed,CovRL-Fuzzperformsa maskingstrategyformaskmutation(1 inFigure2).Giventhein- 3 DESIGN putsequence𝑊 ={𝑤1,𝑤2,..,𝑤 𝑛},CovRL-Fuzzusesthreemasking techniques:insert,overwrite,andsplice.Thestrategyresultsinthe Inthissection,wedescribethedesignofCovRL-Fuzz.Thekey masksequence𝑊𝑀𝐴𝑆𝐾 = {[𝑀𝐴𝑆𝐾],𝑤3,..,𝑤 𝑘} andthemasked accomplishmentofCovRL-Fuzzis:Weensureeffectivefuzzingby sequence𝑊\𝑀𝐴𝑆𝐾 ={𝑤1,𝑤2,[𝑀𝐴𝑆𝐾],..,𝑤 𝑛}.Thedetailedopera- limitingirrelevantmutationsthroughcontext-awaremutationsuti- tionsaredescribedasfollows: lizinganLLM-basedmutatorandguidingthemutatorwithCovRL toobtainawiderangeofcoverage. Insert Randomlyselectpositionsandinsert[MASK]tokens intotheinputs. Figure2isaworkflowofCovRL-Fuzz,whichconsistsofthree phases.First,CovRL-Fuzzselectsaseedfromtheseedqueue.The Overwrite Randomlyselectpositionsandreplaceexistingtokens seedundergoesamaskmutationprocesswherespecifictokens withthe[MASK]token. 
are masked and subsequently predicted (1). We use mask tokens and predict their replacements using a masked language model task [13, 42]. After mutation, the test case is then executed by the target JS engine. If the test case discovers new coverage not seen before, it is considered an interesting seed and is placed back in the seed queue for further mutation. At the same time, CovRL-Fuzz stores the coverage map measured by the test case, along with validity information, that is, whether the test case led to syntax errors, semantic errors, or passed successfully. Our rewarding approach uses the validity information to impose penalties on inputs that result in syntax or semantic errors. Following this, it produces a rewarding signal by multiplying the current coverage map with a coverage-based weight map (2). After completing a mutation cycle, we proceed…

Splice: Statements within a seed are randomly divided into segments. A portion of these segments is replaced with a segment from another seed with [MASK], formatted as [MASK] statement [MASK].

Mutation. After generating a masked sequence W^\MASK via masking, the input is mutated by inferring the masked positions via the LLM-based mutator. The mutation design of CovRL-Fuzz is based on a span-based masked language model (MLM) that can predict variable-length masks [16, 42]. Thus, the MLM loss we utilize for mutation can be represented as follows:
L_MLM(θ) = Σ_{i=1}^{k} −log P_θ(w_i^MASK | W^\MASK, w_{<i}^MASK)    (1)

θ represents the model's trainable parameters that are optimized during the training process, and k is the number of tokens in W^MASK. W^\MASK denotes the masked input tokens where certain tokens are replaced by mask tokens. W^MASK refers to the original tokens that have been substituted with the mask tokens in the input sequence.

CovRL: Fuzzing JavaScript Engines with Coverage-Guided Reinforcement Learning for LLM-based Mutation

…to finetune the LLM-based mutator using CovRL by utilizing the gathered interesting seeds and rewarding signals. We define the notion of one cycle as a predetermined number of mutations. CovRL employs the PPO [45] algorithm, a method that seeks to improve the current model while adhering closely to the previous model's framework. The signal during training prevents the LLM…

3.2 Phase 2. Coverage-Weighted Rewarding

CovRL designs a rewarding signal called Coverage-Weighted Rewarding (CWR) for guiding the mutator. The signal is weighted using TF-IDF [48] to prioritize the discovery of new coverage (2 in Figure 2). TF-IDF, known as the statistical term-specificity method, computes the importance of a word to a document in a corpus, accounting for the fact that specific words appear more frequently in general. It is often used as a weight vector in information retrieval and text mining searches. We utilize it in constructing the coverage-based weight map.

The variable M denotes the overall size of the coverage map, which we utilized as a scale factor to adjust the weight value. The reward is acquired by taking the weighted sum of TF^cov and IDF^cov to create the weighted coverage map, which is then weighted to obtain:

R_TFIDF = log( Σ_{i=1}^{M} tf_{i,t} · idf_{i,t−1} )    (5)

where t represents the current cycle, tf_{i,t} refers to an element in TF_t^cov, and idf_{i,t−1} refers to an element in IDF_{t−1}^cov at the previous timestep, before updating the weights. Afterward, we proceed to normalize the findings as:
R_cov = { σ(R_TFIDF)   if R_TFIDF > 0
        { 0.5          otherwise                        (6)

where σ is a sigmoid function used to map R_TFIDF to a value between 0 and 1. If there is no new coverage and no error, we set R_cov to 0.5 when it is zero or less to give the minimum reward. R_cov is calculated only if the test case is free from any syntax or semantic problems. Our rewarding scheme incentivizes the LLM-based mutator to explore a wider range of coverage by providing high payouts for test cases that achieve uncommon levels of coverage.

Rewarding method. Enhancing the rewarding methods used in previous RL-based finetuning of Code-LLMs [24, 31, 47], we extend the idea of using software output to apply a rewarding signal. Notably, errors in the JS engine can be broadly grouped into syntax errors and semantic errors, which include reference, type, range, and URI errors. Given that W* is the concatenation of the masked sequence W^\MASK and the mask sequence W^MASK, the following returns can be deduced based on input to the target:

r(W*) = { −1.0    if W* is a syntax error
        { −0.5    if W* is a semantic error
        { +R_cov  if W* is passed                       (2)

In order to assist the LLM-based mutator in discovering new coverage, we provide an additional rewarding signal alongside Eq. 2. Our approach focuses on the frequency of each coverage by assigning specific weights instead of using the CRR commonly used in traditional RL-based fuzzing. The rewarding procedure involves adjusting the coverage map by utilizing the TF-IDF weight map, calculating the weighted sum for each coverage entry, and normalizing it to get scores.

Update Weight Map with Momentum. Following each cycle, CovRL updates the IDF weight map. To mitigate dramatic changes in the reward distribution, we use momentum at a rate of α to incorporate the prior weight while recalculating the map. The updated weight map is as follows:

IDF_t^cov = α · IDF_{t−1}^cov + (1 − α) · IDF_t^cov    (7)

where IDF_t^cov (on the left-hand side) means the new weight map and IDF_{t−1}^cov means the previous weight map.

Coverage-Weighted Rewarding Algorithm.
Algorithm 2 describes the overall procedure of CWR. The input is the mutated test case T, and the output is the reward R_cov measured from T. Our objective is to compute the reward R_cov by assigning weights to each coverage entry using the TF-IDF approach. Thus, we adhere to the subsequent course of action:

First, to evaluate the reward for the mutated test case T, we assess for syntax errors, semantic errors, and whether the test case was successfully executed in the JS engine. If an error occurs, we impose a predetermined penalty, as shown in Eq. 2 (Lines 5-8). Imposing the predetermined penalty allows the LLM-based mutator to focus on minimizing errors, which is consistent with the strategy used in previous studies that apply RL to Code-LLMs [24, 31, 47]. When T is passed through the JS engine, we measure the coverage T_cov of T and calculate the reward based on this coverage (Lines 9-12). The procedure for constructing CWR is based on the TF-IDF weight map (Lines 14-21). TF_t^cov is created by generating the unique coverage map of T_cov (Line 14). Additionally, it calculates IDF_t^cov using the value of T_cov (Line 15).

At first, we noted that the coverage map is similar to the Term Frequency (TF) in that it counts the frequency at which a specific coverage location is reached. However, with a JS engine, certain codes in the test case can trigger the same code coverage multiple times. A typical example is when the input includes repetitions such as 'a=1;a=1;'. This can result in duplicate triggers for the same coverage area. In such cases, the importance of repetitive coverage is reduced. This emphasizes the need to differentiate between different types of coverage rather than merely focusing on how often a location is hit. Therefore, we define the term TF^cov as a map of unique coverage:

TF^cov = unique coverage map    (3)

We define the coverage-based weight map IDF^cov using the coverage map of each seed as follows:

IDF^cov = (1/√M) · log( N / (1 + DF^cov) )    (4)
where N denotes the total number of unique coverage entries obtained. DF^cov denotes the number of seeds that have achieved the specific coverage location. The weight map IDF^cov is obtained by taking the inverse of DF^cov, resulting in greater weights for less common coverage.

Subsequently, the TF-IDF-based reward R_cov is calculated by using the weight map IDF_{t−1}^cov that was created in the previous cycle and TF_t^cov (Line 16). The purpose of applying the weight map from the previous cycle to measure the reward is to assign higher scores to the newly obtained rewards based on the coverage distribution…

Algorithm 2: Coverage-Weighted Rewarding (CWR)
Input: test case T. Output: reward R_cov.
1  TF_t^cov: Unique Coverage Map
2  IDF_{t−1}^cov: Previous Weight Map
3  IDF_t^cov: Weight Map
4  Function GetReward(T):
5    if JS_Engine(T) is SyntaxError then
6      return −1.0
7    else if JS_Engine(T) is SemanticError then
8      return −0.5
9    else
10     /* JS_Engine(T) is Passed */
11     T_cov = GetCov(T)
12     return CalcCovReward(T_cov)
13 Function CalcCovReward(T_cov):
14   TF_t^cov = GetUniqueCov(T_cov)
15   IDF_t^cov ← CalcIDF(T_cov)
16   R_cov ← CalcTFIDF(TF_t^cov, IDF_{t−1}^cov)
17   IDF_t^cov ← α · IDF_{t−1}^cov + (1 − α) · IDF_t^cov
18   if R_cov > 0 then
19     return R_cov
20   else
21     return 0.5

Algorithm 3: Fuzzing with CovRL
Input: finetuning dataset D_T.
1  R_prev: Previous LLM-based rewarder
2  R_cur: Current LLM-based rewarder
3  M_prev: Previous LLM-based mutator
4  M_cur: Current LLM-based mutator
5  Function FuzzOne(seed_queue):
6    for i = 1 to iter_cycle do
7      seed ← SelectSeed(seed_queue)
8      T ← MaskMutation(M_cur, seed)
9      if IsInteresting(T) then
10       T_interest ← T
11       seed_queue.append(T_interest)
12       R_cov = RewardingProcess(T_interest)
13       data ← (T_interest, R_cov)
14       D_T.append(data)
15   FinetuneCovRL(D_T)
16 Function FinetuneCovRL(D_T):
17   R_prev, M_prev ← R_cur, M_cur
18   R_cur ← FinetuneRewarder(R_prev, D_T)
19   M_cur ← FinetuneMutator(M_prev, R_cur, D_T)
20 Function RewardingProcess(T):
21   R_cov = GetReward(T)
22   return R_cov

For CovRL-based finetuning with PPO, we define the CovRL loss in the following manner:

L_CovRL(θ) = −E_{(x,y)∼D_t}
[ R(x,y)_t · log( π_θ^t(y|x) / π^{t−1}(y|x) ) ]    (8)

where R(x,y) represents the reward of CovRL, and D_t refers to the finetuning dataset that has been collected up to timestep t. π_θ^t(y|x) with parameters θ is the trainable RL policy for the current mutator, and π^{t−1}(y|x) represents the policy from the previous mutator.

…achieved in the previous cycle. If the reward R_cov is greater than 0, it is returned as is; otherwise, a fixed value is returned (Lines 18-21). To mitigate significant changes in the reward distribution, we stabilize the reward by using IDF_{t−1}^cov with a momentum rate of α (Line 17). We demonstrate the effect of CWR with momentum in Section 5.3.

To mitigate overoptimization and maintain the LLM-based mutator's mask prediction ability, we also use KL regularization. The reward after adding the KL regularization is

R(x,y)_t = r(W*) + log( π_θ^t(y|x) / π^{t−1}(y|x) )    (9)

3.3 Phase 3. CovRL-based Finetuning

The fuzzing environment with mask mutation can be conceptualized as a bandit environment for RL. In this environment, a masked sequence W^\MASK is provided as input (x), and the expected output is a mask sequence W^MASK (y). Inspired by previous studies [24, 31, 47], we finetune our model using the PPO algorithm [45], an actor-critic reinforcement learning method (3 in Figure 2). In our situation, it can be implemented by finetuning two LLMs in tandem: one LLM acts as a mutator (actor), while the other LLM serves as a rewarder (critic). We utilize a pretrained LLM to initialize the…

Fuzzing with CovRL Algorithm. Algorithm 3 details one cycle of the fuzzing loop with CovRL. The cycle iterates for a predetermined number of iter_cycle iterations (Lines 6-14). The LLM-based mutator uses a seed chosen from the seed_queue to produce the test case T (Lines 7-8). If T is deemed a noteworthy seed, it is added to the seed queue, and the reward for the particular T_interest is calculated and added to D_T (Lines 9-14). After completing these iter_cycle iterations, the gathered D_T is utilized as training data to call the FinetuneCovRL function, which carries out CovRL-based finetuning (Line 15). The procedure of FinetuneCovRL involves the finetuning of the LLM-based Rewarder R and the LLM-based Mutator M (Lines 17-19).
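As a concrete illustration of the rewarding pipeline just walked through (the Eq. 2 penalties, the TF-IDF weight map of Eqs. 4-6, and the momentum update of Eq. 7), a minimal Python sketch follows. This is our own reading of Algorithm 2, not the authors' implementation; the map size, verdict labels, and helper names are assumptions:

```python
import math

ALPHA = 0.6   # momentum rate; 0.6 performed best in the paper's ablation (Section 5.3)
M = 1 << 16   # coverage map size (assumed; see the map-size rule in Section 4)

def calc_idf(df_cov, num_seeds):
    # Eq. 4: IDF_i = (1/sqrt(M)) * log(N / (1 + DF_i)); rarer edges get larger weights.
    scale = 1.0 / math.sqrt(M)
    return [scale * math.log(num_seeds / (1 + df)) for df in df_cov]

def calc_tfidf_reward(tf_cov, idf_prev):
    # Eq. 5: R_TFIDF = log(sum_i tf_i,t * idf_i,t-1), using the PREVIOUS cycle's map.
    weighted = sum(tf * idf for tf, idf in zip(tf_cov, idf_prev))
    r_tfidf = math.log(weighted) if weighted > 0 else 0.0
    # Eq. 6: squash positive rewards with a sigmoid; floor at the minimum reward 0.5.
    if r_tfidf > 0:
        return 1.0 / (1.0 + math.exp(-r_tfidf))
    return 0.5

def update_idf(idf_prev, idf_new):
    # Eq. 7: momentum update of the weight map between cycles.
    return [ALPHA * p + (1 - ALPHA) * n for p, n in zip(idf_prev, idf_new)]

def get_reward(verdict, tf_cov, idf_prev):
    # Algorithm 2, Lines 5-12: fixed penalties from Eq. 2, otherwise the CWR.
    if verdict == "syntax_error":
        return -1.0
    if verdict == "semantic_error":
        return -0.5
    return calc_tfidf_reward(tf_cov, idf_prev)
```

Under this sketch, a rejected test case gets a fixed penalty, a passing test case that hits only zero-weighted (ubiquitous) edges falls back to the 0.5 floor, and a passing test case whose weighted coverage sum exceeds 1 earns a sigmoid-scaled reward above 0.5.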
…parameters of both the mutator and the rewarder. The rewarder is trained using Eq. 2. It plays a crucial role in training the mutator.

Initially, we designate the existing models as R_prev and M_prev (Line 17). Following that, R_prev is finetuned using the finetuning dataset D_T to generate a new rewarder R_cur (Line 18). At this point, the rewarder has been trained to predict the rewarding signal as described in Eq. 2. By utilizing the finetuned rewarder and D_T, we finetune the mutator M_prev to generate M_cur (Line 19). For finetuning the mutator, we apply a reward or penalty to the model using the CovRL loss from Eq. 8.

Table 1: Benchmarks of JS engines with their versions and lines of code

JS Engine              Version             # of Lines
V8                     11.4.73             1,087,873
JavaScriptCore (JSC)   2.38.1              566,262
ChakraCore (Chakra)    1.13.0.0-beta       782,996
JerryScript (Jerry)    3.0.0               122,048
QuickJS (QJS)          2021-03-27          75,257
Jsish                  3.5.0               58,143
escargot               bd95de3c, Jan 2024  311,473
Espruino               2v20                26,945

4 IMPLEMENTATION

We implemented a prototype of CovRL-Fuzz using Pytorch v1.8 and afl 2.52b [36].

Dataset. We collected data from regression test suites in several repositories including V8, JavaScriptCore, ChakraCore, JerryScript, Test262 [50], and js-vuln-db [21] as of December 2022. We then…

Table 2: Baseline fuzzers targeting JS engines. CGF indicates the use of coverage-guided fuzzing, LLM denotes the usage of LLMs, and
Mutation Level refers to the unit of mutation.

Fuzzer                CGF  LLM  Mutation Level  Post-Processing
Heuristic baselines
AFL (w/ Dict) [36]     ✓        Bit/Byte
Superion [54]          ✓        Grammar
Token-Level AFL [44]   ✓        Token
LM baselines
Montage [26]                    Grammar         ✓
COMFORT [60]                ✓   Grammar         ✓
CovRL-Fuzz             ✓    ✓   Token

…simply pre-processed the data for training data and seeds, resulting in a collection of 55,000 unique JavaScript files for our experiments.

Pre-Processing. We performed a simple pre-processing on the regression test suites of the JS engines mentioned above to remove comments, filter out grammatical errors, and simplify identifiers. We then used the processed data directly for training. The pre-processing was conducted utilizing the -m and -b options of the UglifyJS tool [37].

Training. We utilize the pretrained Code-LLM CodeT5+ (220M) [55] as both the rewarder and the mutator. For the process of CovRL-based finetuning, we trained the rewarder and mutator for 1 epoch each mutation cycle. We used a batch size of 256 and a learning rate of 1e-4. The optimization utilized the AdamW optimizer [33] together with a learning rate linear warmup technique. Related experiments can be found in Table 7. The LLM-based rewarder uses the encoder from CodeT5+ to predict the rewarding signal through a classification approach. We also employed the contrastive search technique [49], applying a penalty factor α of 0.6 and setting the top-k to 32. The analysis of the optimal epoch and α selection in CovRL can be found in Section 5.3. In addition, we align the coverage map size with AFL's recommendations by setting the scaling factor M for the map size. This ensures that the instrumentation capacity is optimized. For moderate-sized software (approx. 10K lines), we employed a map size of 2^16. For larger software exceeding 50K lines, we used a map size of 2^17, striking a balance between granularity and performance. The number of lines in each target JS engine that we used can be found in Table 1.

Benchmarks. We tested it on four JS engines, using the latest versions as of January 2023: JavaScriptCore (2.38.1), ChakraCore (1.13.0.0-beta), V8 (11.4.73), and JerryScript (3.0.0). We also conducted additional experiments on QuickJS, Jsish, escargot, and Espruino for the real-bug detection experiments. Table 1 presents the JS engine benchmarks used in our experiments. It displays both the version and the number of lines.

We built each target JS engine with AddressSanitizer (ASAN) [46] to detect bugs related to abnormal memory access, and with debug mode to find bugs related to undefined behavior.

Fuzzing Campaign. For a fair evaluation, we used the same set of 100 valid seeds. For RQ1, we operated on 3 CPU cores considering other fuzzing approaches, and for RQ2, we used a single CPU core. For RQ3, we also used 3 CPU cores and conducted experiments for a week, including four more JS engines apart from the four major ones. To consider the randomness of fuzzing, we executed each fuzzer five times and then averaged the coverage results. Also, to ensure fairness in fuzzing, the results of each experiment were measured including the finetuning time through CovRL. The average finetuning time is 10 minutes, occurring every 2.5 hours.

5 EVALUATION

To evaluate CovRL-Fuzz, we set three research questions.

RQ1: Is CovRL-Fuzz more effective and efficient than other state-of-the-art fuzzers?

RQ2: How does each component contribute to CovRL-Fuzz's effectiveness?

RQ3: Can CovRL-Fuzz find real-world bugs in JS engines?

5.1 Experimental Design

Experimental setup. Our setup included a 64-bit Ubuntu 20.04 LTS OS on an Intel(R) Xeon(R) Gold 6134 CPU @ 3.20GHz (64-core). Additionally, we harnessed three NVIDIA GeForce RTX 3090 GPUs for both training and mutation.

Baselines. For RQ1 and RQ2, we compare CovRL-Fuzz with state-of-the-art JS engine fuzzers, which include heuristic fuzzing techniques such as bit/byte-level fuzzing (AFL (w/ Dict) [36]), grammar-level fuzzing (Superion [54]), and token-level fuzzing (Token-Level AFL [44]), and language model-based fuzzing techniques (Montage [26], COMFORT [60]).

In the case of Montage, it imports code from its test suite corpus, which might affect coverage by increasing the amount of executed code. As a result, we included a version of Montage (w/o Import) in our experimental study, which does not import the other test suites. In the case of COMFORT, we evaluated it solely with the
black-box fuzzer, excluding the differential testing component. Each tool was run on four JS engines with default configurations, the details of which can be found in Table 2. Also, as part of our ablation study in RQ2, we performed an experiment comparing our approach to LLM-based fuzzers [11, 58] that are not specifically designed for JS engines. This experiment involved using pretrained LLM-based mutation techniques, including mask mutation from TitanFuzz [11] and prompt mutation from FUZZ4ALL [58].

Metrics. We use three metrics for evaluation.

Code Coverage represents the range of the software's code that has been executed. We adopt edge coverage from AFL's coverage bitmap, following the FairFuzz [27] and Evaluate-Fuzz-Testing [23] settings. We conducted a comparison of coverage in two categories: total and valid. Total refers to the coverage across all test cases, while valid refers to the coverage for valid test cases. We also employed the Mann-Whitney U-test [34] to assess the statistical significance and verified that all p-values were less than 0.05.

Table 3: Comparison with other JS engine fuzzers in Table 2.

Target: V8
Fuzzer                Error(%)  Valid   Total   Improv Ratio(%)
AFL (w/ Dict)         96.90%    29,929  33,531  134.79%
Superion              77.35%    33,812  36,985  112.87%
Token-Level AFL       84.10%    39,582  42,303  86.11%
Montage               56.24%    38,856  40,155  96.06%
Montage (w/o Import)  94.08%    33,487  36,338  116.66%
COMFORT               79.66%    44,324  46,522  69.23%
CovRL-Fuzz            48.68%    75,240  78,729  -
thathasbeenexecuted.WeadoptedgecoveragefromtheAFL’s AFL(w/Dict) 74.42% 18,343 20,496 215.86% coverage bitmap, following FairFuzz [27] and Evaluate-Fuzz- Superion 72.02% 17,619 19,772 227.42% Token-LevelAFL 69.70% 52,385 53,719 20.51% Testing[23]settings.Weconductedacomparisonofcoverage JSC Montage 42.34% 55,511 56,861 13.85% intwocategories:totalandvalid.Totalreferstothecoverage Montage(w/oImport) 93.72% 43,861 47,754 35.57% acrossalltestcases,whilevalidreferstothecoverageforvalid COMFORT 79.64% 36,074 36,542 77.16% testcases.WealsoemployedtheMann-WhitneyU-test[34]to CovRL-Fuzz 48.59% 61,137 64,738 - assessthestatisticalsignificanceandverifiedthatallp-values AFL(w/Dict) 81.32% 83,038 87,587 27.30% werelessthan0.05. Superion 42.63% 92,314 94,237 18.32% • ErrorRatemeasurestherateofsyntaxerrorsandsemantic Chakra Token-LevelAFL 90.64% 92,621 95,677 16.54% Montage 82.21% 101,470 103,589 7.63% errors in the generated test cases. This provides insight into Montage(w/oImport) 94.72% 90,940 98,643 13.03% howeffectivelyeachmethodexploresthecorelogicinthetarget COMFORT 79.47% 81,171 83,142 34.11% software.Fordetailedanalysis,semanticerrorsarecategorized CovRL-Fuzz 54.87% 105,121 111,498 - intotypeerrors,referenceerrors,URIerrors,andinternalerrors AFL(w/Dict) 77.32% 9,307 14,259 63.03% basedontheECMAstandard[14].Itshouldbenotedthatwhile Superion 86.23% 8,944 15,061 54.35% COMFORT[60]utilizedjshint[22]formeasurement,focusing Token-LevelAFL 80.52% 14,361 17,152 35.53% Jerry theirerrorrateonsyntaxerrors,weusedJSengines,allowing Montage 95.55% 13,114 13,285 74.98% Montage(w/oImport) 95.34% 12,662 15,598 49.03% ustomeasuretheerrorrateincludingbothsyntaxandsemantic COMFORT 79.83% 12,268 14,026 65.74% errors. CovRL-Fuzz 58.84% 20,844 23,246 - • BugDetectioniswhatthefuzzeristryingtofind,whichmeans avulnerability. 
5.2 RQ1. Comparison against existing fuzzers

To answer RQ1, we ran all state-of-the-art heuristic and LM-based fuzzers listed in Table 2 with the same 24-hour timeout, and we repeated the experiments five times to account for the randomness of fuzzing.

Code Coverage. Table 3 depicts the valid and total coverage for each fuzzing technique. The results of our evaluation demonstrate that CovRL-Fuzz outperforms state-of-the-art JS engine fuzzers. Our observation revealed that CovRL-Fuzz attained the highest coverage across all target engines, resulting in an average increase of 102.62%/98.40%/19.49%/57.11% in edge coverage.

To emphasize the effectiveness of CovRL-Fuzz, we monitored the growth trend of edge coverage, depicted in Figure 3. In every experiment, CovRL-Fuzz consistently achieved the highest edge coverage more rapidly than any other fuzzer. In contrast to the heuristic baselines, CovRL-Fuzz immediately and significantly achieved higher coverage. This suggests that the LLM-based mutator of CovRL-Fuzz has a more potent ability to mutate than heuristic mutators for coverage-guided fuzzing. CovRL-Fuzz also achieved high coverage compared to the LM baselines. However, in ChakraCore, there was a marginal difference in coverage between Montage and CovRL-Fuzz, attributed to Montage's strategy of importing and executing code from its test suite corpus, resulting in higher coverage. We observed that CovRL-Fuzz obtained significantly higher coverage when compared to Montage (w/o Import).

Note that, while the other LM baselines did not account for training time, CovRL-Fuzz included the time required for CovRL finetuning during the experiment. Additionally, we observed that CovRL-Fuzz continues to increase coverage as it nears the 24-hour mark, which displays its effectiveness in obtaining coverage.

Syntax and Semantic Correctness. CovRL-Fuzz is not a grammar-level fuzzing approach that prioritizes syntax and semantic validity. However, it is assumed that CovRL-Fuzz, which uses reinforcement learning from a reward signal, can achieve higher validity compared to random fuzzing (such as Token-Level AFL). To verify this assumption, we evaluate the error rate of unique test cases. The experimental results are shown in Table 3.
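The error-rate metric evaluated here can be sketched as follows; the verdict labels are an assumed encoding of the engine's outcome, and the real measurements come from executing each generated test case in the JS engine:

```python
def error_rate(verdicts):
    # Fraction of generated test cases the JS engine rejects, counting both
    # syntax errors and semantic errors (type, reference, range, URI).
    errors = sum(v in ("syntax_error", "semantic_error") for v in verdicts)
    return errors / len(verdicts)
```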
CovRL-Fuzz demonstrated a lower error rate than Token-Level AFL for all JS engines. Furthermore, CovRL-Fuzz showed a lower error rate in comparison to most of the fuzzers. While it did not achieve the lowest error rate in JavaScriptCore and ChakraCore, CovRL-Fuzz still induced a significantly lower error rate compared to most of the baselines. Please note that the high error rate of Montage (w/o Import) is due to its inability to access functions from other test suites.

For a more detailed analysis of the error rate, we analyzed the types of errors triggered by the fuzzers on V8, which is the largest and most dependable JS engine, as shown in Figure 4. The results showed that CovRL-Fuzz triggered fewer syntax errors in comparison to the heuristic baselines. Furthermore, it also produced fewer syntax and…

Figure 3: Number of edge coverage between CovRL-Fuzz and other JS engine fuzzers. The solid line represents the average coverage, while the
shaded region depicts the range between the lowest and highest values over the five runs.

Figure 4: The error rate of generated test cases on V8. The four error types, excluding SyntaxError, are classified as Semantic Errors.

…semantic errors than the LM baselines, even without using the post-processing techniques used by COMFORT and Montage. These results indicate that CovRL-Fuzz is successful in reducing error rates exclusively through CovRL, without requiring heuristic post-processing.

Table 4: Unique bugs discovered by CovRL-Fuzz and compared JS engine fuzzers.

JS Engine  Bug Type
JSC        Undefined Behavior
JSC        Out-of-bounds Read
Chakra     Undefined Behavior
Chakra     Out of Memory
Chakra     Out of Memory
Jerry      Undefined Behavior
Jerry      Memory Leak
Jerry      Undefined Behavior
Jerry      Undefined Behavior
Jerry      Heap Buffer Overflow
Jerry      Out of Memory
Jerry      Stack Overflow
Jerry      Undefined Behavior
Jerry      Heap Buffer Overflow
SUM: AFL 2, Superion 3, Token-Level AFL 4, Montage 0, COMFORT 1, CovRL-Fuzz 13

Finding bugs. To study whether the coverage improvement and low error rate achieved by CovRL-Fuzz aid in detecting bugs, we conducted experiments on JS engines compiled in debug mode with ASAN. We relied on the output reports generated by ASAN for stack trace analysis to eliminate duplicate bugs. We also manually analyzed and categorized these results by bug types.

Table 5: Variants of the ablation study. Mutation Strategy refers to the method of mutation. Pretrained LLM denotes the Code-LLM used for mutation. Reward refers to the method of calculating rewards in CovRL.

Variants                         CGF  Mutation Strategy  Pretrained LLM       CovRL  Reward
Baseline (Token-Level AFL [44])  ✓    Random
Pretrained LLM-based Mutators
Incoder w/ Mask                  ✓    Mask               Incoder (1B) [16]
StarCoder w/ Mask                ✓    Mask               StarCoder (1B) [28]
StarCoder w/ Prompt              ✓    Prompt             StarCoder (1B) [28]
CodeT5+ w/ Mask                  ✓    Mask               CodeT5+ (220M) [55]

Table 4 shows the number and types of unique bugs found by CovRL-Fuzz and the compared fuzzers.
CovRL-Fuzz discovered the most unique bugs compared to the other fuzzers. In detail, CovRL-Fuzz found 13 unique bugs, and 8 of these bugs were exclusively detected by CovRL-Fuzz, including a stack overflow and a heap buffer overflow. These results highlight its effectiveness for bug detection. As observed in the experimental results, LM-based fuzzers, despite achieving higher coverage, tend to find fewer bugs, while heuristic fuzzers, although achieving lower coverage, generally find more bugs. However, irrespective of this trend, CovRL-Fuzz demonstrated superior performance in effectively discovering the most bugs.

Table 5 (continued):
Finetuned LLM-based Mutators
SFT                              ✓    Mask               CodeT5+ (220M) [55]
CovRL w/ CR                      ✓    Mask               CodeT5+ (220M) [55]  ✓      CR
CovRL w/ CRR                     ✓    Mask               CodeT5+ (220M) [55]  ✓      CRR
CovRL-Fuzz (w/ CWR)              ✓    Mask               CodeT5+ (220M) [55]  ✓      CWR

5.3 RQ2. Ablation study

To answer RQ2, we conducted an ablation study on two key components of CovRL-Fuzz: (1) We compared pretrained LLM-based mutators and CovRL-Fuzz in terms of the type of LLMs and mutation strategies. For this comparison, we utilized three LLMs and employed three mutation strategies. Note that, as our focus is solely on LLM-based mutation, we did not include LLM-based generators in our study's scope, and thus they were not considered for comparison. (2) For finetuned LLM-based mutators, we studied the impact of CovRL-based finetuning on CovRL-Fuzz, focusing on the use of the reward. The detailed configuration for each subject is shown in Table 5. It is important to note that we only consider the impact of LLM-based mutators in the context of coverage-guided fuzzing. Therefore, all variations have been evaluated with only the mutation component being replaced, based on AFL. Table 6 shows the coverage and error rate of our studied variants, which were…

Table 6: The ablation study, with each variant detailed in Table 5. The Improv (%) refers to the improvement ratio compared to the baseline.
Per target (V8 / JavaScriptCore / ChakraCore / JerryScript): Error(%), Valid coverage, Total coverage, Improv(%).

Baseline (Token-Level AFL [44]): V8 88.79% 44,705 53,936 -; JSC 87.45% 35,406 37,461 -; Chakra 78.98% 81,393 83,785 -; Jerry 87.39% 12,312 14,795 -
Pretrained LLM-based Mutators
Incoder w/ Mask: V8 49.08% 46,427 47,385 -12.15%; JSC 50.31% 44,191 44,643 19.17%; Chakra 40.96% 86,590 87,105 3.96%; Jerry 77.63% 11,977 12,851 -13.14%
StarCoder w/ Mask: V8 82.76% 53,779 56,256 4.30%; JSC 83.11% 42,007 43,136 15.15%; Chakra 82.34% 86,988 88,842 6.04%; Jerry 90.56% 12,174 13,817 -6.61%
StarCoder w/ Prompt: V8 82.72% 41,331 45,034 -16.50%; JSC 87.90% 45,545 47,568 26.98%; Chakra 83.85% 84,597 87,351 4.26%; Jerry 82.35% 13,777 15,413 4.18%
CodeT5+ w/ Mask: V8 62.68% 55,459 56,576 4.89%; JSC 55.40% 41,523 42,385 13.14%; Chakra 45.25% 86,043 86,858 3.67%; Jerry 78.48% 12,833 14,068 -4.91%
Finetuned LLM-based Mutators
SFT: V8 74.91% 58,230 61,947 14.85%; JSC 65.57% 41,211 43,959 17.34%; Chakra 69.96% 92,022 95,334 13.78%; Jerry 74.76% 15,927 18,688 26.31%
CovRL w/ CR: V8 71.77% 55,678 57,735 7.04%; JSC 53.00% 47,116 49,083 31.02%; Chakra 67.50% 92,465 94,145 12.36%; Jerry 73.35% 16,689 18,629 25.91%
CovRL w/ CRR: V8 74.15% 57,401 61,331 13.71%; JSC 69.57% 37,230 43,369 15.77%; Chakra 65.47% 91,427 94,785 13.13%; Jerry 75.34% 16,118 18,584 25.61%
CovRL-Fuzz (w/ CWR): V8 61.53% 71,319 74,574 38.26%; JSC 49.60% 56,370 58,340 55.74%; Chakra 58.42% 96,257 98,221 17.23%; Jerry 58.59% 17,481 19,855 34.20%

…conducted by running them five times for five hours each, and the results were averaged.

Pretrained LLM-based Mutators. We analyzed the results depending on the pretrained LLM and the mutation strategies used for the LLM-based mutators. We studied CodeT5+ w/ Mask, which is utilized in CovRL-Fuzz. For comparison, we conducted a study using two other LLMs and mutation strategies: Incoder w/ Mask, StarCoder w/ Mask, and StarCoder w/ Prompt. These two LLMs were used as pretrained LLM-based mutators in TitanFuzz [11] and Fuzz4ALL [58] respectively, as control groups. We simply implemented the prompt mutation by adding mutation instructions (e.g. "Please create a mutated program that modifies the previous generation.").

Table 7: Ablation: impact of the finetuning epochs (V8).

Epoch    Error(%)  Valid   Total
0 Epoch  74.91%    58,230  61,947
1 Epoch  61.53%    71,319  74,574
2 Epoch  59.43%    66,017  69,764
3 Epoch  56.93%    67,079  69,517

…In the experimental results, we observed that the application…

Table 8: Ablation: impact of α.

α        0.0     0.2     0.4     0.6     0.8     1.0
Valid    68,248  68,635  69,247  71,319  69,955  69,330
Total    71,906  71,692  72,415  74,574  72,623  72,218

…of a pretrained LLM-based mutator, in comparison to the baseline which mutates randomly, resulted in a notable decrease in error rates. On the other hand, finetuned LLM-based mutators, including CovRL-Fuzz, consistently showed significantly higher coverage improvements compared to the baseline. This suggests that finetuning a pretrained LLM-based mutator is more effective for coverage-guided fuzzing than using it as is. Additionally, we observed that the type and size of the LLM did not have a substantial impact on the increase in coverage. Although the pretrained LLM of CodeT5+ w/ Mask is just one-fifth the size of the other two models, the degree of improvement in coverage was not markedly different.

…rate by 28.62%. This is a notable improvement, especially when compared to the other two rewarding processes, which were almost similar to SFT. Particularly considering that training time is not included for SFT, this demonstrates that CovRL-Fuzz contributes not only to guiding the LLM to obtain more coverage but also to decreasing the error rate.

Finetuned LLM-based Mutators. To demonstrate the effectiveness of CovRL-Fuzz, we froze the type and size of the LLM and the mutation strategy to examine the impact of different coverage rewards. The variants studied include: SFT, CovRL w/ CR, CovRL w/ CRR, and CovRL-Fuzz (w/ CWR). For SFT, we trained with our training dataset for the masked language model task. The experiment did not account for the training time, which was conducted independently. The training consisted of 10 epochs.

Impact Components. We further conducted experiments on V8 to study two major impact components of CovRL-Fuzz: the CovRL-based finetuning epochs and α. As with the earlier ablation studies, we conducted each experiment for 5 hours, repeated 5 times. In order to ensure fairness, any training time that exceeded one epoch was not included in the experiment duration for the CovRL-based finetuning epochs.

Table 7 compares the coverage based on the finetuning epochs. Our observation revealed a negative correlation between the number
of epochs and the error rate, indicating that as the number of epochs rose, the error rate decreased. However, this decrease in error rate was also followed by a decrease in coverage. It indicates that overfitting starts at the second epoch, which may restrict the generation of diverse test cases.

For rewarding, we additionally designed a simple binary rewarding process, termed "Coverage Reward (CR)". Under this process, a reward of 1 is given to test cases that achieve new coverage, while a penalty of 0 is assigned to those that do not.

Table 8 represents the comparison of coverage based on different values of α. α refers to the momentum rate in Eq. 7, which adjusts the weight between the previous and current IDF^cov. The experimental results demonstrated that applying a momentum rate of 0.6 led to better results compared to the absence of momentum.

In the experimental results, CovRL-Fuzz achieved the highest coverage, both valid and total, compared to all control groups, and exhibited a low error rate. Furthermore, compared to the baseline, CovRL-Fuzz showed an average increase of 36.36% in total coverage and 44.75% in valid coverage, while reducing the error…

…caused the logic to call await n(); repeatedly, which ultimately led to the bug.

Listing 2 represents a minimized test case generated by CovRL-Fuzz, causing a heap buffer overflow in the release version of JerryScript 3.0.0. The bug occurs when a function declaration comes

1  function i ( t ) { }
2  async function n ( t ) {
3    if ( t instanceof i ) {
4      let c = await i ( ) ;
5      await c >> i ( n ) ;
6    } else {
7      var c = await n ( ) ;
8    }
9  }
10 n ( true ) ;

Listing 1: The test case that triggers an out-of-bounds read on ChakraCore 1.13.0.0-beta (#13).

1 class s extends WeakMap {
2   static {} ;
3 }
4 function f ( )

Listing 2: The test case that triggers a heap buffer overflow on JerryScript 3.0.0 (#23).

…on the line following the declaration of a static initialization block in a class. When the parser read the statement, it did not correctly distinguish the static initialization block range. As a result, memory corruption occurred when parsing the function statement. In contrast to other fuzzing tools, CovRL-Fuzz is grammatically somewhat free and allows for context-aware mutation; this feature led to the discovery of this bug. Our case study confirmed that these bugs can be triggered only by CovRL-Fuzz. This demonstrates the effectiveness of CovRL-Fuzz in detecting real-world bugs.

6 DISCUSSION

We discuss three properties of CovRL-Fuzz in the following.

Diversity and validity. To ensure diversity, we conducted experiments with seven fuzzers targeting four major JS engines: V8, JavaScriptCore, ChakraCore, and JerryScript. Theoretically, adhering to syntax and semantics implies more constraints in mutation, which can make it more challenging to increase coverage. However, CovRL-Fuzz achieved higher coverage while maintaining a low error rate of test cases (as shown in Figure 3 and Table 3). This allowed CovRL-Fuzz to explore deeper code areas and detect more bugs compared to existing fuzzers.

5.4 RQ3. Real-World Bugs

In this section, we evaluated the ability of CovRL-Fuzz to find real-world bugs during a certain period of fuzzing. Specifically, we investigated how many real-world bugs CovRL-Fuzz can find and whether it can discover previously unknown vulnerabilities. Thus,
weevaluatedwhetherCovRL-Fuzzcanfindreal-worldbugsfor1 TimespentbetweenfuzzingandCovRL-basedfinetuning.As weekforeachtarget.Wetestedthelatestversionofeachtarget mentionedintheexperimentalsetup,wecalculatedthefuzzingtime engineasofJanuary2023andfoundatotalof48bugs,including39 forthefairnessoffuzzing,includingthetimespentonfinetuning previouslyunknownvulnerabilitieswith11CVEs,someofwhich intheexperimentalresults.Onaverage,finetuningoccursfor10 wereinternallyfixedinthenewerversions. minutesevery2.5hoursoffuzzing.Despiteincludingthefinetuning Table9illustratesadescriptionofthediscoveredbugs.'Reported' timeintheexperiment,CovRL-Fuzzachievedhighcoveragewhile intheStatuscolumnmeansthatCovRL-Fuzzwastheonlyfuzzer alsodecreasingtheerrorrate. thatdiscoveredthebug,anditwasreportedbecauseitpersisted inthelatestversion.'InternalFixed'referstoabugthatexisted Supportingothertargets.Throughfinetuning,thecoreideaof inacertainversionbutwasnotreportedseparatelyasavulner- guidingcoverageinformationdirectlywiththeLLM-basedmutator abilityandwasfixedinthenextversion.Ifabugwasfixedafter isactuallylanguage-agnostic,whichsuggestsitsapplicabilityto itwasreported,itislabeledas'Reported/Fixed'.Additionally,if otherlanguageinterpretersorcompilers.However,ourfocuswas abugwasinthelatestversiondespitebeingpreviouslyreported, moreonanalyzingthesuitabilityofourideatoexistingtechniques itislabeledas'Confirmed'.CovRL-Fuzzfoundavarietyofbugs thansupportingvariouslanguages.Therefore,weconductedex- including undefined behaviors like assertion failures as well as perimentsonlyonJSengines,whichwedeemedtohavethemost memoryvulnerabilitiessuchasbufferoverflowanduseafterfree. impact.Extendingtoothertargetsisleftasfuturework. Notethat,theexperimentwascarriedoutusingonly3coresand forarelativelyshortduration.Incontrast,otherfuzzingtechniques 7 CONCLUSION haveutilizedanaverageofaround30coresandhavedonetheir WeintroducedCovRL-Fuzz,anovelLLM-basedcoverage-guided experimentsforawholemonth[26,44,54,60]. 
fuzzingframeworkthatintegratescoverage-guidedreinforcement Despitethesesignificantconstraints,CovRL-Fuzzwasstillable learning for the first time. This approach enhances LLM-based tofindasubstantialnumberofunknownbugs.Thissuggeststhat fuzzingbyleveragingcoveragefeedbacktogenerateinputsthat CovRL-Fuzzdemonstratedtheeffectivenessinfindingreal-world achieve broader coverage and deeper exploration of code logic bugsonJSengines. without syntax limitations. Our evaluation results affirmed the CaseStudy.Listing1representsaminimizedtestcasegeneratedby superiorefficacyoftheCovRL-Fuzzmethodologyincomparison CovRL-Fuzz.Thiscodetriggeredanout-of-boundsreadbuginthe toexistingfuzzingstrategies.Impressively,itdiscovered48real- ChakraCore1.13.0,causinganabnormalterminationoftheJSen- worldsecurity-relatedbugswith11CVEsinJSengines—among gine.Theoriginalseeddoesnotassignawaittovar c.CovRL-Fuzz these, 39 were previously unknown vulnerabilities. We believe changedittovar c=await n();andaddedtheawaitstatement thatourmethodologypavesthewayforfuturestudiesfocusedon online5,andalsochangedtheconditionoftheifconditional.This harnessingLLMswithcoveragefeedbackforsoftwaretesting.JueonEom,SeyeonJeong,andTaekyoungKwon Table9:SummaryofDetectedReal-WorldBugs:Thistabledetails48bugsidentifiedinJavaScriptenginesbyourstudy(CovRL-Fuzz),including |
11 that were classified as CVEs. Notably, 39 of these bugs were previously unknown.

#  | JS Engine | Buggy Function | Bug Type | Status | Bug ID
1  | V8 | NewFixedArray | Invalid size error | Confirmed | Issue*
2  | V8 | Builtins_ArrayPrototypeSort | Out of Memory | Confirmed | Issue*
3  | JSC | isSymbol | Out-of-bounds Read | Internal Fixed | Bug*
4  | JSC | allocateBuffer | Crash by load() | Confirmed | Bug*
5  | JSC | fixupArrayIndexOf | Use After Free | Internal Fixed | Bug*
6  | Chakra | DeleteProperty | Undefined Behavior | Confirmed | Issue*
7  | Chakra | PreVisitFunction | Out of Memory | Confirmed | Issue*
8  | Chakra | ParseDestructuredObjectLiteral | Undefined Behavior | Reported | Issue*
9  | Chakra | RepeatCore | Out of Memory | Reported | Issue*
10 | Chakra | GetSz | Out of Memory | Reported | Issue*
11 | Chakra | UtcTimeFromStrCore | Undefined Behavior | Reported | Issue*
12 | Chakra | ToString | Undefined Behavior | Reported | Issue*
13 | Chakra | TypePropertyCacheElement | Out-of-bounds Read | Reported | Issue*
14 | Jerry | parser_parse_class | Undefined Behavior | Confirmed | Issue*
15 | Jerry | jmem_heap_finalize | Undefined Behavior | Confirmed | Issue*
16 | Jerry | parser_parse_statements | Undefined Behavior | Reported | Issue*
17 | Jerry | ecma_builtin_typedarray_prototype_sort | Heap Buffer Overflow | Reported | CVE-*-*
18 | Jerry | ecma_regexp_parse_flags | Undefined Behavior | Reported | CVE-*-*
19 | Jerry | vm_loop | Undefined Behavior | Reported | CVE-*-*
20 | Jerry | ecma_big_uint_div_mod | Undefined Behavior | Reported | CVE-*-*
21 | Jerry | jmem_heap_alloc | Out of Memory | Reported | CVE-*-*
22 | Jerry | scanner_literal_is_created | Heap Buffer Overflow | Reported | CVE-*-*
23 | Jerry | parser_parse_function_statement | Heap Buffer Overflow | Reported | CVE-*-*
24 | Jerry | ecma_property_hashmap_create | Undefined Behavior | Reported | CVE-*-*
25 | Jerry | parser_parse_for_statement_start | Undefined Behavior | Reported | CVE-*-*
26 | Jerry | jmem_heap_alloc | Stack Overflow | Reported | Issue*
27 | Jerry | scanner_is_context_needed | Heap Buffer Overflow | Reported | CVE-*-*
28 | QJS | js_proxy_isArray | Stack Overflow | Reported/Fixed | CVE-*-*
29 | Jsish | jsiEvalCodeSub | Out-of-bounds Read | Reported | Issue*
30 | Jsish | IterGetKeysCallback | Stack Overflow | Reported | Issue*
31 | Jsish | Jsi_DecrRefCount | Use After Free | Reported | Issue*
32 | Jsish | SplitChar | Use After Free | Reported | Issue*
33 | escargot | parseLeftHandSideExpression | Undefined Behavior | Reported | Issue*
34 | escargot | generateExpressionByteCode | Undefined Behavior | Reported | Issue*
35 | escargot | generateStatementByteCode | Undefined Behavior | Reported | Issue*
36 | escargot | hasRareData | Out-of-bounds Read | Reported | Issue*
37 | escargot | readPointerIsNumberEncodedValue | Out-of-bounds Read | Reported | Issue*
38 | escargot | TightVector | Out-of-bounds Read | Reported | Issue*
39 | escargot | setupAlternativeOffsets | Stack Overflow | Reported | Issue*
40 | escargot | setMutableBindingByBindingSlot | Undefined Behavior | Reported | Issue*
41 | escargot | redefineOwnProperty | Undefined Behavior | Reported | Issue*
42 | escargot | asPointerValue | Undefined Behavior | Reported | Issue*
43 | escargot | addOptionalChainingJumpPosition | Undefined Behavior | Reported | Issue*
44 | escargot | lastFoundPropertyIndex | Stack Overflow | Reported | Issue*
45 | escargot | setMutableBindingByIndex | Undefined Behavior | Reported | Issue*
46 | escargot | VectorCopier | memcpy-param-overlap | Reported | Issue*
47 | Espruino | jsvStringIteratorPrintfCallback | Out-of-bounds Read | Reported | Issue*
48 | Espruino | jspeFactorFunctionCall | Stack Overflow | Reported | Issue*

REFERENCES

[1] Ahmad, W., Chakraborty, S., Ray, B., and Chang, K.-W. Unified pre-training for program understanding and generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021), pp. 2655–2668.
[2] Aschermann, C., Frassetto, T., Holz, T., Jauernig, P., Sadeghi, A.-R., and Teuchert, D. Nautilus: Fishing for deep bugs with grammars. In Proceedings 2019 Network and Distributed System Security Symposium (2019).
[3] Austin, J., Odena, A., Nye, M., Bosma, M., Michalewski, H., Dohan, D., Jiang, E., Cai, C., Terry, M., Le, Q., et al. Program synthesis with large language models, 2021.
[4] Böttinger, K., Godefroid, P., and Singh, R. Deep reinforcement fuzzing. In 2018 IEEE Security and Privacy Workshops (SPW) (2018), IEEE, pp. 116–122.
[5] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. In Advances in Neural Information Processing Systems (2020), vol. 33, pp. 1877–1901.
[6] Charlie Miller. Fuzz by number. https://www.ise.io/wp-content/uploads/2019/11/cmiller_cansecwest2008.pdf, 2008. Accessed: 2024-01-12.
[7] Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. d. O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., et al. Evaluating large language models trained on code, 2021.
[8] Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., et al. Palm: Scaling language modeling with pathways, 2022.
[9] Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., et al. Training verifiers to solve math word problems, 2021.
[10] Cummins, C., Petoumenos, P., Murray, A., and Leather, H. Compiler fuzzing through deep learning. In Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis (2018), pp. 95–105.
[11] Deng, Y., Xia, C. S., Peng, H., Yang, C., and Zhang, L. Large language models are zero-shot fuzzers: Fuzzing deep-learning libraries via large language models. In Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2023) (2023).
[12] Deng, Y., Xia, C. S., Yang, C., Zhang, S. D., Yang, S., and Zhang, L. Large language models are edge-case fuzzers: Testing deep learning libraries via fuzzgpt, 2023.
[13] Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding, 2018.
[14] ECMA International. Ecmascript language specification. https://www.ecma-international.org/ecma-262/, 1997. Accessed: 2023-08-15.
[15] Fan, Z., Gao, X., Mirchev, M., Roychoudhury, A., and Tan, S. H. Automated repair of programs from large language models. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE) (2023), IEEE, pp. 1469–1481.
[16] Fried, D., Aghajanyan, A., Lin, J., Wang, S., Wallace, E., Shi, F., Zhong, R., Yih, S., Zettlemoyer, L., and Lewis, M. Incoder: A generative model for code infilling and synthesis. In The Eleventh International Conference on Learning Representations (2022).
[17] Godefroid, P., Peleg, H., and Singh, R. Learn&fuzz: Machine learning for input fuzzing. In 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE) (2017), ASE 2017, IEEE, pp. 50–59.
[18] Google. Chromium issue 729991. https://bugs.chromium.org/p/chromium/issues/detail?id=729991, 2017. Accessed: 2023-08-14.
[19] Han, H., Oh, D., and Cha, S. K. Codealchemist: Semantics-aware code generation to find vulnerabilities in javascript engines. In Proceedings 2019 Network and Distributed System Security Symposium (2019).
[20] He, X., Xie, X., Li, Y., Sun, J., Li, F., Zou, W., Liu, Y., Yu, L., Zhou, J., Shi, W., et al. Sofi: Reflection-augmented fuzzing for javascript engines. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security (2021), ACM, pp. 2229–2242.
[21] hoongwoo Han. js-vuln-db. https://github.com/tunz/js-vuln-db, 2010. Accessed: 2023-08-15.
[22] JSHint. Jshint: A javascript code quality tool. https://jshint.com/, 2013. Accessed: 2023-08-15.
[23] Klees, G., Ruef, A., Cooper, B., Wei, S., and Hicks, M. Evaluating fuzz testing. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (2018), ACM, pp. 2123–2138.
[24] Le, H., Wang, Y., Gotmare, A. D., Savarese, S., and Hoi, S. C. H. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. Advances in Neural Information Processing Systems 35 (2022), 21314–21328.
[25] Lee, H., Phatale, S., Mansoor, H., Lu, K., Mesnard, T., Bishop, C., Carbune, V., and Rastogi, A. Rlaif: Scaling reinforcement learning from human feedback with ai feedback, 2023.
[26] Lee, S., Han, H., Cha, S. K., and Son, S. Montage: A neural network language model-guided javascript engine fuzzer. In 29th USENIX Security Symposium (USENIX Security 20) (2020), USENIX Association, pp. 2613–2630.
[27] Lemieux, C., and Sen, K. Fairfuzz: A targeted mutation strategy for increasing greybox fuzz testing coverage. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering (2018), pp. 475–485.
[28] Li, R., Allal, L. B., Zi, Y., Muennighoff, N., Kocetkov, D., Mou, C., Marone, M., Akiki, C., Li, J., Chim, J., et al. Starcoder: may the source be with you! arXiv preprint arXiv:2305.06161 (2023).
[29] Li, X., Liu, X., Chen, L., Prajapati, R., and Wu, D. Alphaprog: reinforcement generation of valid programs for compiler fuzzing. In Proceedings of the AAAI Conference on Artificial Intelligence (2022), pp. 12559–12565.
[30] Li, X., Liu, X., Chen, L., Prajapati, R., and Wu, D. Fuzzboost: Reinforcement compiler fuzzing. In Information and Communications Security: 24th International Conference, ICICS 2022, Canterbury, UK, September 5–8, 2022, Proceedings (Berlin, Heidelberg, 2022), Springer-Verlag, pp. 359–375.
[31] Liu, J., Zhu, Y., Xiao, K., Fu, Q., Han, X., Yang, W., and Ye, D. Rltf: Reinforcement learning from unit test feedback, 2023.
[32] Liu, X., Li, X., Prajapati, R., and Wu, D. Deepfuzz: Automatic generation of syntax valid c programs for fuzz testing. In Proceedings of the AAAI Conference on Artificial Intelligence (2019), pp. 1044–1051.
[33] Loshchilov, I., and Hutter, F. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 (2017).
[34] Mann, H. B., and Whitney, D. R. On a test of whether one of two random variables is stochastically larger than the other. The Annals of Mathematical Statistics (1947), 50–60.
[35] Matt Molinyawe, Abdul-Aziz Hariri, J. S. $hell on earth: From browser to system compromise. In Black Hat USA (2016).
[36] Michal Zalewski. Afl: American fuzzy lop. https://lcamtuf.coredump.cx/afl/, 2013. Accessed: 2023-08-15.
[37] Mihai Bazon. uglifyjs. https://github.com/mishoo/UglifyJS, 2010. Accessed: 2023-08-14.
[38] Miller, B. P., Fredriksen, L., and So, B. An empirical study of the reliability of unix utilities. Communications of the ACM 33, 12 (Dec. 1990), 32–44.
[39] Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35 (2022), 27730–27744.
[40] Park, S., Xu, W., Yun, I., Jang, D., and Kim, T. Fuzzing javascript engines with aspect-preserving mutation. In 2020 IEEE Symposium on Security and Privacy (SP) (2020), IEEE, pp. 1629–1642.
[41] Patra, J., and Pradel, M. Learning to fuzz: Application-independent fuzz testing with probabilistic, generative models of input data. Tech. rep., TU Darmstadt, Department of Computer Science, 2016.
[42] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21, 1 (Jan. 2020), 5485–5551.
[43] Roit, P., Ferret, J., Shani, L., Aharoni, R., Cideron, G., Dadashi, R., Geist, M., Girgin, S., Hussenot, L., Keller, O., et al. Factually consistent summarization via reinforcement learning with textual entailment feedback, 2023.
[44] Salls, C., Jindal, C., Corina, J., Kruegel, C., and Vigna, G. Token-level fuzzing. In 30th USENIX Security Symposium (USENIX Security 21) (2021), USENIX Association, pp. 2795–2809.
[45] Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms, 2017.
[46] Serebryany, K., Bruening, D., Potapenko, A., and Vyukov, D. AddressSanitizer: A fast address sanity checker. In 2012 USENIX Annual Technical Conference (USENIX ATC 12) (2012), pp. 309–318.
[47] Shojaee, P., Jain, A., Tipirneni, S., and Reddy, C. K. Execution-based code generation using deep reinforcement learning, 2023.
[48] Sparck Jones, K. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation 28, 1 (1972), 11–21.
[49] Su, Y., Lan, T., Wang, Y., Yogatama, D., Kong, L., and Collier, N. A contrastive framework for neural text generation. Advances in Neural Information Processing Systems 35 (2022), 21548–21561.
[50] Technical Committee 39, ECMA International. Test262. https://github.com/tc39/test262, 2010. Accessed: 2023-08-15.
[51] Veggalam, S., Rawat, S., Haller, I., and Bos, H. Ifuzzer: An evolutionary interpreter fuzzer using genetic programming. In Computer Security – ESORICS 2016: 21st European Symposium on Research in Computer Security, Heraklion, Greece, September 26-30, 2016, Proceedings, Part I 21 (2016), Springer, Cham, pp. 581–601.
[52] W3Techs. Usage statistics of javascript as client-side programming language on websites. https://w3techs.com/technologies/details/cp-javascript, 2024. Accessed: 2024-01-17.
[53] Wang, J., Chen, B., Wei, L., and Liu, Y. Skyfire: Data-driven seed generation for fuzzing. In 2017 IEEE Symposium on Security and Privacy (SP) (2017), IEEE, pp. 579–594.
[54] Wang, J., Chen, B., Wei, L., and Liu, Y. Superion: Grammar-aware greybox fuzzing. In 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE) (2019), IEEE, pp. 724–735.
[55] Wang, Y., Le, H., Gotmare, A. D., Bui, N. D., Li, J., and Hoi, S. C. Codet5+: Open code large language models for code understanding and generation, 2023.
[56] Wang, Y., Wang, W., Joty, S., and Hoi, S. C. Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021), pp. 8696–8708.
[57] Wei, J., Bosma, M., Zhao, V., Guu, K., Yu, A. W., Lester, B., Du, N., Dai, A. M., and Le, Q. V. Finetuned language models are zero-shot learners. In International Conference on Learning Representations (2021).
[58] Xia, C. S., Paltenghi, M., Tian, J. L., Pradel, M., and Zhang, L. Universal fuzzing via large language models. arXiv preprint arXiv:2308.04748 (2023).
[59] Xia, C. S., and Zhang, L. Less training, more repairing please: revisiting automated program repair via zero-shot learning. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (2022), pp. 959–971.
[60] Ye, G., Tang, Z., Tan, S. H., Huang, S., Fang, D., Sun, X., Bian, L., Wang, H., and Wang, Z. Automated conformance testing for javascript engines via deep compiler fuzzing. In Proceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation (2021), ACM, pp. 435–450.
[61] Ziegler, A., Kalliamvakou, E., Li, X. A., Rice, A., Rifkin, D., Simister, S., Sittampalam, G., and Aftandilian, E. Productivity assessment of neural code completion. In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming (2022), pp. 21–29.
arXiv:2402.12642

Rampo: A CEGAR-based Integration of Binary Code Analysis and System Falsification for Cyber-Kinetic Vulnerability Detection

Kohei Tsujio, Mohammad Abdullah Al Faruque, Yasser Shoukry
Department of Electrical Engineering and Computer Science
University of California, Irvine

Abstract—Cyber-physical systems (CPS) play a pivotal role in modern critical infrastructure, spanning sectors such as energy, transportation, healthcare, and manufacturing. These systems combine digital and physical elements, making them susceptible to a new class of threats known as cyber kinetic vulnerabilities. Such vulnerabilities can exploit weaknesses in the cyber world to force physical consequences and pose significant risks to both human safety and infrastructure integrity. This paper presents a novel tool, named Rampo, that can perform binary code analysis to identify cyber kinetic vulnerabilities in CPS. The proposed tool takes as input a Signal Temporal Logic (STL) formula that describes the kinetic effect—i.e., the behavior of the "physical" system—that one wants to avoid. The tool then searches the possible "cyber" trajectories in the binary code that may lead to such "physical" behavior. This search integrates binary code analysis tools and hybrid systems falsification tools using a Counter-Example Guided Abstraction Refinement (CEGAR) approach. In particular, Rampo starts by analyzing the binary code to extract symbolic constraints that represent the different paths in the code. These symbolic constraints are then passed to a Satisfiability Modulo Theories (SMT) solver to extract the range of control signals that can be produced by each of the paths in the code. The next step is to search over possible "physical" trajectories using a hybrid systems falsification tool that adheres to the behavior of the "cyber" paths and yet leads to violations of the STL formula. Since the number of "cyber" paths that need to be explored increases exponentially with the length of "physical" trajectories, we iteratively perform refinement of the "cyber" path constraints based on the previous falsification result and traverse the abstract path tree obtained from the control program to explore the search space of the system. To illustrate the practical utility of binary code analysis in identifying cyber kinetic vulnerabilities, we present case studies from diverse CPS domains, showcasing how such vulnerabilities can be discovered in their control programs. In particular, compared to off-the-shelf tools, our tool could compute the same number of vulnerabilities while leading to a speedup that ranges from 3× to 98×.

I. INTRODUCTION

In an era where the backbone of modern society—power grids, transportation networks, healthcare facilities, and industrial control systems—relies extensively on cyber-physical systems (CPS), understanding and protecting against vulnerabilities has never been more critical. In particular, the convergence of physical and digital systems in critical CPS infrastructure has introduced new dimensions of risk, giving rise to a class of threats known as cyber kinetic vulnerabilities [1]–[3]. These vulnerabilities represent a unique breed of cyber-physical security challenges, where the exploitation of software vulnerabilities can lead to potentially catastrophic physical consequences. That is, cyber kinetic vulnerabilities blur the lines between digital and physical security, requiring a holistic approach that can analyze software controlling physical processes from the point of view of the induced physical kinetic behavior.

Unfortunately, there is currently a disconnect within the field of CPS formal verification and analysis. On one side, the celebrated achievements in the software and hardware verification field [4]–[8] are inadequate for analyzing the dynamics of the underlying physical components of CPS. On the other hand, the techniques developed in the control theory community have focused mainly on the formal verification of the dynamics of physical/mechanical systems [9]–[13] while ignoring the complexity of the computational and cyber components. This paper aims to provide a comprehensive framework for tackling cyber kinetic vulnerabilities, one that not only identifies potential threats but also evaluates their physical consequences. In a rapidly evolving digital landscape, this integrated approach is crucial for protecting critical infrastructure and ensuring the safety and reliability of essential services for society at large.

This paper delves into the compelling realm of binary code analysis and falsification as an indispensable facet of safeguarding CPS against cyber kinetic vulnerabilities. Binary code, the low-level representation of software, is the critical interface where the virtual and physical worlds intersect. By subjecting binary code to rigorous scrutiny, researchers and security professionals can unearth vulnerabilities that are invisible at higher abstraction levels and thus lay the foundation for enhancing the resilience of critical infrastructure. In particular, compared to code-level analysis, binary code analysis provides several advantages, including (i) Visibility into Compiled Code: binary code analysis involves examining the compiled form of a software application, providing insights into the actual instructions executed by the processor. This level of analysis is essential for understanding how the software interacts with the hardware, as it focuses on the executable, machine-readable code; (ii) Reverse Engineering: reverse engineering binary code can reveal hidden or obfuscated vulnerabilities, which might not be readily apparent at the source code level. It is a valuable technique for understanding proprietary or closed-source software; and (iii) Legacy Systems: in several CPS applications, only the binary code is available, especially for older or proprietary software. Binary code analysis is the
Fig. 1. Rampo takes as input the binary code of the control program, a black-box model that represents the physics of the system, and cyber-kinetic safety requirements captured in STL. The outputs are the cyber trajectories inside the binary code and the corresponding physical system behavior that can lead to violation of the STL requirements. Moreover, Rampo can also output attacks on the system's sensors that can also lead to kinetic vulnerabilities.

primary method for assessing the security of such systems.

This paper introduces a novel tool called Rampo. As shown in Figure 1, Rampo takes as an input a Signal Temporal Logic (STL) formula that describes the kinetic effects one would like to avoid. For example, this STL formula can encode the requirement that the voltages and currents of the electric distribution buses should be within acceptable safe ranges, drones should approach landing sites with specific velocity profiles to avoid a crash, or the machinery in the industrial control systems should operate within safe operation regimes. The objective of our tool is to simultaneously identify vulnerabilities in the binary code used to control this CPS along with physical attacks (e.g., changes in the sensor measurements) that can lead to a violation of the STL formula and, hence, an undesired kinetic vulnerability. In other words, Rampo is designed to simultaneously analyze the binary code and the dynamics of the physical system to identify vulnerabilities in both the binary code and the physical system that can lead to kinetic consequences.

The design of a tool that can simultaneously analyze the binary code and the dynamical behavior of physical systems within CPS faces two significant complexity challenges. First, performing such analysis necessitates tools that can reason about constraints defined over discrete and real variables, respectively. Unfortunately, tools like Satisfiability Modulo Theories (SMT) solvers and Mixed-Integer Programming (MIP) do not scale well for complex systems. Second, due to the dynamic behavior of CPS, one needs to analyze temporal trajectories over both the cyber and physical systems. That is, while existing binary code analysis focuses on analyzing all the possible paths inside the code within one execution of the code, in CPS, the control code is executed periodically, where the path taken within the same binary code may differ from one time to another depending on the state of the physical system. Hence, consider, for the sake of an example, a simple code with three different possible paths (corresponding to different if-else conditions in the code). A classical binary code analysis will aim to cover all these three paths. Nevertheless, considering temporal trajectories of 10 time steps in a CPS application, one must consider all 3^10 = 59049 possible paths that may arise from computing the control signal 10 times within a trajectory, a daunting computational problem.

Fig. 2. An example of a code with three paths (marked with blue, green, and yellow). When considering the temporal evolution of a CPS, one needs to take into account the different combinations of these three paths across time, which leads to an exponential increase in the overall number of paths.

To solve the intertwined challenges mentioned above, Rampo integrates two state-of-the-art techniques from binary analysis and hybrid systems falsification into a unified framework using a Counter-Example Guided Abstraction Refinement (CEGAR) approach [14].
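To make the path-explosion arithmetic above concrete, the short sketch below (plain Python, purely illustrative and not part of Rampo; the function names are our own) counts and, for tiny cases, enumerates the path sequences that arise when a program with k paths is executed once per control step over a horizon of H steps:

```python
from itertools import product

def count_cyber_trajectories(k: int, horizon: int) -> int:
    """Number of distinct path sequences of length `horizon` for a
    control program with `k` paths: one independent choice per step."""
    return k ** horizon

def enumerate_cyber_trajectories(k: int, horizon: int):
    """Explicit enumeration; feasible only for very small k and horizon."""
    return list(product(range(1, k + 1), repeat=horizon))

# A program with 3 paths, executed at each of 10 control steps:
print(count_cyber_trajectories(3, 10))          # 59049, matching 3^10 in the text
print(len(enumerate_cyber_trajectories(3, 5)))  # 243 -- already large at horizon 5
```

Even this toy computation shows why covering cyber trajectories by brute force is hopeless, motivating the abstraction-refinement loop described next.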
In particular, we use binary analysis tools to perform symbolic execution over the binary code to extract SMT constraints representing different paths insidethecode.UsingSMTsolvers,wesolveanoptimization problem to identify the bounds on the control signal that can be generated from every possible path in the binary code. To avoid the combinatorial explosion resulting from considering different paths along a temporal trajectory of the CPS, we abstract several possibilities of code path executions and use this abstraction to guide a falsification of the physical system againsttheSTLrequirements.Falsificationtoolsusestochastic optimization algorithms to find trajectories. If a trajectory of the physical system is found, we refine our abstraction using the information extracted from this trajectory (also referred to as an abstract counter-example). By refining the abstraction iteratively, we can ensure the coverage of all possible paths insidethebinarycodealongthetemporalevolutionoftheCPS. In summary, this paper introduces several novel contributions summarized as follows: • We propose a novel framework to identify cyber-kinetic and cyber-physical-kinetic vulnerabilities in software used to control physical systems. • We propose a Counter-Example Guided Abstraction Re- finement approach to harness the exponential growth in the search space. • We provide an ablation study that shows the benefits of the proposed framework, both in terms of the effective- ness of finding vulnerabilities and in terms of scaling more favorably to off-the-shelf tools. II. RELATEDWORKS AnalyzingCPSsoftwareforcorrectnessandsafetyhasbeen an active area in the past few decades. For example, Fuzz testing has become an instrumental technique in the domainsof software verification and security analysis. Several works wherec(i) :FPo →Bistheithpathconstraintoftheprogram address the fuzz testing of CPS software for automatic test g andg(i) :FPo →FPm isthepathfunction,i.e.,thefunction |
generation [15] and for the discovery of security vulnera- that is executed at the ith path of g. bilities [16]–[18]. Unfortunately, the work in [16]–[18] (and Example: Consider the following control program g: others) have focused either on simple physical systems or 1 double control(double y1, double y2){ vulnerabilities with no kinetic consequences. 2 if (-1 <= y1 && y1 <= -0.5){ Another prominent technique for identifying vulnerabili- 3 if (y2 > 0){ ties in software is through symbolic execution techniques. 4 u = 4 * y1 - 6; 5 }else{ KLEE [19]andangr [20]aresomeofthemostpowerfulanal- 6 u = 0; ysis tools. By utilizing symbolic execution and SMT solvers, 7 } these tools can extract path information in a binary code and 8 }else{ 9 u = -4 * y1 - 6; findseveraltypesofvulnerabilities.Unfortunately,thesetech- 10 } niques focus mostly on traditional cyber vulnerabilities and 11 return u; do not generalize to reason about cyber-kinetic vulnerabilities 12 } due to their lack of analyzing models of physical systems. This control program consists of 3 paths with the following One of the widely applied techniques to discover bugs in path constraints: CPSisFalsification[12],[21],[22].Whiletraditionalfalsifica- tionenginesdonotguaranteetocoverallpossiblepathsinside c(1)(y)=(−1≤y1)∧(y1≤−0.5)∧(y2>0), the control software, new directions in this field have shown new techniques to provide such code coverage [23], [24]. c(2)(y)=(−1≤y1)∧(y1≤−0.5)∧(y2≤0), Unliketheworkin[23],[24],thispaperfocusesoncoverageof c(3)(y)=¬((−1≤y1)∧(y1≤−0.5)). codepathsalongtemporalevolutionofthecode.Asmotivated in Figure 2, the temporal evolution of the code leads to an The corresponding path functions are: exponential growth in the space of all possible paths that a code can take over time. This challenge of dealing with the g(1)(y)=4y1−6, g(2)(y)=0, g(3)(y)=−4y1−6. 
temporalevolutionofcode(oraswecallitcybertrajectories), It is important to note that we do not assume prior knowledge is one of the main motivations behind our CEGAR-based of the path constraints and the corresponding path functions, approach. as they will be extracted automatically from the binary code III. PROBLEMDEFINITION representing the control program. A. Notation Given a control program g that consists of k paths, we denote by PATH:FPo →{1,...,k} the function that returns WeusethesymbolsN,R,andBtodenotethesetofnatural, the path index of an input y. That is: real, and Boolean numbers, respectively. Additionally, we use the symbol FP to denote the set of Floating Point numbers. PATH(y)=p⇐⇒c(p)(y)=True. (2) For ease of notation, we do not distinguish between 32-bit (single) or 64-bit (double) floating point numbers. We use Note that the function PATH is well defined since path ∧, ∨, and ¬ to represent the logical AND, OR, and NOT constraints are mutually exclusive, and hence, an input y can operators, respectively. Given a sequence a = {a t}N t=0, we only be processed by only one path inside the code. denote by Element(a,i) the ith element of the sequence a. Using this notation, we can now define the cyber trajectory of a system as the index of the control program’s path taken B. Cyber-Kinetic Vulnerabilities at different time instances as follows. We consider nonlinear, discrete-time dynamical systems: Definition 3.1 (Cyber Trajectories): Given a control pro- x t+1 =f(x t,u t), y t =h(x t), u t =g(y t), (1) gram g with k paths, a sequence ψ p = {p t}H t=0, with p t ∈ {1,...,k}, is called a cyber trajectory of the physical system where x ∈ Rn is the state of the system at time t ∈ N, t f if there exists a corresponding trajectory of the physical u t ∈ FPm ⊂ Rm is the action at time t, and y t ∈ FPo ⊂ Ro system ψ = {x }H with x = f(x ,g(h(x ))) (t < H) x t t=0 t+1 t t is the sensor (output) measurements at time t. 
such that p_t = PATH(h(x_t)) for all t ∈ {0,...,H}.

k^H possible cyber trajectories. To harness the combinatorial explosion, we resort to an iterative Counter-Example Guided Abstraction Refinement (CEGAR) process. In this process, cyber trajectories go through several levels of abstraction

As a cyber-physical system, we assume that the system is controlled by a control program g: FP^o → FP^m. Without loss of generality, any program can be decomposed using the so-called path constraints [25] as:

    g(y) = [c^(1)(y) ⇒ u = g^(1)(y)]
         ∧ [c^(2)(y) ⇒ u = g^(2)(y)]
           ...
         ∧ [c^(k)(y) ⇒ u = g^(k)(y)],

We are interested in enumerating all cyber trajectories that can lead to physical system trajectories that violate a safety requirement φ. In this paper, we assume that the safety requirement φ is captured by a Signal Temporal Logic (STL) formula. For the formal definition of STL syntax and semantics, we refer the reader to [26]. Using this notation, the problem of interest can be defined as follows.

Problem 3.2 (Enumeration of Cyber-Kinetic Vulnerabilities): Given an STL formula φ, a controller program g with k paths, and a physical system model f, find the set Ψ^φ_p of cyber trajectories that may lead to cyber-kinetic vulnerabilities, i.e., Ψ^φ_p is defined as:

    Ψ^φ_p = { {p_t}_{t=0}^H ∈ {1,...,k}^{H+1} |
              ∃ψ_x = {x_t}_{t=0}^H . [ x_{t+1} = f(x_t, g(h(x_t))) (t < H),
              ψ_x ⊭ φ,  p_t = PATH(h(x_t)),  ∀t ∈ {0,...,H} ] }.   (3)

In other words, a cyber trajectory ψ_p is considered a cyber-kinetic vulnerability if there exists an associated trajectory of the physical system ψ_x that violates the safety requirement φ.

C. Cyber-Physical-Kinetic Vulnerabilities

While Problem 3.2 focuses on finding vulnerabilities in the controller program that lead to violation of the safety requirements, we are also interested in generalizing this problem to scenarios where an attacker is capable of manipulating the physical sensor measurements h(x_t) before they are processed by the control program g [27]–[32]. Typically, these sensor manipulations are of relatively small magnitude s̄ compared to the sensor noise levels, to avoid being detected. That is, we consider the system:

that will be iteratively refined later in the process to ensure coverage of all possible cyber trajectories.

The first level of abstraction is to consider the "range" of the control signal u that each program path can produce. Without loss of generality, and for the sake of notation, we will assume that the control signal u is scalar; nevertheless, our framework is designed to support multidimensional control signals. For such a case, the maximum and minimum of the control signal u within the pth path of g, denoted by ū_p and u̲_p, can be computed by solving the following optimization problems:

    ū_p = max_{y ∈ FP^o} g^(p)(y)  subject to  c^(p)(y) = True,   (6)
    u̲_p = min_{y ∈ FP^o} g^(p)(y)  subject to  c^(p)(y) = True.   (7)

Thanks to the current advances in SMT solvers (e.g., Z3 [33]), one can efficiently solve the optimization problems above to obtain the path ranges R = {(ū_1, u̲_1), ..., (ū_k, u̲_k)}.
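The per-path optimization in Eqs. (6)–(7) can be prototyped without an SMT solver by sampling the constraint region. The grid-based stand-in below is an illustration only (Rampo itself uses Z3 for the exact ranges), instantiated for the three-path example program:

```python
def approx_path_ranges(paths, samples):
    """Approximate (u_bar_p, u_under_p) per path by evaluating g^(p) on
    samples satisfying c^(p); a sampling stand-in for Eqs. (6)-(7)."""
    ranges = {}
    for p, (c, g) in paths.items():
        vals = [g(y1, y2) for (y1, y2) in samples if c(y1, y2)]
        ranges[p] = (max(vals), min(vals))
    return ranges

# Path constraints and path functions of the example control program.
paths = {
    1: (lambda y1, y2: -1 <= y1 <= -0.5 and y2 > 0, lambda y1, y2: 4 * y1 - 6),
    2: (lambda y1, y2: -1 <= y1 <= -0.5 and y2 <= 0, lambda y1, y2: 0.0),
    3: (lambda y1, y2: not (-1 <= y1 <= -0.5), lambda y1, y2: -4 * y1 - 6),
}
samples = [(y1, y2) for y1 in (-1.0, -0.75, -0.5, 0.0, 0.5)
                    for y2 in (-1.0, 1.0)]
```

On this grid the sketch recovers the exact ranges, e.g. (ū_1, u̲_1) = (−8, −10), because the extrema of the linear path functions fall on sampled endpoints; in general a solver-backed maximization over c^(p) is required.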
Given these path ranges, we can now abstract a cyber trajectory using the ranges of control signals at each time step. That is, for any cyber trajectory ψ_p = {p_t}_{t=0}^H, the corresponding "range" of control signals within this trajectory is defined as:

    Range̅(ψ_p) = { ū_{p_t} }_{t=0}^H,   Range̲(ψ_p) = { u̲_{p_t} }_{t=0}^H   (8)

    x_{t+1} = f(x_t, u_t),   y_t = h(x_t) + s_t,   u_t = g(y_t)   (4)

where s_t is the physical attack signal at time t with ∥s_t∥ ≤ s̄. In such a setup, given a trajectory of the physical system ψ_x and a trajectory of the physical attack signal ψ_s = {s_t}_{t=0}^H, the corresponding cyber trajectory ψ_p is defined as ψ_p = {PATH(h(x_t) + s_t)}_{t=0}^H.

Problem 3.3 (Enumeration of Cyber-Physical-Kinetic Vulnerabilities): Given an STL formula φ, a controller program g with k paths, a physical system model f, and a limit on the physical attack signal s̄, find the set Ψ^φ_{p,s} of cyber-physical-kinetic vulnerabilities defined as:

    Ψ^φ_{p,s} = { {(p_t, s_t)}_{t=0}^H ∈ {1,...,k}^{H+1} × FP^{H+1} |
                  ∃ψ_x = {x_t}_{t=0}^H . [ x_{t+1} = f(x_t, g(h(x_t) + s_t)) (t < H),
                  ψ_x ⊭ φ,  p_t = PATH(h(x_t) + s_t),
                  ∥s_t∥ ≤ s̄,  ∀t ∈ {0,...,H} ] }.   (5)

The second level of abstraction combines multiple cyber trajectories into an "abstract cyber trajectory". Such abstraction is done by combining the ranges of control signals across different cyber trajectories.

In particular, the first iteration of Rampo abstracts all cyber trajectories into one. The ranges of control signals along all cyber trajectories are used to augment the STL-based safety requirement φ as follows:

    ¬ϕ = ¬φ ∧ [ ⋀_{t=0}^{H} ( min_{p∈{1,...,k}} u̲_p ≤ u_t ≤ max_{p∈{1,...,k}} ū_p ) ].   (9)

The logical negation ¬ is motivated by the fact that falsification engines aim to find a violation of ϕ; hence, the encoding above forces the control signal to be within the ranges of interest.

IV. FRAMEWORK

A. Overview of the Rampo Framework
As pictorially shown in Figure 3, given a binary-level control program g, our first step is to extract the path constraints c^(1),...,c^(k) from g. To this end, we use off-the-shelf symbolic code execution engines that can extract SMT constraints representing the different paths in the control program (Line 5 in Algorithm 1).

The next step is to analyze the extracted paths to find cyber trajectories that can lead to cyber-kinetic vulnerabilities. Nevertheless, and as motivated in Figure 2, the number of cyber trajectories increases exponentially as a function of the number of paths k and the length of the trajectory H. In particular, for trajectories of length H and a control program

Next, Rampo uses these abstractions to guide a system falsification engine. System falsification refers to a set of tools that take as input a dynamical system f and an STL formula ϕ and find a physical-level trajectory ψ_x = {x_t}_{t=0}^H that violates ϕ, i.e.,

    ψ̄_x = Falsify(f, ϕ).   (10)

If ψ̄_x is not empty, then the computed trajectory ψ̄_x is guaranteed to violate the STL requirement ϕ, i.e., ψ̄_x ⊭ ϕ. Note that the bar in ψ̄_x is used to emphasize the fact that ψ̄_x is not a concrete vulnerability, since it was computed based on the abstraction of cyber trajectories using control signal ranges
(Lines 11-12, Algorithm 1).

with k paths, our tool is now challenged with analyzing all

Fig. 3. A pictorial representation of the proposed framework. Binary-level control programs are analyzed using symbolic execution tools to extract the path constraints, path functions, and the range of the control signals associated with each path in the program. Next, a Counter-Example Guided Abstraction Refinement (CEGAR) is used to abstract all cyber trajectories for a horizon H, and a falsification engine is used to search for trajectories of the physical system that violate the safety requirements. The cyber trajectories are then refined around the falsifying trajectories until all concrete cyber-kinetic vulnerabilities are found.

where φ is the original safety requirement and Ψ̲_p is the set of the abstract trajectories explored so far (Lines 25-30, Algorithm 1).

By refining the abstractions of the explored cyber vulnerabilities and searching over unexplored abstract trajectories, Rampo will iteratively identify new abstract cyber trajectories and search for concrete vulnerabilities within those abstract cyber trajectories until all cyber trajectories are covered. This process is summarized in Algorithm 1 and Figure 3.

Since the abstraction of cyber trajectories using ranges of control signals is a sound approximation, whenever ψ̄_x is empty one can conclude that the system is free from all cyber-kinetic vulnerabilities. On the other hand, if ψ̄_x is not empty, then our tool will extract the cyber trajectory corresponding to ψ̄_x as follows:

    ψ̄_p = Extract-Path(ψ̄_x) = { PATH(h(Element(ψ̄_x, t))) }_{t=0}^H.
(11)

We refer to ψ̄_p as an abstract vulnerability (Lines 13-16, Algorithm 1).

Since the abstract vulnerability ψ̄_p is extracted from a physical trajectory that violated the safety requirements, it is likely that a concrete vulnerability may exist in the system. To that end, Rampo will explore ψ̄_p by iteratively refining the control signal ranges around ψ̄_p. In particular, Rampo provides two different strategies for the iterative abstraction refinement of ψ̄_p that are explained in detail in Section V and Algorithm 1. These abstraction refinement approaches can either find concrete cyber vulnerabilities ψ_p, find other abstract cyber trajectories ψ*_p that need to be subsequently refined, or certify that a set of abstract trajectories ψ̲_p will not lead to a vulnerability (Lines 19-23, Algorithm 1).

B. Correctness of Algorithm 1

The soundness and completeness of Algorithm 1 rely on the soundness and completeness of the falsification engine (Falsify). This follows from the fact that Rampo uses a sound abstraction of the cyber trajectories and relies on the falsification engine to discard abstract trajectories when no abstract physical-level trajectory violates the safety requirements. Unfortunately, off-the-shelf falsification engines rely on stochastic optimization to reason about non-convex constraints, and hence these falsification engines are sound but not complete, rendering Rampo sound but not complete.

C. Extension to Cyber-Physical-Kinetic Vulnerabilities

Algorithm 1 can be directly extended to enumerate cyber-physical-kinetic vulnerabilities. As captured by Problem 3.3, the main difference between cyber-kinetic vulnerabilities and cyber-physical-kinetic vulnerabilities is the need to find

Finally, Rampo will search for new abstract vulnerabilities over the unexplored abstract trajectories.
This is achieved by forcing the falsification engine to search over the control signal ranges that are not considered yet, by encoding them in the STL formula as:

    ¬ϕ = ¬φ ∧ [ ⋀_{ψ̄_p ∈ Ψ̲_p} ¬( ⋀_{t=0}^{H} Element(Range̲(ψ̄_p), t) ≤ u_t ) ]
            ∧ [ ⋀_{ψ̄_p ∈ Ψ̲_p} ¬( ⋀_{t=0}^{H} u_t ≤ Element(Range̅(ψ̄_p), t) ) ],   (12)

additional sensor manipulation signals {s_t}_{t=0}^H that can lead to violation of the safety requirements. To that end, one can treat the signal s as an additional input signal to the falsification engine and rely on the falsification engine to simultaneously search for ψ_x and ψ_s. An example of this extension is provided in the numerical examples in Section VI-D.

V. ABSTRACTION REFINEMENT OF ABSTRACT VULNERABILITIES

As shown in Algorithm 1, whenever an abstract vulnerability ψ̄_p is found, Rampo needs to refine this abstraction to identify concrete vulnerabilities, find other potential abstract

Algorithm 1 Rampo
Input: Safety specification φ, System dynamics f, binary-level control program g
Output: Set of Cyber-Kinetic Vulnerabilities Ψ^φ_p
 1: Initialize Ψ^φ_p := Empty Set
 2: Initialize Ψ̄_p := Empty Set
 3: Initialize Ψ̲_p := Empty Set
 4: Step 1: Extract path constraints & ranges:
 5: (c^(1), g^(1)), ..., (c^(k), g^(k)) = Symbolic-Execution(g)
 6: for each path p in {1,...,k} do
 7:     ū_p = max_{y∈FP^o} g^(p)(y) subject to c^(p)(y) = True
 8:     u̲_p = min_{y∈FP^o} g^(p)(y) subject to c^(p)(y) = True
 9: end for
10: Step 2: Abstract all trajectories & falsify:
11: ¬ϕ ← Equation (9)
12: ψ̄_x = Falsify(f, ϕ)
13: if Not-EMPTY(ψ̄_x) then
14:     ψ̄_p = Extract-Path(ψ̄_x)
15:     Ψ̄_p = Ψ̄_p ∪ {ψ̄_p}
16: end if
17: while Not-EMPTY(Ψ̄_p) do
18:     Step 3: Refine abstract trajectories:
19:     Pop one ψ̄_p from Ψ̄_p
20:     (ψ_p, ψ*_p, ψ̲_p) = Abstraction-Refinement(f, φ, ψ̄_p)
21:     Ψ^φ_p = Ψ^φ_p ∪ ψ_p

Fig. 4. A pictorial representation of the Counter-Example Guided Abstraction Refinement process. (a) A tree of all possible cyber trajectories for a controller code with 3 paths and H = 3. (b) In the first iteration, Rampo will abstract all the trajectories in the search tree by considering the control signals that can be produced by all trajectories. (c) The falsification engine finds a physical-level trajectory that violates the safety requirements and Rampo extracts the corresponding trajectory. (d) The control signal ranges are refined/concretized around the trajectory found from the previous step except for the last time step. The falsification engine could not find a corresponding physical-level trajectory that violates the requirements, and hence this sub-tree is removed from the search space. (e) Rampo backtracks one level up and abstracts the cyber trajectories around the truncated trajectory found before. (f) The falsification engine reported a physical-level vulnerability in the search tree, and the vulnerability was checked to be a concrete one.

this does not account for a concrete vulnerability, since the behavior of the control program g is still abstracted as a range of control signals (recall the two levels of abstractions in Section 4.1).
This requires another call to the Falsification engine, this time using the physical model f augmented with the control program g constrained to the path constraints c(Element(ψ̄_p, 1)), ..., c(Element(ψ̄_p, l)). If a concrete vulnerability ψ_x was found, then it is added to the set of concrete cyber-kinetic vulnerabilities. Regardless of whether a concrete vulnerability is found, the abstract trajectory ψ̄_p is added to the set of explored trajectories ψ̲_p (Lines 2-11, Algorithm 2).

22:     Ψ̄_p = Ψ̄_p ∪ {ψ*_p}
23:     Ψ̲_p = Ψ̲_p ∪ {ψ̲_p}
24:     Step 4: Explore unexplored abstract trajectories:
25:     ¬ϕ ← Equation (12)
26:     ψ̄_x = Falsify(f, ϕ)
27:     if Not-EMPTY(ψ̄_x) then
28:         ψ̄_p = Extract-Path(ψ̄_x)
29:         Ψ̄_p = Ψ̄_p ∪ {ψ̄_p}
30:     end if
31: end while
32: return Ψ^φ_p

If a concrete vulnerability is not found by the process above, we iteratively reduce the abstraction refinement by removing the range constraints in ϕ (as pictorially illustrated in Figure 4). That is, starting from i = 1 until i = l, we iteratively call the falsification engine to find a trajectory of the physical system that violates the safety requirements by encoding these range constraints in ϕ as follows:

    ¬ϕ = ¬φ ∧ [ ⋀_{t=0}^{l−i} Element(Range̲(ψ̄_p), t) ≤ u_t ]
            ∧ [ ⋀_{t=0}^{l−i} u_t ≤ Element(Range̅(ψ̄_p), t) ].   (14)

Whenever the falsification engine fails to find a violation of ϕ (for i = 1,...,l), the truncated version of ψ̄_p (i.e., {p_t}_{t=0}^{l−i}) is added to the set of explored trajectories ψ̲_p.

vulnerabilities, or exclude some abstract trajectories from the search space. In this section, we provide two alternative approaches for the iterative abstraction refinement.

A. Linear-Search Based Refinement

In the first abstraction refinement approach, given an abstract vulnerability ψ̄_p of length l ≤ H, our tool concretizes/refines the ranges around this specific ψ̄_p. This can be encoded in the STL formula ϕ as follows:
    ¬ϕ = ¬φ ∧ [ ⋀_{t=0}^{l} Element(Range̲(ψ̄_p), t) ≤ u_t ]
            ∧ [ ⋀_{t=0}^{l} u_t ≤ Element(Range̅(ψ̄_p), t) ].   (13)

The falsification engine then uses ϕ to search for a trajectory of the physical system. Note that even if a trajectory is found,

Similarly, whenever the falsification engine finds an abstract vulnerability, the newly discovered abstract vulnerability is added to ψ*_p. This process is summarized in Figure 4 and Algorithm 2.

B. Binary-Search Based Refinement

While Algorithm 2 relaxes the range constraints along ψ̄_p in a linear fashion, our second approach for abstraction

A. Experiment 1: Comparison Against Black-Box Falsification

We start by comparing Rampo to S-TaLiRo, one of the state-of-the-art CPS falsification tools. While we use S-TaLiRo internally within Rampo to find falsifying trajectories of the open-loop system f (up to specified path constraints), S-

Algorithm 2 Abstraction-Refinement-Linear
Input: Safety requirements φ, Physical System Model f, Abstract cyber trajectory ψ̄_p
Output: Set of concrete cyber vulnerabilities ψ_p,
        Set of unexplored abstract vulnerabilities ψ*_p,
        Set of explored abstract trajectories ψ̲_p
 1: Initialize ψ_p, ψ*_p, ψ̲_p := Empty Set
 2: Step 1: Search for concrete vulnerability:
 3: ¬ϕ ← Equation (13)
 4: ψ̄_x = Falsify(f, ϕ)
 5: if Not-EMPTY(ψ̄_x) then
 6:     f_cl = Form-Closed-Loop-Model(f, g)
 7:     ψ_x = Falsify(f_cl, ϕ)
 8:     if Not-EMPTY(ψ_x) then
 9:         ψ_p = ψ̄_p
10:         ψ̲_p = ψ̲_p ∪ ψ̄_p
11:     end if
12: else
13:     for i = 1 to l do
14:         ¬ϕ ← Equation (14)
15:         ψ̄_x = Falsify(f, ϕ)
16:         if Not-EMPTY(ψ̄_x) then
17:             ψ*_p = ψ*_p ∪ { {Element(ψ̄_p, t)}_{t=0}^{l−i} }
18:             Exit the FOR loop
19:         else
20:             ψ̲_p = ψ̲_p ∪ { {Element(ψ̄_p, t)}_{t=0}^{l−i} }
21:         end if
22:     end for
23: end if
24: return ψ_p, ψ*_p, ψ̲_p

TaLiRo can also be used to falsify the entire closed-loop system (the dynamical system f along with the controller g). In this experiment, we show that analyzing the controller g away from the dynamical system f (using our approach) will lead to identifying a more significant number of vulnerabilities in the system compared to the as-is use of S-TaLiRo.

To that end, we consider a model of a drone moving vertically. We use a simple double-integrator dynamical system f to capture the vertical movement of the drone. The safety requirement is for the drone's position to always be below 0.98. We also consider the following control program, which implements a gain-scheduling controller:

    double control(double x1){
        if (x1 < -1) x1 = -1;
        if (x1 > 1) x1 = 1;

        if (-1 <= x1 && x1 <= -0.5){
            u = 4 * x1 - 6;
        }else if(-0.5 < x1 && x1 < 0.5){
            u = 2 * x1 + 9;
        }else if(0.5 <= x1 && x1 <= 1){
            u = -4 * x1 - 6;
        }
        return u;
    }
    Listing 1. Control program for Experiments 1-3

refinement is to relax the range constraints in a binary-search

Note that S-TaLiRo aims to find only one falsifying trajectory, while our aim in Problem 3.2 is to enumerate all possible cyber-kinetic vulnerabilities. To that end, we ran S-TaLiRo 50 times with random seeds to force it to explore different vulnerabilities.
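For reference, the behavior of Listing 1 can be reproduced with a direct Python port. The discrete double integrator below is a hypothetical stand-in for the drone model — the paper does not give the exact discretization or time step, so dt = 0.1 is an assumption:

```python
def gain_sched_control(x1: float) -> float:
    """Python port of the gain-scheduling controller in Listing 1."""
    x1 = max(-1.0, min(1.0, x1))  # saturate the measurement to [-1, 1]
    if -1 <= x1 <= -0.5:
        return 4 * x1 - 6
    if -0.5 < x1 < 0.5:
        return 2 * x1 + 9
    return -4 * x1 - 6

def simulate(x0, v0, steps, dt=0.1):
    """Assumed Euler discretization of the double integrator in closed loop;
    returns the position trace."""
    x, v, xs = x0, v0, [x0]
    for _ in range(steps):
        v += dt * gain_sched_control(x)
        x += dt * v
        xs.append(x)
    return xs
```

A falsifier then searches for initial states whose position trace ever reaches 0.98, i.e., violates the safety requirement.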
We ran Rampo while restricting the number of calls to the Falsify function to 50. Our experiments show that Rampo was able to identify 14 cyber-kinetic vulnerabilities, while S-TaLiRo alone was able to find only 9 of those vulnerabilities. The execution time of Rampo was 191 seconds, while for S-TaLiRo alone it was 147 seconds. This result shows one of the main benefits of combining falsification and code analysis tools. By splitting the analysis between the two domains (cyber and physical), one can achieve better coverage of the different cyber trajectories in the system.

fashion. That is, we replace the loop in Step 14 of Algorithm 2 with one that starts with i = l/2 and then selects the next value of i based on the success/failure of finding an abstract vulnerability. Compared to the linear-search-based approach, this binary search is more suitable for traversing trees with high values of l (hence the height of the tree being explored by the abstraction refinement).

VI. EXPERIMENTAL EVALUATION

We implemented Rampo in Python, where we used angr [20] for symbolic code execution, Z3 [33] to compute the ranges of the control signal in each path, and S-TaLiRo [12] as the falsification engine. In this section, we perform a series of numerical analyses to evaluate the effectiveness and the scalability of our tool. First, we conduct a series of studies to compare against state-of-the-art falsification of closed-loop systems and to study the effect of varying different parameters (e.g., the complexity of the physical system and the complexity of the cyber system).
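The binary-search schedule over the relaxation index i can be sketched as follows. The paper does not spell out the exact bisection policy, so this sketch assumes the falsification outcome is monotone in i, with falsifies(i) standing in for a call to the falsification engine under Eq. (14):

```python
def binary_refine(l, falsifies):
    """Find the smallest relaxation index i in [1, l] for which the
    falsifier still reports a violation, probing O(log l) indices
    instead of the linear scan of Algorithm 2 (illustrative sketch)."""
    lo, hi, best = 1, l, None
    while lo <= hi:
        i = (lo + hi) // 2
        if falsifies(i):
            best, hi = i, i - 1   # violation found: try a tighter relaxation
        else:
            lo = i + 1            # no violation: relax further
    return best
```

For l = 8 this probes at most 3-4 indices, which is consistent with the observation above that binary search pays off as the tree height l grows.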
We utilize two metrics for this study: (i) the execution time and (ii) the number of identified cyber-kinetic vulnerabilities. Next, we will study the ability to identify both cyber-kinetic and cyber-physical-kinetic attacks using a sophisticated model for engine control systems.

B. Experiment 2: Scalability with respect to the complexity of the physical system model

Our second experiment aims to evaluate the scalability of our tool with respect to the number of states in the dynamical model f. To that end, we use the same control program in Listing 1, and we vary the number of integrators in the system from 1 to 10.

To obtain a baseline for comparison, we perform an exhaustive search by partitioning the space of initial states (used to find falsifying trajectories by S-TaLiRo) into hypercubes. By constraining S-TaLiRo to find falsifying trajectories that start from different initial states, we can increase the number of identified vulnerabilities. Nevertheless, and as shown in

Fig. 5. The number of cyber-kinetic vulnerabilities found and the execution time. n_partition refers to the number of partitions along each dimension of the state space in the model. In the brute-force (baseline) approach, as the number of partitions increases, the execution time increases exponentially. For both the linear- and binary-search-based abstraction refinement approaches of Rampo, the execution time is almost constant as the number of integrators in
the model increases. This leads to a speedup that ranges from 3× to 98× while computing the same number of vulnerabilities.

Figure 5 (left), this brute-force search scales poorly as the number of states increases and as the granularity of the partitioning increases.

On the other hand, as shown in Figure 5 (middle and right), our tool with the linear- and binary-search-based abstraction refinement shows a similar trend to the baseline in terms of its ability to identify cyber-kinetic vulnerabilities while avoiding the exponential increase in the execution time. In general, the execution time of Rampo remained almost constant as the number of states increased. We repeated the same experiment multiple times while changing some of the internal parameters of the Falsify function in Rampo (which, as explained above, is implemented using S-TaLiRo but with open-loop dynamics and path constraints). In particular, the different plots in Figure 5 show different settings, where the number before the multiplication operator indicates the number of optimization steps taken in one falsification, and the latter number is the number of calls to S-TaLiRo per one call to the Falsify function. For the same number of identified vulnerabilities, Rampo achieved a speedup that ranges from 3× to 98×.

C. Experiment 3: Scalability with respect to the complexity of the control software

In this experiment, we study the scalability of Rampo as we increase the length of the cyber trajectories H. Recall that increasing H results in an exponential growth in the number of cyber trajectories that need to be explored.
Figure 6 (top) shows the number of identified vulnerabilities as H increases for both the linear- and binary-search-based abstraction refinement approaches. As expected, the number of vulnerabilities increases as H increases. Moreover, the numbers of vulnerabilities found by the linear- and binary-search-based abstraction refinement approaches are comparable among the different settings for the Falsify function.

On the other hand, Figure 6 (bottom) shows the execution time for both the linear- and binary-search-based abstraction refinement approaches. As expected, the benefits of the binary search start to grow as the depth H increases, resulting in an average of a 2× speedup for the binary-search-based abstraction refinement compared to the linear-search-based abstraction refinement approach. Moreover, and most importantly, the execution time of both the linear- and binary-search-based abstraction refinement approaches does not increase exponentially with H, albeit with the exponential growth in the number of cyber trajectories. We hypothesize that this pattern is due to the CEGAR approach used by Rampo, which can reason about several cyber trajectories simultaneously and thereby harnesses the exponential growth in the number of cyber trajectories.

Fig. 6. Number of cyber-kinetic vulnerabilities found and execution times for different lengths of time horizons

Fig. 7. Vehicle engine model

D.
Experiment 4: Enumerating Cyber-Kinetic and Cyber-Physical-Kinetic Vulnerabilities

In this experiment, we evaluate the ability of Rampo to identify both cyber-kinetic and cyber-physical-kinetic vulnerabilities in complex systems. To that end, we use a model for a vehicle control signal (shown in Figure 7). The model has two inputs, Speed and RPM, and one output, Throttle. The control program for this model is shown in Listing 2, which has a total of k = 4 paths. The safety requirement φ is "Either the Speed or the RPM is below a specified threshold."

Fig. 8. Cyber-kinetic vulnerability found in the vehicle engine model

For the first two settings (attacking either Speed alone or RPM alone), our tool was not able to find any new cyber trajectory that can lead to a vulnerability (other than the one shown in Figure 8). On the other hand, by allowing the attacker to corrupt both the Speed and RPM, our tool identified another cyber trajectory (p_0 = 1, p_1 = 4, p_2 = 2, p_3 = 4) that can lead to vulnerabilities. The cyber-physical-kinetic vulnerability is shown in Figure 9.

By analyzing the third time step in Figure 9, one can notice that the ψ_{x2} trajectory is close to violating the constraints of p = 2. That is, a small perturbation in the sensor signal (either intentional by an attacker or by noise) can force the code to execute a different path in the control program g. This shows

    double control(double RPM, double Speed){
        if (RPM > 3300){
            if (Speed > 80){
                Throttle = -RPM*0.002 - Speed*1.1 + 183.0;
            } else {
                Throttle = - RPM*0.001 + Speed*0.6 + 19.0;
            }
        } else {
            if (Speed > 80){
                Throttle = RPM*0.001 - Speed*1.7 + 216.0;
            } else {
                Throttle = - RPM*0.001 - Speed*0.6 + 139.0;
            }
        }
        return Throttle;
    }
    Listing 2. Control program for the vehicle engine model

the effectiveness of our tool in identifying these corner cases, which can be used to fix the control program to be more robust and secure.

We started by searching for cyber-kinetic vulnerabilities using Rampo. Figure 8 shows the only cyber-kinetic vulnerability found by the binary-search-based abstraction refinement approach. This vulnerability corresponds to the cyber trajectory (p_0 = 1, p_1 = 4, p_2 = 4, p_3 = 4). The corresponding trajectories of ψ_{x1} and ψ_{x2} are shown in Figure 8, where the cross bars indicate the corresponding paths.

VII. CONCLUSION

This paper proposed a novel tool, named Rampo, that can perform binary code analysis to identify cyber-kinetic vulnerabilities in CPS. Our tool analyzes the binary-level control program to extract the ranges of control signal inputs to the physical system. It uses this information to abstract the code and guide falsification tools to identify possible kinetic vulnerabilities. The tool iteratively refines the abstraction until concrete cyber-kinetic vulnerabilities are found or the entire space of cyber trajectories is covered. We provided different approaches for the abstraction refinement process and generalized the framework to find cyber-physical-kinetic vulnerabilities. Our numerical analysis shows that our tools scale more favorably compared to off-the-shelf tools and are

Next, we examine whether a small amount of physical attack signal can lead to other kinetic attacks, i.e., can a small amount of physical attack signal force the control program to go through a different cyber trajectory that can lead to a new kinetic attack? To that end, we restrict the physical attack signal ψ_s to be ≤ 2.0% of the original sensor signals (Speed and RPM). We considered three different configurations.
In the first one, the attacker can change only Speed; in the second one, the attacker can only change RPM; while in the last configuration, the attacker can affect both Speed and RPM.

able to harness the exponential growth in the search space when the temporal evolution of the code is considered, leading to a 3×–98× speedup in execution time while identifying the same number of vulnerabilities.

Fig. 9. Another cyber-kinetic vulnerability found only when attack signals are injected

REFERENCES

[1] S. D. Applegate, "The dawn of kinetic cyber," in 2013 5th International Conference on Cyber Conflict (CYCON 2013), 2013, pp. 1–15.
[2] S. R. Chhetri, J. Wan, and M. A. Al Faruque, "Cross-domain security of cyber-physical systems," in 2017 22nd Asia and South Pacific Design Automation Conference (ASP-DAC), 2017, pp. 200–205.
[3] S. R. Chhetri, A. Canedo, and M. A. Al Faruque, "KCAD: Kinetic cyber-attack detection method for cyber-physical additive manufacturing systems," in 2016 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2016, pp. 1–8.
[4] J. Burch, E. Clarke, K. McMillan, D. Dill, and L. Hwang, "Symbolic model checking: 10^20 states and beyond," in Proceedings, Fifth Annual IEEE Symposium on Logic in Computer Science, 1990, pp. 428–439.
[5] D. Kroening and M. Tautschnig, "CBMC – C bounded model checker (competition contribution)," in Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2014). Springer, 2014, pp. 389–391.
[6] L. Cordeiro, B. Fischer, and J. Marques-Silva, "SMT-based bounded model checking for embedded ANSI-C software," IEEE Transactions on Software Engineering, vol. 38, no. 4, pp. 957–974, 2011.
[7] G. J. Holzmann, "The model checker SPIN," IEEE Transactions on Software Engineering, vol. 23, no. 5, pp. 279–295, 1997.
[8] J.-L. Boulanger, Industrial Use of Formal Methods: Formal Verification. John Wiley & Sons, 2013.
[9] P. Tabuada, Verification and Control of Hybrid Systems: A Symbolic Approach. Springer Science & Business Media, 2009.
[10] X. Chen, E. Ábrahám, and S. Sankaranarayanan, "Flow*: An analyzer for non-linear hybrid systems," in Computer Aided Verification (CAV 2013). Springer, 2013, pp. 258–263.
[11] S. Bansal, M. Chen, S. Herbert, and C. J. Tomlin, "Hamilton-Jacobi reachability: A brief overview and recent advances," in 2017 IEEE 56th Annual Conference on Decision and Control (CDC). IEEE, 2017, pp. 2242–2253.
[12] Y. Annpureddy, C. Liu, G. Fainekos, and S. Sankaranarayanan, "S-TaLiRo: A tool for temporal logic falsification for hybrid systems," in Tools and Algorithms for the Construction and Analysis of Systems. Springer, 2011, pp. 254–257.
[13] A. Donzé, "Breach, a toolbox for verification and parameter synthesis of hybrid systems," in Computer Aided Verification (CAV 2010). Springer, 2010, pp. 167–170.
[14] E. Clarke, O. Grumberg, S. Jha, Y. Lu, and H. Veith, "Counterexample-guided abstraction refinement," in Computer Aided Verification (CAV 2000). Springer, 2000, pp. 154–169.
[15] S. Sheikhi, E. Kim, P. S. Duggirala, and S. Bak, "Coverage-guided fuzz testing for cyber-physical systems," in 2022 ACM/IEEE 13th International Conference on Cyber-Physical Systems (ICCPS). IEEE, 2022, pp. 24–33.
[16] D. Serpanos and K. Katsigiannis, "Fuzzing: Cyberphysical system testing for security and dependability," Computer, vol. 54, no. 9, pp. 86–89, 2021.
[17] L. J. Moukahal, M. Zulkernine, and M. Soukup, "Vulnerability-oriented fuzz testing for connected autonomous vehicle systems," IEEE Transactions on Reliability, vol. 70, no. 4, pp. 1422–1437, 2021.
[18] D. S. Fowler, J. Bryans, S. A. Shaikh, and P. Wooderson, "Fuzz testing for automotive cyber-security," in 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). IEEE, 2018, pp. 239–246.
[19] C. Cadar, D. Dunbar, and D. Engler, "KLEE: Unassisted and automatic generation of high-coverage tests for complex systems programs," in Proceedings of the 8th USENIX Conference on Operating Systems Design and Implementation (OSDI '08). USENIX Association, 2008, pp. 209–224.
[20] Y. Shoshitaishvili, R. Wang, C. Salls, N. Stephens, M. Polino, A. Dutcher, J. Grosen, S. Feng, C. Hauser, C. Kruegel, and G. Vigna, "SoK: (State of) The Art of War: Offensive techniques in binary analysis," in IEEE Symposium on Security and Privacy, 2016.
[21] T. Yamaguchi, B. Hoxha, D. Prokhorov, and J. V. Deshmukh, "Specification-guided software fault localization for autonomous mobile systems," in 2020 18th ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE), 2020, pp. 1–12.
[22] T. Yamaguchi, T. Kaga, A. Donzé, and S. A. Seshia, "Combining requirement mining, software model checking and simulation-based verification for industrial automotive systems," in 2016 Formal Methods in Computer-Aided Design (FMCAD), 2016, pp. 201–204.
[23] Q. Thibeault, T. Khandait, G. Pedrielli, and G. Fainekos, "Search based testing for code coverage and falsification in cyber-physical systems," in 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE), 2023, pp. 1–8.
[24] A. Dokhanchi, A. Zutshi, R. T. Sriniva, S. Sankaranarayanan, and G. Fainekos, "Requirements driven falsification with coverage metrics," in 2015 International Conference on Embedded Software (EMSOFT), 2015, pp. 31–40.
[25] F. S. de Boer and M. Bonsangue, "Symbolic execution formally explained," Formal Aspects of Computing, pp. 1–20, 2021.
[26] O. Maler and D. Nickovic, "Monitoring temporal properties of continuous signals," in International Symposium on Formal Techniques in Real-Time and Fault-Tolerant Systems. Springer, 2004, pp. 152–166.
[27] Y. Shoukry, P. Martin, P. Tabuada, and M. Srivastava, "Non-invasive spoofing attacks for anti-lock braking systems," in Cryptographic Hardware and Embedded Systems (CHES 2013). Springer, 2013, pp. 55–72.
[28] Y. Shoukry, P. Martin, Y. Yona, S. Diggavi, and M. Srivastava, "PyCRA: Physical challenge-response authentication for active sensors under spoofing attacks," in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 2015, pp. 1004–1015.
[29] Y. Shoukry, P. Nuzzo, A. Puggelli, A. L. Sangiovanni-Vincentelli, S. A. Seshia, and P. Tabuada, "Secure state estimation for cyber-physical systems under sensor attacks: A satisfiability modulo theory approach," IEEE Transactions on Automatic Control, vol. 62, no. 10, pp. 4917–4932, 2017.
[30] Y. Shoukry and P. Tabuada, "Event-triggered state observers for sparse sensor noise/attacks," IEEE Transactions on Automatic Control, vol. 61, no. 8, pp. 2079–2091, 2015.
[31] A. Barua and M. A. Al Faruque, "Hall spoofing: A non-invasive DoS attack on grid-tied solar inverter," in 29th USENIX Security Symposium (USENIX Security 20), 2020, pp. 1273–1290.
[32] A. Barua, Y. G. Achamyeleh, and M. A. A. Faruque, "A wolf in sheep's clothing: Spreading deadly pathogens under the disguise of popular music," in Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022, pp. 277–291.
[33] L. De Moura and N. Bjørner, "Z3: An efficient SMT solver," in Tools and Algorithms for the Construction and Analysis of Systems (TACAS'08/ETAPS'08). Springer-Verlag, 2008, pp. 337–340.
arXiv:2402.16043 — IEEE INTERNET OF THINGS JOURNAL, VOL. 14, NO. 8, AUGUST 2021

LuaTaint: A Static Taint Analysis System for Web Interface Framework Vulnerability of IoT Devices

Jiahui Xiang, Wenhai Wang, Tong Ye, Peiyu Liu

Abstract—IoT devices are currently facing continuous malicious attacks due to their widespread use. Among these IoT devices, web vulnerabilities are also widely exploited because of their inherent characteristics, such as improper permission controls and insecure interfaces. Recently, the embedded system web interface framework has become highly diverse, and specific vulnerabilities can arise if developers forget to detect user input parameters or if the detection process is not strict enough. Therefore, discovering vulnerabilities in the web interfaces of IoT devices accurately and comprehensively through an automated method is a major challenge. This paper aims to work out the challenge. We have developed an automated vulnerability detection system called LuaTaint for the typical web interface framework, LuCI. The system employs static taint analysis to address web security issues on mobile terminal platforms to ensure detection coverage. It integrates rules pertaining to page handler control logic within the taint detection process to improve its extensibility. We also implemented a post-processing step with the assistance of large language models to enhance accuracy and reduce the need for manual analysis. We have created a prototype of LuaTaint and tested it on 92 IoT firmwares from 8 well-known vendors. LuaTaint has discovered 68 unknown vulnerabilities.

Index Terms—Article submission, IEEE, IEEEtran, journal, LATEX, paper, template, typesetting.

I. INTRODUCTION

As the Internet of Things (IoT) industry experiences rapid expansion, IoT devices are being utilized in many areas closely tied to human life and productivity [3]. The inherent non-uniformity in the system architecture of IoT devices, coupled with the absence of conventional security mechanisms, results in the security of IoT devices inevitably becoming a key issue to be considered [1]. In recent years, firmware web vulnerabilities have been frequently disclosed, making IoT devices popular targets for malicious attacks [4]–[6]. Moreover, research shows that in over 81% of the cases, the web servers were configured to run as privileged users. Thus, exploiting only one vulnerability in the web component will lead to compromising the whole device, which can pose significant security risks to the devices [2].

We carry out statistics on the vulnerabilities in the firmware web in §II and find that the total number of vulnerabilities associated with web interface frameworks has increased in recent years. Among them, taint-based vulnerabilities account for a relatively large proportion, with code injection/remote code execution vulnerabilities and cross-site scripting vulnerabilities accounting for 32% and 12%, respectively. Therefore, it is of utmost significance to find a comprehensive and accurate automated detection method to address these vulnerabilities.

Many techniques exist for identifying firmware vulnerabilities at the level of web services. For example, SATC [8] uses shared front-end and back-end keywords to start static taint analysis and identify web service vulnerabilities. SRFuzzer [9] deploys an automated fuzzy testing framework to analyze the web server module of SOHO routers. However, they are not focused on the web interface of the firmware but more on services. Costin et al. [12] propose a scalable and fully automated dynamic analysis framework to discover web interface vulnerabilities in firmware, while there are potential biases due to the dataset's composition and the complexity of emulating diverse firmware architectures. WMIFuzzer [14] is designed for discovering vulnerabilities in commercial off-the-shelf (COTS) IoT devices by fuzzing their web interfaces. However, it faces challenges in efficiently identifying interactive elements on diverse and complex IoT device web interfaces. We find that the current work on security analysis for IoT device web interfaces is relatively limited; only 7% of vulnerability detection tools include web interface checks [11]. Most of the recent works rely on existing web analysis tools and use dynamic analysis to simulate the firmware [12]–[16], and, as is well known, dynamic analysis methods often make coverage impossible to guarantee.

Overall, current methods cannot detect those vulnerabilities in firmware web interface frameworks effectively. Thus, this paper focuses on taint-based web vulnerabilities in IoT devices triggered by web interface frameworks and develops an automated vulnerability detection system based on static taint analysis. We take LuCI, a typical web framework widely used in firmware, as an example to establish a dedicated system called LuaTaint to streamline our analysis. Our research commences with an examination of the syntactical aspects of the Lua programming language. We transform the source code into an abstract syntax tree to locate sensitive information and conduct control flow analysis. To delve deeper into the program's behavior, we apply flow-sensitive and context-sensitive data flow analysis to derive data flow constraints for the control flow graph nodes. Subsequently, we utilize a framework-adapted taint analysis to identify vulnerabilities in the program and report the relevant vulnerability information. We filter false alarms efficiently through an elaborate design of questions for Large Language Models (LLMs). In the final steps, we manually generate Proofs of Concept to analyze the detected suspicious alarms for vulnerability validation.
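The flaw class targeted throughout the paper — untrusted request data flowing unchecked into a shell command — can be illustrated with a minimal, self-contained Python sketch. This is illustrative only, not LuaTaint code: real LuCI handlers are written in Lua, and `echo` stands in here for the actual traffic-logging command.

```python
import subprocess

def log_bandwidth_unsafe(iface: str) -> str:
    # Vulnerable pattern: the interface name taken from the request URL is
    # pasted into a shell string, so a payload such as "eth0$(reboot)" would
    # execute the embedded command on the device.
    cmd = f"echo traffic for {iface}"
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def log_bandwidth_safe(iface: str) -> str:
    # Mitigation: no shell is involved; the parameter is passed as a single
    # argv element and can no longer be interpreted as shell syntax.
    return subprocess.run(["echo", "traffic for", iface],
                          capture_output=True, text=True).stdout

payload = "eth0$(echo INJECTED)"
print(log_bandwidth_unsafe(payload))  # the $() substitution actually ran
print(log_bandwidth_safe(payload))    # the payload stays a literal string
```

The unsafe variant lets the POSIX shell perform command substitution on attacker-controlled input, which is exactly the `eth0$(reboot)` pattern discussed for CVE-2019-12272 later in the paper; the safe variant shows why bypassing the shell removes that channel.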
Fig. 1. Example of Command Injection Vulnerability in LuCI. After the user performs an operation in the front-end interface of the left browser, the web server accepts the HTTP request from the front-end and decodes the message, then enters the LuCI framework for further processing, and finally encodes the result from LuCI into a response message and sends it back to the front-end interface. (Path: front-end browser → uhttpd web server → LuCI dispatcher/controller → CBI/Model/View on the IoT device; the attacker sends malicious command requests over this path.)

Our system consists of four components: a parsing and control flow analyzer, a reaching definitions analyzer, a framework-adapted taint analyzer, and a postprocessor using LLMs. The prototype is implemented in approximately 12,000 lines of Python code. It supports the security analysis in OpenWrt's and other systems' web configuration interface frameworks based on Lua. For different frameworks, taint analysis rules can be modified according to their dispatching rules, and then LuaTaint can be applied to more firmware web vulnerability analysis. To validate the effectiveness of LuaTaint in detecting vulnerabilities in embedded systems, we applied it to 92 firmware samples from 8 vendors. LuaTaint has found 68 unknown vulnerabilities in these firmwares, including command injection and code injection vulnerabilities. We also conducted experiments on the computational overhead of LuaTaint. The results show that LuaTaint can perform practical security analysis of the web interface of IoT firmware.

In summary, this research makes the following contributions:
• We highlight a vital research direction for the security analysis of IoT firmware web interface frameworks.
• We propose LuaTaint, which is the first automated vulnerability detection system for Lua based on a framework-adapted static taint analysis.
• We employ LLMs to effectively prune false alarms, demonstrating the excellent performance of using LLMs in evaluating program security.
• We evaluate LuaTaint on 92 real firmware scenarios and find 68 unknown vulnerabilities, including command injection and code injection vulnerabilities.

To foster future research, we have made our source code publicly available.¹

¹ https://anonymous.4open.science/r/LuaTaint-649F/README.md

II. BACKGROUND AND MOTIVATION

In this section, we first conduct an observation and statistics of the vulnerabilities in embedded systems to verify the necessity of the current work. Then, we provide an overview of the OpenWrt system and its web interface, LuCI. At last, we enumerate the challenges inherent in our ongoing work.

A. Observation and Motivation

Often, IoT devices provide a web interface for system configuration or external environment interaction. The web server accepts HTTP requests from the front-end and calls the back-end binary to process them, and an attacker may construct malicious inputs to the front-end to corrupt the corresponding back-end binary.

Vulnerability Example. Figure 1 shows an example of how the above process is executed in an OpenWrt system. When an end user wants to view the real-time bandwidth status or the wireless network status of the current system, the user can access them using the web interfaces "admin/status/realtime/bandwidth_status" and "admin/status/realtime/wireless_status" via a URL. Unfortunately, when the user is using the current interfaces, these two endpoints correspond to the functions action_bandwidth and action_wireless, which do not perform any cleanup checks when generating executable commands for logging traffic. An attacker can add additional commands to the URL to execute arbitrary commands. For example, if an attacker sends a malicious reboot command directly to the back-end via the URL "http://IP:Port/cgi-bin/luci/admin/status/realtime/bandwidth_status/eth0$(reboot)", the device would be remotely attacked. This command injection vulnerability (CVE-2019-12272) exists in OpenWrt LuCI 0.10 and earlier.

Some scholars found that many IoT devices are misconfigured in the web interface through a large-scale static analysis of firmware [2]. Among these, we find that taint-based vulnerabilities are the most common. Taint-based vulnerabilities in software occur when untrusted input, often from a user or external source, influences the behavior of a program in an undesirable way. Common examples of such vulnerabilities include command injection, remote code execution, cross-site scripting (XSS), SQL injection, and path traversal.

We made a statistical survey of the web vulnerabilities that exist in the firmware to get an intuitive sense of these vulnerabilities. By collecting and analyzing the firmware web vulnerabilities reported on the Common Vulnerabilities and Exposures (CVE) platform, we selected 189 firmware web-related vulnerabilities from six vendors, including D-Link, TP-Link, Tenda, NetGear, TOTOLink, and Linksys, for sampling statistics. Figure 2 shows that in the firmware web vulnerabilities, cross-site scripting vulnerabilities (XSS) and buffer overflow vulnerabilities (BoF) accounted for a relatively large proportion of the statistics, both accounting for 16.9%; command injection vulnerabilities and code execution vulnerabilities (CI/RCE) are also very typical, accounting for about 14.3%. In addition, there are also a large number of authentication bypassing vulnerabilities and sensitive information leakage vulnerabilities in the firmware web services.

Fig. 2. Number of Firmware Web Vulnerabilities by Type. (Categories: XSS, BoF, CI/RCE, Authentication Bypass, DoS, Information Disclosure, Path Traversal, Hard-coded Password, Access Control, CSRF, Administration Authority, Others.)

Furthermore, insights derived from the OWASP TOP 10 underscore the significant relevance and prevalent exploitation of taint-based vulnerabilities in real-world attacks [21]. The prevalence of taint-based vulnerabilities within firmware web interfaces and their significant potential for harm underscores the necessity of this research.

B. LuCI

Lua Configuration Interface (LuCI) is a unified configuration interface based on Lua which is used to manage OpenWrt's diverse range of web configuration interfaces [18]. Lua is a lightweight language that is highly extensible and easy to embed. The official version of Lua comprises a streamlined core and minimal libraries, which makes it portable, fast, and suitable for embedding in other software programs. LuCI utilizes the Model-View-Controller (MVC) architecture pattern [24], which allows for efficient management of the interface's components.

The LuCI framework is a highly extensible and reusable system that opens the possibility of introducing web vulnerabilities in the development process. Among these vulnerabilities, Remote Command Injection (CI) and Remote Code Execution (RCE) stand out as two of the most dangerous security threats. A remote injection vulnerability allows an attacker to remotely control the back-end system by writing system commands or code directly to the server. These vulnerabilities typically arise when the application system is designed to provide a specified remote command interface, such as the web interfaces of routers, firewalls, intrusion detection equipment, and other devices.

OpenWrt is essentially a highly modular and automated embedded Linux system often used in industrial control devices, telephones, miniature robots, smart homes, routers, and VoIP devices [17]. The system is equipped with the opkg package management tool, which allows for the installation of thousands of packages from the software repository. Additionally, OpenWrt uses the Unified Configuration Interface (UCI) to centralize the whole configuration of a device running on it. Whether it is Lua, PHP, Shell, or C programs, the execution of the command transmission parameters can achieve the purpose of modifying system parameters. OpenWrt-enabled hardware devices currently include various model versions from popular brands like 8devices, ASUS, D-Link, GL.iNet, Huawei, TP-Link, and more [19].

We also conducted relevant statistics on the CVE vulnerabilities related to the LuCI framework, and the results show about 75 related vulnerabilities. The volume of vulnerability reports has exhibited a noticeable upturn since approximately 2018. Upon categorizing these vulnerabilities, it becomes apparent that Code Injection/Remote Code Execution (CI/RCE) vulnerabilities, cross-site scripting vulnerabilities, and information leakage vulnerabilities collectively constitute approximately 36%, 12%, and 12% of the overall tally, respectively, as illustrated in Figure 3. These data demonstrate the breadth of threats facing the current firmware LuCI framework.

Fig. 3. The Statistics on the Types of Vulnerabilities Related to the LuCI Framework. (Categories: CI/RCE, XSS, Information Disclosure, Others.)

At present, there is a lack of detection techniques that can effectively identify the vulnerabilities of the web interface framework based on Lua. Therefore, this research aims to develop an automatic vulnerability detection system for web interface frameworks by taking LuCI as an example. It aims to provide a highly efficient and scalable method for detecting web interface vulnerabilities.

C. Challenges

The vulnerability detection tool designed for firmware web interfaces has the following three challenges when dealing with real embedded systems:

Program Analysis for Lua. Currently, there is no common tool specifically designed to analyze the information flow of the Lua language. Lua is a dynamically typed language, which means that the types and values of variables can change at runtime. It also supports complex data structures, such as tables, which can be nested and dynamically extended. Large applications often consist of multiple Lua files, and developing a static analysis tool requires the ability to analyze across files to find vulnerabilities distributed over different files. These are the points we need to consider when analyzing the program.

Over- and Under-taint Problems. Over-taint and under-taint problems are usually due to too much or insufficient labeling of data as sensitive or untrusted, respectively. Static taint analysis usually finds it difficult to solve these problems, and path-sensitive data flow analysis often generates path explosion problems, so efficiency and accuracy cannot be combined in static analysis. How to effectively balance the soundness and completeness of the designed vulnerability detection tool is one of the most critical challenges.

Web Framework Rules Collection. The LuCI framework, due to its high usage and extendibility, provides a complex set of dispatching rules. Moreover, different vendors may have varying dispatching rules, which makes it necessary to conduct a comprehensive analysis to identify the sources, sinks, sanitizer functions, and dispatching rules to facilitate the exploitation of vulnerability detection tools.

In this paper, we address the above challenges by designing LuaTaint for the LuCI framework to detect common vulnerabilities in firmware web interfaces.

Fig. 4. Architecture of LuaTaint. LuaTaint analyzes the LuCI framework and performs control flow and data flow analysis, formulates rules based on the characteristics of the web interface itself, and employs static taint analysis to automate the detection of web vulnerabilities. (Pipeline: unpack firmware → Lua parsing → AST nodes → CFG → data flows with sources/sinks and the fixed-point algorithm → taint analysis → alarm filtering and verification with LLM assistance → true bugs.)

III. LUATAINT

In this section, we present the fundamental principles of LuaTaint. Initially, we provide a detailed overview of our system. Subsequently, we explore in depth the key components of our automated vulnerability detection framework. These four components comprise the parsing and control flow analyzer, the reaching definitions analyzer, the framework-adapted taint analyzer, and the postprocessor using LLMs.

A. Overview of LuaTaint

Figure 4 provides an overview of LuaTaint, which takes the firmware web framework source code as input and ultimately outputs and reports information about errors.

First, we should unpack the firmware image using an existing firmware unpacker (e.g., binwalk [22]). Then, LuaTaint recognizes the web interface framework from the unpacked folder based on the file type. The LuCI framework generally exists in "/user/lib/lua/luci" under the root directory, and the core of the framework is implemented by Lua code, except for the front-end web pages, which are implemented by HTML templates.

LuaTaint processes Lua files within the LuCI framework by first converting the source code into an Abstract Syntax Tree (AST) and then creating a Control Flow Graph (CFG). It enhances the CFG using inter-procedural analysis, incorporating LuCI's dispatching rules and integrating key functions not directly invoked for comprehensive analysis. Subsequent data flow analysis on the CFG involves assessing data flow reachability and identifying constraints at CFG nodes using a fixed-point algorithm and reaching definitions analysis. LuaTaint identifies critical keywords for sources, sinks, and sanitizers within the web framework. The analysis begins by defining sinks from a reverse perspective, followed by examining data entering sensitive functions for potentially hazardous external inputs. This input-sensitive static taint analysis, based on the AST structure, pinpoints suspected vulnerability locations, allowing for precise vulnerability reporting. Furthermore, we also use LLMs for post-processing of false positive pruning to improve the accuracy of the detection.

The four components in our system correspond to four analysis processes: parsing and control flow analysis, reaching definitions analysis, framework-adapted taint analysis, and post-processing with LLM assistance, which will be explained below.

B. Parsing and Control Flow Analysis

To perform static analysis, the LuCI framework source code in the IoT firmware first needs to be transformed into an AST through syntax analysis. Each node within the AST corresponds to a distinct structure within the source code, facilitating the precise identification of relationships such as function calls, assignment statements, and operational statements. Compared with an intermediate representation, the AST is closer to the syntax structure and suitable for fast type checking.

In practice, we found that there are some syntax exceptions during the parsing of the source code, which are caused by abnormal syntax formats in the firmware. This hinders AST generation and subsequent analysis. To tackle this issue, we assembled a collection of non-parsable syntax formats. Prior to parsing the source code, we executed a comprehensive pre-scan of the entire codebase. In the course of this pre-scan, we substituted the problematic formats with regular expressions to facilitate effective parsing.

The AST typically varies based on the programming language and often lacks information on control flow. Therefore, it necessitates conducting a separate control flow analysis based on the AST. We build a CFG node by traversing the AST to analyze node relationships and adding precursor and successor properties to each AST node. Distinct traversal rules are defined for various node structures, establishing connection relationships between CFG nodes during the analysis of Lua statements and expressions. Each CFG comprises an entry node and an exit node, where the entry node exclusively possesses a successor and the exit node solely holds a precursor. The interconnection of nodes is realized through precursor and successor relations, resulting in a CFG represented as a list composed of CFG nodes.

Lua Syntax Structures. Lua provides a regular collection of control structures, such as if for conditional judgment and while, repeat, for for iteration [26]. All control structure statements have a terminator: end is used for if, while, and for, and until is used for repeat. There are many variants of Lua syntax structures in different forms, which should be noted, for example:
• There are two types of for statements: numeric for loops and generalized for loops;
• The body of a while, for, repeat, or if control structure is allowed to contain no statements at all;
• Lua functions can take a variable number of arguments, similar to C, using (...) in the function argument list.
All Lua syntax structures must be considered when constructing CFG node connections to accurately analyze control flow graphs.

Context-sensitive multivariate inter-procedural analysis. The analysis of control flow also encompasses an examination of the call relationships between functions. In this regard, we employ multivariate inter-procedural analysis [51] for the treatment of function calls. Each intra-procedural analysis is associated with an abstract domain. When a function is called, it is treated as a new intra-procedural analysis, and the CFG of the called function is assimilated into the higher-level CFG. To preserve the semantics of different function scopes, we utilize shadow variables, which is crucial, as variable parameters may assume identical names in distinct functions [46]. This approach enables the program to seamlessly integrate the function call within the current scope's context, effectively preserving and restoring variables as required. When undertaking context-sensitive analysis, it entails distinguishing calls to the same function at various locations. To achieve this, we perform unique computations at each context-sensitive call site. This is realized by creating a duplicate of the called function at every call location, thus averting interconnections between calls across diverse locations.

LuCI Dispatcher. In addition to the above-mentioned, we also need to consider the characteristics of the analysis object. In the controller of LuCI, there is an entry function dedicated to the display of web pages. Considering the definition in the dispatcher, the prototype of the controller's function for generating URLs is entry(path, target, title=nil, order=nil), where path is the path to access, given as an array of strings. For example, if the path is written as {"admin", "loogson", "control"}, you can access this script in your browser by going to "http://192.168.1.1/cgi-bin/luci/admin/loogson/control". The target is the call target, and there are three types of call targets, which are executing the specified function (Action), accessing the specified page (Views), and calling the CBI module, corresponding to the functions call, template, and cbi, respectively. The control flow analysis lacks the ability to discern this type of call relationship as an explicit call, so it is necessary to utilize the dispatching rules of the LuCI framework to set all called function nodes into the CFG list.

Efficiency Optimization. In generating the control flow graph, we use the LuCI framework as input and create a function definition dictionary through node traversal. This dictionary collects all function definitions in LuCI for taint analysis and addresses the challenge of detecting vulnerabilities across different files. However, this method rapidly increases the time and space complexity for large projects. To enhance efficiency, we approximate in the dictionary by using function node names as keys instead of the nodes themselves, with the nodes themselves remaining unchanged as values. For traversed function nodes, if a function with the same name is encountered at a different location, the new node will overwrite the existing node. Our experiments with real-world data show that this approximation introduces no under-tainting in the current task. This is because we do not lose any information at the functional level in the current file.

C. Reaching Definitions Analysis

In order to determine how dangerous data flows move along the program execution path and where sensitive data is ultimately transferred, we perform a data flow analysis based on the CFG described above. Here, we use reaching definitions analysis to solve the problem. Generally speaking, a definition is the assignment of a value, while reaching is whether a definition still exists from the beginning of the program to the current point. The principle of data flow computation using reaching definitions analysis is as follows:

A simple way to perform data flow analysis on a program is to set up data flow equations for each node of the CFG and iteratively compute the outputs from the local inputs of each node until the whole system state is stable. We use data flow constraint equations to describe the constraint relationships between the nodes of the CFG [51]. Data flow constraints are denoted by ⟦v_i⟧, which associate the values of a node with those of its neighbors. For a CFG containing nodes V = {v_1, v_2, ..., v_n}, we analyze it on the lattice L^n. The lattice used for this analysis is the power set lattice of all the assignment nodes in the program. Suppose that the relationship between a node v_i and its neighboring nodes is expressed as the data flow equation:

  ⟦v_i⟧ = F_i(⟦v_1⟧, ..., ⟦v_n⟧)    (1)

Then the equations for all nodes can be combined into a single function F : L^n → L^n:

  F(x_1, ..., x_n) = (F_1(x_1, ..., x_n), ..., F_n(x_1, ..., x_n))    (2)

The data flow constraint system performs the analysis in reverse by connecting the constraints of all predecessor nodes of each node. This can be expressed as the following function:

  JOIN(v) = ⋃_{w ∈ pred(v)} ⟦w⟧    (3)

For the function F : L^n → L^n, in each iteration the update rules for the data flow constraints are categorized into two types for two types of nodes: assignment nodes and non-assignment nodes. As for an assignment node, since the assignment node changes the value of the current variable, it is necessary to discard the previous assignments of the variable corresponding to the current node and replace them with the current assignment node:

  ⟦v⟧ = JOIN(v) ↓ id ∪ {v}    (4)

For all non-assignment nodes, we just join all constraints before them:

  ⟦v⟧ = JOIN(v)    (5)

The ↓ function here is meant to remove all assignments to the variable id from the result of the JOIN. According to the fixed-point theorem, in a lattice L with finite height, every monotone function f has a unique least fixed point defined as [51]

  fix(f) = ⊔_{i ≥ 0} f^i(⊥)    (6)

for which f(fix(f)) = fix(f), where f^i(⊥) means that, starting from the smallest element ⊥ of L, the function f is applied i times, and ⊔ represents the least upper bound. Since L^n is finite and the constraints that make up F(x) are monotone, the computation of the solution for F : L^n → L^n always terminates according to the fixed-point theorem.

Fixed-Point Algorithm: The above data flow constraint system is computed by iteratively solving the system until no further change occurs between two adjacent iterations of the computation, that is, until the fixed point is reached. This point is also the result of the propagation of all assignments through the program, and this information can be used to infer the presence of any possible dangerous data flows.

Listing 1 is a simple example of a Lua program that can help us better understand the process of reaching definitions analysis.

1  x = io.read()
2  while x > 2 do
3    y = x / 2
4    if y < 6 then
5      x = x + y
6    end
7    z = x + 2
8    if z > 3 then
9      x = x / 2
10   end
11   z = z + 10
12 end
13 prints(x)

Listing 1. A Sample Example of a Lua Program.

Applying the above data flow constraint calculation to the CFG corresponding to Listing 1, we can see that there are six assignment statements in this segment, on lines 1, 3, 5, 7, 9, and 11. Using line 3 as an example, we determine which definitions reach this position. In the first iteration of the calculation, ⟦y = x/2⟧ = {x = io.read(), y = x/2}, which can be obtained simply by using eq. (4). The final calculation result is ⟦y = x/2⟧ = {x = io.read(), x = x + y, x = x/2, y = x/2, z = z + 10}. The reason the other three assignments can reach the third line is that some condition holds in the while loop: assuming y < 6 and z > 3 are satisfied in some iteration of the while loop, the assignments on line 5 and line 9 can reach line 3.

Algorithm 1 details the process of reaching definitions analysis. We use the worklist algorithm to optimize the iterative algorithm, and for each CFG we use a worklist q to store all nodes of the CFG. As long as the worklist q is not empty, we do the following until we reach a stable state: the first node in q is updated according to the type of the node to generate a new constraint new. If the new constraint information is different from the old one, it means that the constraint information of the node has changed, and the change needs to be propagated to the successors of the node. Iterate over all successors of the node, adding them to the worklist q, and set their constraint information to the new constraint information. After completing the inner loop, remove the first node q[0] from the worklist.

Algorithm 1: Reaching Definitions Analysis Worklist
Input: CFG list
Output: IN[n] and OUT[n] for each CFG node
 1: constraint_table = dict()
 2: for each CFG in CFG_list do
 3:   q = Lattice(CFG.nodes)
 4:   while q != [] do
 5:     old = constraint_table[q[0]]
 6:     if isinstance(q[0], AssignmentNode) then
 7:       constraint_table[q[0]] = JOIN(q[0]) ↓ id ∪ {q[0]}
 8:     else
 9:       constraint_table[q[0]] = JOIN(q[0])
10:     end if
11:     new = constraint_table[q[0]]
12:     if new != old then
13:       for each node in q[0].outgoing do
14:         q.append(node)
15:         constraint_table[node] = new
16:       end for
17:     end if
18:     q = q[1:]
19:   end while
20: end for

D. Framework-Adapted Taint Analysis

To track the data flow of tainted information within the LuCI environment, we apply a framework-adapted taint analysis. Taint analysis can be abstracted into a triple ⟨sources, sinks, sanitizers⟩. Only when both sources and sinks exist and the data flow link from sources to sinks goes through does the current vulnerability exist. The processing of taint analysis can be divided into three stages: identification of sources and sinks, taint propagation analysis, and sanitization.

Our primary objective is to identify vulnerabilities within the LuCI framework of the firmware, necessitating a thorough
function and its context Subsequently, we conduct a global search to determine which interfacesinvoketheseidentifiedexecutionAPIs.Thismethod- ③ChatGPTanalyzes ology allows us to pinpoint the specific code segments for the alarm function taint analysis, aiming to ascertain the potential injection of need more info enough info external hazardous data. In contrast to a forward analysis of ④Provide the ⑤ChatGPTreturns allwebinterfaces,thisapproachisnotablyefficientasitentails requested information the analysis content traversing only the sensitive functions. Next, we elucidate the process of identifying sources and sinks, conducting taint ⑥ChatGPToutputs propagationanalysis,andimplementingsanitizationmeasures. the prediction result Sinks. To identify potential tainted sinks, we conduct a global search for sensitive function call nodes in the AST. Fig.5. WorkflowofPost-processingwithLLMAssistance. Given Lua’s nature as a dynamically typed language, it offers various functions that have the ability to carry out commands on the program under luci/controller to determine orexecutecode.Notably,Luaprovidessystemexecutioncom- whetherthecurrentdataflowwillleadtovulnerabilities,where mand functions such as os.execute, io.popen, and oth- the dangerous data is likely to be injected through the URL ers. Additionally, there exist higher-level execution functions and its parameters. For those alarms in other parts of the that call these system execution command functions, such as application, if the source is verified to not come from the luci.util.exec, luci.sys.exec, and fork_exec, HTTP request interface, then it will be excluded. among others. These functions are considered sensitive sinks, |
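To make the worklist procedure concrete, the following is a minimal Python sketch of reaching definitions analysis applied to a hand-encoded CFG of Listing 1. The node names (`L1`..`L13`), the edge list, and the `assigns` map are our own illustrative encoding, not the paper's implementation; the update rules follow Eqs. (3)-(5).

```python
# Reaching-definitions worklist sketch (illustrative, not the paper's code).
# Constraints are sets of defining nodes; assignment nodes kill earlier
# definitions of the same variable (Eq. 4), other nodes just join (Eq. 5).

def reaching_definitions(nodes, edges, assigns):
    preds = {n: set() for n in nodes}
    succs = {n: set() for n in nodes}
    for a, b in edges:
        preds[b].add(a)
        succs[a].add(b)

    constraint = {n: set() for n in nodes}   # start from bottom: all empty
    worklist = list(nodes)
    while worklist:
        v = worklist.pop(0)
        old = constraint[v]
        join = set().union(*(constraint[w] for w in preds[v]))  # Eq. (3)
        var = assigns.get(v)
        if var is not None:
            # assignment node, Eq. (4): kill other defs of `var`, add this one
            new = {d for d in join if assigns.get(d) != var} | {v}
        else:
            # non-assignment node, Eq. (5): just join the predecessors
            new = join
        if new != old:
            constraint[v] = new
            worklist.extend(succs[v])        # propagate change to successors
    return constraint

# CFG of Listing 1: line numbers as nodes; conditions assign nothing.
nodes = ["L1", "L2", "L3", "L4", "L5", "L7", "L8", "L9", "L11", "L13"]
edges = [("L1", "L2"), ("L2", "L3"), ("L2", "L13"), ("L3", "L4"),
         ("L4", "L5"), ("L4", "L7"), ("L5", "L7"), ("L7", "L8"),
         ("L8", "L9"), ("L8", "L11"), ("L9", "L11"), ("L11", "L2")]
assigns = {"L1": "x", "L3": "y", "L5": "x", "L7": "z", "L9": "x", "L11": "z"}

result = reaching_definitions(nodes, edges, assigns)
print(sorted(result["L3"]))   # → ['L1', 'L11', 'L3', 'L5', 'L9']
```

The fixed point at `L3` matches the paper's hand calculation: the definitions on lines 1, 3, 5, 9, and 11 reach line 3, while `z = x + 2` on line 7 is killed by the later `z = z + 10`.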
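The ⟨sources, sinks, sanitizers⟩ abstraction can be sketched as a reachability check on a flow graph: a flow is reported only when a source reaches a sink without passing through a sanitizer. The tiny graph and node names below are hypothetical illustrations, not LuCI code or the paper's implementation.

```python
# Sketch of the taint triple check: BFS from each source, stopping at
# sanitizer nodes, reporting any sink reached along an unsanitized path.
# (Simplified: each node is visited once per source.)
from collections import deque

def tainted_flows(flow_edges, sources, sinks, sanitizers):
    succs = {}
    for a, b in flow_edges:
        succs.setdefault(a, set()).add(b)
    vulnerable = set()
    for src in sources:
        seen, queue = {src}, deque([src])
        while queue:
            n = queue.popleft()
            if n in sanitizers:
                continue                 # taint is cleaned here; drop the path
            if n in sinks:
                vulnerable.add((src, n))
            for m in succs.get(n, ()):
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
    return vulnerable

# Hypothetical flows: one path is sanitized, the other is not.
edges = [("http_param", "cmd"), ("cmd", "os.execute"),
         ("http_param", "escaped"), ("escaped", "io.popen")]
flows = tainted_flows(edges,
                      sources={"http_param"},
                      sinks={"os.execute", "io.popen"},
                      sanitizers={"escaped"})
print(flows)    # → {('http_param', 'os.execute')}
```

Only the unsanitized path to `os.execute` is reported; the flow through the sanitizer node never reaches `io.popen`.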
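The global sink search can be sketched as a recursive walk over a parsed AST. The dict-based node shape and field names below are our own assumption for illustration (real Lua parsers produce different structures); only the list of sensitive function names comes from the text.

```python
# Global search for sensitive sink calls in a hypothetical dict-based AST.
SINKS = {"os.execute", "io.popen",
         "luci.util.exec", "luci.sys.exec", "fork_exec"}

def find_sink_calls(node, found=None):
    """Collect (name, line) pairs for every call node whose callee is a sink."""
    if found is None:
        found = []
    if isinstance(node, dict):
        if node.get("type") == "Call" and node.get("name") in SINKS:
            found.append((node["name"], node.get("line")))
        for child in node.get("children", []):
            find_sink_calls(child, found)
    return found

# Assumed AST fragment: one sensitive call, one benign call.
ast = {"type": "Chunk", "children": [
    {"type": "Call", "name": "luci.sys.exec", "line": 12,
     "children": [{"type": "Arg", "name": "cmd", "children": []}]},
    {"type": "Call", "name": "print", "line": 14, "children": []},
]}
print(find_sink_calls(ast))   # → [('luci.sys.exec', 12)]
```

Each hit gives a starting point for the backward traversal described above: from the sink call site, the analysis searches globally for the interfaces that can reach it.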