text
stringlengths 64
2.99M
|
---|
udp.dstport Nulls 200and400epochs.Attheendofthelastiterationoftraining, udp.length Nulls 200epochsresultedinanaccuracyof70%,precisionof77%, udp.checksum Nulls udp.time relative Nulls recallof70%,andf1scoreof67%,while400epochsresulted udp.time delta Nulls in an accuracy of 69%, precision of 74%, recall of 69%, and an f1 score of 68% (see Figure 3a). After running our test we modified was frame.protocols. These were long character data through the model, 200 epochs resulted in an accuracy strings, so we split up the string and kept the last element. of 78% and a loss of 54%, while 400 epochs resulted in an For example, if the data point was “eth:ethertype:ip:tcp” we accuracy of 69% and a loss of 56%. manipulated it such that only “tcp” was kept. After splitting Wethentriedcomparing200vs400epochsusingasmaller theprotocol,wethencreatedvariouslabels.Thefirstlabelwe learning rate of 0.001. At the end of the last iteration of created was used to display whether a packet was malicious training,200epochsresultedinanaccuracyof91%,precision or not. After creating is malware, we then created a label dif- of92%,recallof91%,andf1scoreof90%,while400epochs ferentiatingthebotnet’saction.Todothis,wefirstdetermined resulted in an accuracy of 69%, precision of 77%, recall of whether the file was malicious or legitimate. Then, based on 69%, and an f1 score of 67% (see Figure 3b). After running thefilename,welabeleditas“spread”forpropagationtraffic, our test data through the model, 200 epochs resulted in an and “cc” for C&C communication traffic. Note that all the accuracyof67%andalossof57%,while400epochsresulted Torii files only housed propagation data given the researchers in an accuracy of 69% and a loss of 62%. did not want to risk communication with Torii’s real C&C While 400 epochs have a slightly higher accuracy during server. Once we created a feature for the botnet phase, we testingforalearningrateof0.001,itunder-performsinalmost then created a feature for the device type used. The types of allotherregardscomparedto200epochs.Atalearningrateof devices included a fan, switch, lock, light, and two raspberry 0.001,200epochsalsoseemtofluctuatemorefor30iterations, pis. suggesting we may need to increase this parameter. Since After creating these additional labels, we transformed each there was no significant improvement in increasing the epoch fine-grained csv and added “mod” in the name. For example, size past 200 for either learning rate, all further testing was mirai mal CC lock.csv became performed using 200 epochs to improve time efficiency. mirai mal CC lock mod.csv. Once the modified datasets B. Model Performance After Ten Runs were generated with the additional labels, we then combined thesefilesbasedondevicetype,butstilldifferentiatedbetween 1) Training: Usingfourclients,abatchsizeof64,50itera- malicious and legitimate traffic. After combining the datasets tions,andalearningrateof0.001,weranourmodeltentimes bydeviceandtraffictype,wethenselected2,000randomrows toaccountforsomerandomnessandgainanunderstandingof from each dataset and generated a singular dataset containing themodel’soverallperformance.Thescoresproducedbyeach a total of 24,000 rows. By doing all this, we tried to keep experiment are shown in Table II. an even amount of data as well as ensure the dataset was an Based on the Table II, E3 and E10 produce accuracies over appropriate size to process efficiently. We then proceeded to 80%,whileE7andE9produceaccuraciesbelow60%.E1,E2, do some minimal data preprocessing. and E4 produce accuracies between 60%-70%, and E5, E6,(a) Learning Rate = 0.01 (b) Learning Rate = 0.001 Fig. 3: Scores Produced While Training 200 vs 400 Epochs TABLE II: Scores Across 10 Experiments Training Testing Experiment Accuracy Precision Recall F1-Score Accuracy Loss 1 67% 79% 67% 63% 67% 58% 2 66% 79% 66% 62% 66% 63% 3 82% 82% 82% 82% 70% 61% 4 65% 74% 65% 62% 64% 60% 5 78% 78% 78% 78% 78% 49% 6 70% 71% 70% 69% 70% 57% 7 54% 76% 54% 43% 64% 63% 8 74% 79% 74% 72% 73% 53% 9 54% 76% 54% 43% 54% 66% 10 81% 81% 81% 81% 81% 51% Average 69% 78% 69% 66% 69% 58% and E8 produce accuracies between 70%-80%. When looking at the graph, E4, E7, and E9 all exhibit a sharp decrease in accuracy prior to the 20th iteration and stay stagnant on out. E1, E6, E8, and E10 stay relatively consistent throughout the iterations, and E2, E3, and E5 show an increase by the end of the iterations. BasedontheTableII,E3andE10produceprecisionvalues over 80%, while all other experiments produce precision Fig.4:AccuracyProducedThrough10RepetitionsofTraining values between 70%-80%. When looking at the graph, E4, E7, and E9 all exhibit a sharp decrease in precision prior to the 20th iteration and remain stagnant on out. E6, E8, and E10 stay relatively consistent throughout the iterations, and E1, E2, and E5 display a moderate increase in precision. E3 fluctuates, then stops close to where it started during the first iteration. The graphs for the recall and f1-scores show that these values produce the same shape as the accuracy for each experiment in Figure 4, with recall producing the same scores as accuracy by the end of the 50th iteration, and the f1 score being slightly less than accuracy. Fig. 5: Accuracy and Loss Through 10 Repetitions of Testing Overall, throughout the ten iterations, our model produces |
anaverageaccuracyof69%,precisionof78%,recallof69%, 60%-70% accuracy (E1, E2, E3, E4, E6, and E7). 20% of and f1 score of 66%. We see less fluctuation in precision than theexperimentsfallbetween70%-80%accuracy(E5andE8), accuracy,andourprecisionvaluesarehigherthanouraccuracy andonly10%oftheexperimentsachievedover80%accuracy values.Ourrecallisthesameasouraccuracy,andourf1score (E10). The remaining 10% (E9) fell between 50%-60%. is less than accuracy, precision, and recall. This fluctuation is Figure 5 also shows that while we have relatively high loss likely due to randomness. values,90%oftheexperiments(withtheexceptionofE9)have 2) Testing: Afterrunningthetestingsetthroughourmodel, a loss value less than that of its associated accuracy. Out of Figure 5 shows that 60% of the experiments fall between the 60% of experiments that fall between 60%-70% accuracy,TABLEIII:ScoresAcross10Experiments(PoisoningAttack) Training Testing Experiment Accuracy Precision Recall F1-Score Accuracy Loss 1 73% 74% 73% 72% 89% 42% 2 69% 77% 69% 67% 69% 62% 3 53% 76% 53% 40% 53% 66% 4 73% 74% 73% 72% 73% 52% 5 69% 72% 69% 68% 69% 64% 6 87% 88% 87% 87% 86% 47% 7 73% 74% 73% 72% 73% 58% 8 92% 93% 92% 92% 65% 60% 9 69% 76% 69% 67% 72% 60% 10 56% 76% 56% 45% 54% 67% Average 71% 78% 71% 68% 70% 58% halfofthemhavealossvaluewithinthesamerange,andhalf have a loss value less than 60%. We see the greatest disparity between accuracy and loss E5 and E10 with a difference of 29 and 30 respectively. On average, we achieve an accuracy of 69% and a loss of 58%. Meaning on average, we achieve the same accuracy during testing as we did during training (see Table II). Fig.6:AccuracyProducedThrough10RepetitionsofTraining C. Model Performance After Ten Runs (Poisoning Attack) (Poisoning Attack) To label-flip, we took all of the rows in the first client with a source port equal to 23 and changed the label from 1 to 0. Using one corrupted client, three clean clients, a batch size of 64, 50 iterations, and a learning rate of 0.001, we ran our model ten times to account for some randomness and gainanunderstandingofthemodel’sperformance.Thescores produced by each experiment are shown in Table III. 1) Training: Figure 6 displays the progression of accuracy through 50 iterations for the 10 experiments. Based on the graph and Table III, E6 and E8 produce accuracies over 80% and 90% respectively, while E3 and E10 produce accuracies below60%.E2,E5,andE9produceaccuraciesbetween60%- 70%, and E1, E4, and E7 produce accuracies between 70%- 80%.Whenlookingatthegraph,E1,E2,E5,andE10produce Fig. 7: Accuracy and Loss Through 10 Repetitions of Testing a noticeable decrease in accuracy prior to the 20th iteration. (Poisoning Attack) E4, and E8 experience rampant fluctuation throughout the iterations, While E3 and E9 increase and then decrease again f1 score of 68%. Compared to our results in Section IV-B1, in a less chaotic manner. E6 and E7 stay relatively consis- label-flipping achieves higher accuracy, the same precision, tent throughout the iterations. Unlike the scores produced in higher recall, and a higher f1 score. We see less fluctuation in Section IV-B1, while slightly higher accuracies seemed to be precision than accuracy, and our precision values are higher achieved, there is noticeably more fluctuation throughout the than our accuracy values. Our recall is the same as our learning process for the model. accuracy, and our f1 score is mostly less than all accuracy, Based on Table III, E6 and E8 produce precision values precision, and recall. over 80% and 90% respectively, while all other experiments 2) Testing: Afterrunningthetestingsetthroughourmodel, produce precision values between 70%-80%. Looking at the Figure7shows30%oftheexperiments(E2,E5,andE8)pro- graph, E1, E2, E5, E9, and E10 all experience a decrease in duce an accuracy between 60%-70%. 30% of the experiments precision throughout the iterations. E3 and E8 experience a fall between 70%-80% accuracy (E4, E7, and E9), and 20% minor increase in precision while fluctuating more than the of the experiments achieved over 80% accuracy (E1 and E6). other experiments. E4, E6, and E7 experience little to no Theremaining20%(E3andE10)fellbetween50%and60%. fluctuation in precision values throughout the iterations. Compared to testing results in Section IV-B2, label-flipping The recall and f1 scores produce the same shape as the ac- seemstoproducemoresporadicresultsgiven20%asopposed curacy for each experiment in Figure 6, with recall producing to10%oftheexperimentsproduceaccuraciesover80%,while the same scores as accuracy by the end of the 50th iteration, 20%, as opposed to 10%, produce accuracies below 60%. and the f1 score being slightly less than accuracy (with the Figure 7 also shows that while we have relatively high loss exception of E6 and E8, which produce the same as accuracy values, 80% of the experiments (excluding E3 and E10) have and recall). a loss value less than that of its associated accuracy. Most of Overall,throughoutthe10iterations,ourmodelproducesan the loss values fall between 60%-70% (E2, E3, E5, E8, E9, |
averageaccuracyof71%,precisionof78%,recallof71%,and and E10). The other 40% of the experiments (E1, E4, E6, andfederated learning based intrusion detection system in the hardware systems for testing purpose. REFERENCES [1] R. Sobers, “134 cybersecurity statistics and trends for 2021: Varonis,” Mar2021. [2] Sectigo, “Evolution of iot attacks: An interactive infographicd.” https://sectigo.com/uploads/resources/Evolution-of-IoT-Attacks- Interactive-IG May2020.pdf. [3] T.S.HyslipandJ.M.Pittman,“Asurveyofbotnetdetectiontechniques by command and control infrastructure,” Journal of Digital Forensics, SecurityandLaw,vol.10,2015. [4] radware, “IRC (Internet Relay Chat).” https://www.radware.com/ security/ddos-knowledge-center/ddospedia/irc-internet-relay-chat. [5] J.OikarinenandD.Reed,“Rfc1459:Internetrelaychatprotocol,”1993. [6] W. Chang, A. Mohaisen, A. Wang, and S. Chen, “Measuring Botnets in the Wild: Some New Trends,” in Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, (a) Model Performance ASIA CCS ’15, (New York, NY, USA), p. 645–650, Association for ComputingMachinery,2015. [7] S. S. Silva, R. M. Silva, R. C. Pinto, and R. M. Salles, “Botnets: A survey,”ComputerNetworks,vol.57,no.2,pp.378–403,2013. Botnet Activity:Analysis,DetectionandShutdown. [8] S.Gaonkar,N.F.Dessai,J.Costa,A.Borkar,S.Aswale,andP.Shet- gaonkar, “A survey on botnet detection techniques,” in 2020 Interna- tional Conference on Emerging Trends in Information Technology and Engineering(ic-ETITE),pp.1–6,2020. [9] L. T. Rajesh, T. Das, R. M. Shukla, and S. Sengupta, “Give and take: Federated transfer learning for industrial iot network intrusion detection,”2023. [10] H. B. McMahan, E. Moore, D. Ramage, and B. A. y Arcas, “Fed- erated learning of deep networks using model averaging,” CoRR, vol.abs/1602.05629,2016. [11] L. Li, Y. Fan, M. Tse, and K.-Y. Lin, “A review of applications in federated learning,” Computers & Industrial Engineering, vol. 149, p.106854,2020. [12] B. Ghimire and D. B. Rawat, “Recent advances on federated learning forcybersecurityandcybersecurityforfederatedlearningforinternetof things,”IEEEInternetofThingsJournal,vol.9,no.11,pp.8229–8249, (b) Model Performance During Poisoning Attack 2022. [13] C. Zhang, Y. Xie, H. Bai, B. Yu, W. Li, and Y. Gao, “A survey on Fig. 8: Difference in Model Performances federated learning,” Knowledge-Based Systems, vol. 216, p. 106775, 2021. E7) produce loss values below 60%. 20% of the experiments [14] R. Galvez, V. Moonsamy, and C. D´ıaz, “Less is more: A privacy- respecting android malware classifier using federated learning,” CoRR, (E3 and E10) produce loss values higher than their associated vol.abs/2007.08319,2020. accuracy, which is twice as much as the experiments from [15] R.-H.Hsu,Y.-C.Wang,C.-I.Fan,B.Sun,T.Ban,T.Takahashi,T.-W. Section IV-B2. Last, we see the greatest disparity between Wu, and S.-W. Kao, “A privacy-preserving federated learning system forandroidmalwaredetectionbasedonedgecomputing,”in202015th accuracy and loss for E1 and E6 with a difference of 47 and AsiaJointConferenceonInformationSecurity(AsiaJCIS),pp.128–136, 39 respectively. On average, we achieve an accuracy of 70% 2020. and a loss of 58%. Meaning on average, we achieve a slightly [16] Y. Zhao, J. Chen, D. Wu, J. Teng, and S. Yu, “Multi-task network anomalydetectionusingfederatedlearning,”inProceedingsoftheTenth loweraccuracyduringtestingthanwedidduringtraining(see International Symposium on Information and Communication Technol- TableIII).ComparedtoSectionIV-B2,wehaveslightlyhigher ogy,SoICT2019,(NewYork,NY,USA),p.273–279,Associationfor accuracy and the same loss. ComputingMachinery,2019. [17] V. Rey, P. M. Sa´nchez Sa´nchez, A. Huertas Celdra´n, and G. Bovet, V. CONCLUSION “Federated learning for malware detection in iot devices,” Computer Networks,vol.204,p.108693,2022. We proposed an intrusion detection system that utilizes [18] Wahaya IT, “Protect your organization against cyber poisoning federated learning to preserve data privacy amongst devices. attacks.” https://www.wahaya.com/cyber-poisoning-attack/#:∼:text= Types%20of%20Poison%20Attacks&text=Data%20manipulation%20% We analyze raw packet data from legitimate botnet traffic. E2%80%93%20The%20attacker%20manipulates,and%20ultimately% Sampling packets from the propagation and C&C commu- 20weaken%20the%20outcome. nication phase, we propose an online model to differentiate [19] A. Guerra-Manzanares, J. Medina-Galindo, H. Bahsi, and S. No˜mm, “Medbiot: Generation of an iot botnet dataset in a medium-sized iot between malicious and benign traffic on a per-packet basis network,”022020. without allowing the clients to share raw network data. We also examined whether poisoning attacks have an impact on model performance. The proposed method has resulted in an overall accuracy of 71%, precision 78%, recall 71%, and f1- score of 68%. In future, we plan to implement the proposed |
2311.12420 Preprint HOW FAR HAVE WE GONE IN VULNERABILITY DE- TECTION USING LARGE LANGUAGE MODELS ZeyuGao1 HaoWang1 YuchenZhou2 WenyuZhu1 ChaoZhang1 {gaozy22,hao-wang20}@mails.tsinghua.edu.cn zhouyuchen@emails.bjut.edu.cn zhuwy19@mails.tsinghua.edu.cn chaoz@tsinghua.edu.cn 1TsinghuaUniversity 2BeijingUniversityofTechnology ABSTRACT As software becomes increasingly complex and prone to vulnerabilities, auto- mated vulnerability detection is critically important, yet challenging. Given the significant successes of Large Language Models (LLMs) in various tasks, there is growing anticipation of their efficacy in vulnerability detection. However, a quantitativeunderstandingoftheirpotentialinvulnerabilitydetectionisstillmiss- ing. To bridge this gap, we introduce a comprehensive vulnerability benchmark VulBench. This benchmark aggregates high-quality data from a wide range of CTF (Capture-the-Flag) 1 challenges and real-world applications, with anno- tations for each vulnerable function detailing the vulnerability type and its root cause. Through our experiments encompassing 16 LLMs and 6 state-of-the-art (SOTA) deep learning-based models and static analyzers, we find that several LLMsoutperformtraditionaldeeplearningapproachesinvulnerabilitydetection, revealing an untapped potential in LLMs. This work contributes to the under- standingandutilizationofLLMsforenhancedsoftwaresecurity. 1 INTRODUCTION The rapid advancement of software systems has led to an increase in their complexity and sus- ceptibilitytovulnerabilities,therebyheighteningsecurityrisks. Effectivevulnerabilitydetectionis imperativeinthisscenario,necessitatingrobustandautomatedmethods. Traditionaltechniqueslike fuzzing, symbolic execution, and static analysis, while valuable, often fall short in addressing the complexitiesofmodernsoftwaresystems(AFL;Cadaretal.,2008). Inthiscontext,largelanguagemodels(LLMs)suchasGPT-3.5andGPT-4haveemergedaspromis- ingtools. Notedfortheirbroadgeneralizationandreasoningcapabilities,thesemodelshaveshown notable success in diverse applications, including the domain of vulnerability detection (OpenAI; Yangetal.,2023b). Yet,thequantitativeassessmentoftheireffectivenessinvulnerabilitydetection isstillunder-explored. AsignificantobstacleinapplyingLLMsforvulnerabilitydetectionisthelackofhigh-quality, ac- curate datasets. Prior research shows that existing vulnerability datasets often lack in quality and accuracy,achievingdetectioncorrectnessratesaslowas30%to50%(Croftetal.,2023;Chenetal., 2023). This not only hampers the ability to effectively evaluate LLMs but also fails to represent the complexities of real-world software vulnerabilities, which often arise from interactions across multiplefunctionsandrequireacomprehensiveunderstandingofthecodebase. Toaddressthesechallenges,weintroduceVulBench,acomprehensivedatasetamalgamatingdata from various sources, including CTF challenges, MAGMA (Hazimeh et al., 2020), Devign (Zhou etal.,2019),D2A(Zhengetal.,2021),andBig-Vul(Fanetal.,2020). Thisdatasetoffersablendof straightforwardCTFchallengesandmorecomplexreal-worldCVEvulnerabilities,cateringtoboth open-sourceandclosed-sourcesoftwarevulnerabilitydetectionscenarios. Alldatasetsareexpertly 1https://ctf-wiki.org/en/ 1 3202 ceD 22 ]IA.sc[ 3v02421.1132:viXraPreprint human-labeled, with the CTF and MAGMA datasets additionally providing the necessary context forvulnerabilityassessment. We have designed and conducted a comprehensive evaluation process to assess the vulnerability detectioncapabilitiesofLLMs. Ourassessmentsaregroundedonthemulti-levelvulnerabilitydata providedwithinthesedatasets. Employingtwoapproaches—binaryclassificationtodeterminethe presenceofvulnerabilitieswithinfunctions,andmulti-classclassificationtoidentifyspecifictypes offunctionvulnerabilities—wefacilitatenuancedandin-depthjudgmentsbythemodels. Thiseval- uative methodology is consistently applied to both deep learning models and static analysis tools, ensuringauniformstandardofevaluationacrossdifferentsystemsandmodels. Ourmaincontributionsareasfollows: • We conduct the first large-scale study to quantitatively measure the performance of 16 LLMs in the field of vulnerability detection, setting a benchmark against state-of-the-art deeplearningmodelsandstaticanalyzers. • The introduction of VulBench, a benchmark that addresses the quality issues prevalent inexistingdatasets,offeringacomprehensivedatasetformoreaccurateevaluations,along withthenewnaturallanguagedescriptionforthevulnerabilities. • UnveilingtheuntappedpotentialofLLMsinvulnerabilitydetection,ourfindingsprovide newinsightsandfutureresearchdirectionsinthisdomain. • We provide the dataset at https://github.com/Hustcw/VulBench to facilitate futurework. This research not only enhances our understanding of LLMs’ application in software security but alsoopensupnewavenuesforadvancementsinautomatedvulnerabilitydetection. 2 RELATED WORK 2.1 BACKGROUNDFORVULNERABILITYDETECTION Vulnerabilitydetectionisacrucialtaskinthefieldof computersecurity. Itsprimaryobjectiveistoidentify potential software security threats, thus reducing the 1 int main() { 2 char buf[10]; risk of cyber-attacks. A key resource in this effort is 3 char str[10]; theCVEdatabase(MITRE),whichactsasaplatform 4 scanf("%5s", str); |
formonitoringthesevulnerabilities. 5 gets(buf) 6 if(str[9] == ’a’) 7 system("/bin/sh"); Threeprincipaltechniquesareemployedinvulnerabil- 8 else itydiscovery: fuzzing, symbolicexecution, andstatic 9 puts("Finished!"); analysis. Fuzzing (AFL) seeks to uncover software 10 return 0; 11 } crashes and anomalies by inundating the system with diverse random inputs. Conversely, symbolic execu- Listing 1: Vulnerability example with a tion (Cadar et al., 2008) aims to detect irregularities ‘StackOverflow’Vulnerability. bysimulatingmultipleapplicationpathways. Finally, staticanalysis(Luetal.,2019;Wuetal.,2021)exam- inesthecodewithoutexecutingittoidentifypotential vulnerabilities. Integratingdeeplearningmodelsintovulnerabilityexplorationofteninvolvesfeedingsourcecode into the model for classification (Lu et al., 2021; Hanif & Maffeis, 2022). This approach, which analyzes the code without execution, is generally classified under static analysis. Nevertheless, significantadvancementshavealsobeenmadeinutilizingdeeplearningmodelstoaugmentfuzzing techniques(Shietal.,2023;Godefroidetal.,2017). Listing1isanexampleofavulnerablefunction. Thedatainchar buf[10]willoverflowinto str[10]duetotheunsafeusageofgets(buf);.Thedutyofvulnerabilitydetectionistodetect thatpotentialstackoverflow. 2Preprint 2.2 DEEPLEARNING-BASEDMODELSFORVULNERABILITYDETECTION This research builds upon recent strides in NLP-driven code analysis for vulnerability detection. CodeXGLUE(Luetal.,2021)workswithCodeBERTontheDevign(Zhouetal.,2019)datasetrep- resentsakeydevelopmentinassessingsourcecodevulnerabilityrisk. LineVul(Fu&Tantithamtha- vorn,2022)extendstheseinsightsbyapplyingthesamemodeltotheBig-Vuldatasetfornuanced detectionatbothfunctionandlinelevels. Alternatively,VulBERTa(Hanif&Maffeis,2022)inno- vateswithaRoBERTa(Liuetal.,2019)modeltailoredforcodethroughahybridBPE-tokenization scheme.TheVulDeePecker(Lietal.,2018;Zouetal.,2019;Lietal.,2021)seriesintroducesandit- erativelyrefinestheideaofutilizingsemanticallycorrelated“codegadgets”andBLSTMsforinitial vulnerability identification, then multi-class categorization, and finally precise location via LLVM IRanalysis. ReVeal(Chakrabortyetal.,2022)roundsoutthelandscapebyprovingthattheefficacy of vulnerability prediction can be enhanced through the integration of semantic information using gatedgraphneuralnetworks(GGNN),combinedwithrefineddatahandlingpractices. 2.3 LARGELANGUAGEMODELSFORVULNERABILITYDETECTION Previous research (OpenAI) has demonstrated the potential of LLMs for detecting software vul- nerabilities, although there is a lack of comprehensive vulnerability-focused data. While GPT-3.5 andGPT-4havebeentheprimarysubjectsofstudy, displayingnotablecapabilities, evaluationsof open-accessLLMsarelesscommon. Studiespresentmixedresults(Cheshkovetal.,2023); while someLLMsperformedonparwithnaiveclassifiers,others,specificallyGPT-4,haveshownsignifi- cantadvantagesoverconventionalstaticanalysistools(Noever,2023). Enhancementstoexploitthe multi-rounddialogueproficiencyofChatGPThavepromptedimproveddetectionmethods(Zhang et al., 2023), and recent advancements (Chan et al., 2023) have fine-tuned LLMs for identifying diverse vulnerability patterns, indicating a potential for reaching expert-level performance in vul- nerabilitymanagementtasks. 2.4 BENCHMARKSFORLLMS Open LLM Leaderboard (Beeching et al., 2023) and MMLU (Hendrycks et al., 2021) evaluates modelsonsciencequestions,commonsenseinference,etc. MT-bench(Zhengetal.,2023)evaluates chat assistants in aspects of writing, reasoning, code, and so on. Cheshkov et al. (2023) conducts the evaluation of vulnerability detection capability on the OpenAI GPT series. Besides, there are severalindividualdatasets(Zhengetal.,2021;Zhouetal.,2019;Chenetal.,2023)forvulnerability detection. 3 DATASET Contrarytopastworksthatassemblevastdatasetswithmultiplevulnerabilitytypesautomatically, ourfocushingesontheenhancementoftheaccuracyandvalidityofdatasets. We’vegatheredfrom multiplesourcesrelativelycondensedyetcomprehensiveCTFdatasetsdetailingallfunctionsinan executablebinary,andreal-worlddatasetsprovidingonlypartialfunctionsfromhugereal-worldpro- grams. AlthoughCTFproblemsdon’toriginatefromtherealworld,theirassociatedvulnerabilities typicallymirrorreal-worldscenarios,presentingaminiaturedepictionoftherealworld. 3.1 DATASETOVERVIEW The dataset amalgamates multiple sources of vulnerabilities, comprising CTF challenges, MAGMA (Hazimeh et al., 2020), and three previous vulnerability datasets (Zheng et al., 2021; Zhouetal.,2019;Fanetal.,2020)withextensivelycleaning. Theycanbemainlydividedintothree types of data sets as follows. The overview of the dataset is shown in Table 1 and the containing vulnerabilitytypesareshowninTable6. CTF Dataset: In CTF challenges, particularly the PWN category, participants must identify and exploitprogramvulnerabilitiestogettheFlag,whichservesasevidenceoftaskcompletion.Despite beingshorterthanvulnerabilitiesfoundinreal-worldCVEs,there’sasignificantnumberofflawed 3Preprint Type ObtainingMethod Source Features Count #Label |
Rawdecompiledcode, CTF PWNProblem BUUOJ 108 5 Reverseddecompiledcode Sourcecode, MAGMA 100 8 Rawdecompiledcode CVECommitDiff RealWorld Devign 70 7 Sourcecode Big-Vul 108 9 StaticAnalyzer D2A Sourcecode 69 4 Table1: Compositionandsourceofthedataset,alongwiththefeaturesavailableforthemodeland thecountofbinariesorfunctionsinthedatasets. IntheCTFdataset,thecountrepresentsthecount ofindividualbinary,andthereexistmultiplefunctionsinabinary. Inreal-worlddatasets,thecount representsthecountofindividualbugs,andthereexistmultiplefunctionsinMAGMA.#Labelrefers tothenumberofdifferenttypesofvulnerabilitiesusedformulti-labelclassificationineachdataset. functionswithinCTFPWNtasks,encompassingabroadrangeofpotentialmemory-relatedissues flaggedwithinCVEs. ThismakesitasuitableandbasicmethodforevaluatingtheLLM. CVE Dataset: The CVE dataset typically identifies vulnerable functions by comparing changes made to the code during CVE remediation. This information, derived from real-world software, includesnotjustmemoryleakvulnerabilitiestypicaltoCTFscenariosbutabroaderrangeofreal- worldvulnerabilities,providingamorestrenuoustestofLLM’svulnerabilitydetectioncapabilities. Static Analyser Generated Dataset: The dataset of potential vulnerabilities, garnered via static analysis, is typically produced using specific tools for this analysis. It involves examining the en- tiretyofaproject,wherepossibleweakpointsaredetectedbythepatternsandconstraints. Despite itsusefulness,thismethodusuallygeneratesarelativelyhighrateoffalsepositives. 3.2 DATASETCONSTRUCTION 3.2.1 CTF We create a dataset for CTF challenges within the PWN category, by selecting problems from the BUUOJ platform 2. In the CTF, participants are usually only provided with a binary, devoid of sourcecode. TocatertoLanguageModelssuchasLLM,whicharelessadeptathandlingassem- blycodedirectly, weuseIDA(InteractiveDisassembler)(Hex-Rays)toextractmoreinterpretable decompiledcodeformodelinput. Thisisfurthercomplicatedbythecharacteristicsofthebinaries inCTFcompetitions. Theyoftenlackessentialdebugandsymbolizationinformation,leavingthem devoidofmeaningfulstructureandvariabledefinitions. To mitigate this, we engage in manual reverse engineering to enhance the structure of the decom- piledcodebyrestoringrecognizableconstructs,renamingvariables,andannotatingthesizeofglobal variables.It’sworthnotingthat,despiteacquiringrefineddecompiledcodeviamanualreverseengi- neering,thegenerateddecompiledcodeoftenbearsintrinsicpatternsorcodesnippetscharacteristic fromthedecompiler,whicharerareintheLLM’strainingset. Assuch,oureffortsshouldfocuson optimizingthereadabilityofthedecompiledcodeforLLMandevaluatetheutilityofsuchmanual interventionsinvulnerabilitydetection,insteadofaimingtoenhancethedecompiledcodetomirror theactualsourcecodedistribution.WegiveanexampleinSectionE.1todemonstratethedifference betweenrawdecompiledcodeandmanuallyreverseddecompiledcodeandthechallengesofusing decompiledcodedirectly. Asforthemetric, asidefromthetraditionalbinaryclassificationandmulti-classclassification, we describetherootcausesofidentifiedvulnerabilitiesusingclear,descriptivenaturallanguageexpla- nationsovertheambiguouspracticeofreferencingcodelines. AnexampleisshowninTable7. 2https://buuoj.cn 4Preprint 3.2.2 MAGMA Our study also adopts the MAGMA fuzzing dataset, a specialized collection created to assess the abilityoffuzzingtoolstotriggerhiddenvulnerabilities.Inthefuzzingprocess,fuzzerscreatemyriad testcasesbymutatinginputsrandomlyandfeedingthemintothetargetprogram. Asvulnerabilities donotconsistentlyleadtoprogramcrashes,relyingsolelyontheexecutionstateoftheprogramis notanidealmethodfordetectingwhetherthevulnerabilityhasbeentriggered. Toaddressthis,the MAGMA dataset includes specialized markers—referred to as ‘canaries’—on the execution paths towards the location of vulnerability. When a canary’s check returns true, it implies that the vul- nerability has been triggered. This approach does not depend on the more rigorous occurrence of programcrashestoascertainthepresenceofvulnerabilities. Whenscrutinizingvulnerabilities,we analyzethesecuritypatchesand‘canaries’tounderstandtherootcauseoftheflaws. InSectionE.2, wegiveanexampleofhowweutilizethisinformation. Toenhancetheanalysis,weprovideadditionalcontextsuchasmacroexpressionsandcorrelatable functionsrelatedtothevulnerabilityduringtheanalysis. Thisadditionalcontextisusedtosimulate areal-worldscenariowhereunderstandingtherelationshipsbetweenfunctionscanprovepivotalin identifying security vulnerability so that we can assess the model’s capability when more context isprovidedanditsresistancetotheimpactofextraneousinformation. Also,weprovidetwoinput features for LLM, the source code extracted from project repositories and the decompiled code extractedfromthecompiledbinarytomimicthescenariosofclosed-sourcesoftware. 3.2.3 DEVIGN,D2A,BIG-VUL For the purpose of supplementing our research with real-world vulnerability data, we additionally incorporate three prior datasets. These include the D2A dataset proposed by Zheng et al. (2021), theDevigndatasetintroducedbyZhouetal.(2019),andtheBig-VuldatasetdiscussedinFanetal. (2020). Nevertheless, we don’t directly employ these datasets in their entirety. Rather, we take |
guidancefrompreviousstudiestoensuremaximumaccuracyandreliabilitywithinourdataset. In Croft et al. (2023), a selection of vulnerability functions are randomly sampled from the triad of datasets. Those are then manually evaluated to ascertain if they represented authentic security patches within the git commit to mitigate the interfering factors such as non-functional changes in the same commit and wrongly identified due to unreliable keyword matching or false positives fromstaticanalysistools. Whenconstructingthedataset,functionsverifiedastruesecuritypatches (Related to fixing the vulnerability) are marked as vulnerable. On the other hand, functions that don’tqualifyassecuritypatches(Relatedtocodecleanuporirrelevantcodechanges)aredesignated asnon-vulnerable. Furthermore,wealsoconsiderpatchedfunctionstobeinvulnerable. Intheannotationprocess,forBig-Vuldataset,wefusethedescriptionsfromtheirCVEpageswith existingtagstocategorizethetypesandweillustratetheinformationprovidedbyBig-Vuldataset andtheannotationprocessinSectionE.3. ForDevigndataset,itlackscorrespondingvulnerability types,leadingtoourrelianceonCVEdescriptionstodistinguishthevulnerabilitytypes. Asforthe D2Avulnerabilitydataset,whichobtainedusingastaticanalysisprogramthatautomaticallynotes thevulnerabilitytype,ithasalreadybeentaggedbystaticanalysistools,facilitatingourdirectreuse. 4 EVALUATION 4.1 SELECTEDMODELS&BASELINES In our experiment, we select a series of large models, including GPT-3.5, GPT-4 and open-access LLMs comprises variations of Llama2 and those that underwent SFT on Llama2. The full list of selected models is in Table 5. We host the models on 48 A800 GPUs across 6 nodes and lever- agevLLM(Kwonetal.,2023)andtext-generation-inference(Huggingface)toexpeditethemodel inferenceprocess. Werepeattherequestsfor5timesforeachvulnerabilitydetectiontask. For comparison, we select three deep learning models and three rule-based static analysis tools according to Steenhoek et al. (2023); Lipp et al. (2022) to serve as baselines. The full list and descriptionareshownintheSectionB.WeexcludetheVulDeeLocator(Lietal.,2021;Zouetal., 2019)asitrequiresalotofhumanefforttoannotatetheexecutiontraceandrequiresthesourcecode 5Preprint 0.5 0.4 0.3 0.2 0.1 0.0 chatg -l 6m b2 Llam -a 7- b2 vicu -n 7a b vi -c 7u bn -a 16k Lla -m 1a- 32 b Baich -u 1a 3n b2 vic -u 1n 3a b vi -c 1u 3n b-a 16 ik nter -n 2l 0m b vic -u 3n 3a b CodeL -l 3a 4m ba falc -o 4n 0b Lla -m 7a- 02 b Platy -p 7u 0s b2 GPT-3.5 GPT-4 1f 0.35 0.30 0.25 0.20 0.15 0.10 0.05 0.00 chatg -l 6m b2 Llam -a 7- b2 vicu -n 7a b vi -c 7u bn -a 16k Lla -m 1a- 32 b Baich -u 1a 3n b2 vic -u 1n 3a b vi -c 1u 3n b-a 16 ik nter -n 2l 0m b vic -u 3n 3a b CodeL -l 3a 4m ba falc -o 4n 0b Lla -m 7a- 02 b Platy -p 7u 0s b2 GPT-3.5 GPT-4 1f 2-Shot-BC 5-Shot-BC 2-Shot-MC 5-Shot-MC Figure 1: Few-shot conversation for binary classification (whether a function is vulnerable) and multi-classclassification(whichvulnerabilitydoesthefunctionhave)inCTFdataset. BCstandsfor binaryclassification. MCstandsformulti-classclassification. compile-able. Using these, we aim to ascertain the degree of superiority, in terms of vulnerability detection,thecurrentLLMpossessescomparedtotraditionalmethods.Especially,theselecteddeep learningmodelscanonlyperformbinaryclassificationasitrequiresretrainingtosupportmulti-class classification,butthesizeofourdatasetistoolimitedtosupportit. 4.2 METRIC UnlikeGPTmodelsthatboastofanexcellentalignment(Ouyangetal.,2022),othermodelsdon’t comeequippedwithamechanismtoseamlesslyoutputinastandardformat. Toaddressthissitua- tion,weemploythetwofew-shotmethods,2-shotsand5-shots,toinsistonauniformtemplatefor output,makingiteasiertoparsetheanswerofmodels. Withinallthedatasets,werandomlyselect theexamplesthatserveasourfew-shotinstances.Thepromptsareformattedinthestyleofachatto ensurealignmentwiththemodels’methods. Eachdialogueroundincorporatedafew-shotinstance, withthesubstantivequestionintroducedonlyfromthethirdroundonwards. Figures3and4offera clearshowcaseofthe2-shotpromptatwork. Duringbinaryclassification, themodelisprogrammedtochanneloutputsasVULNERABLE: NO orVULNERABLE: YES.Alternately,inmulti-classclassificationscenarios,themodeldeliversout- puts as TYPE: [type], where [type] corresponds to a selection from an exhaustive list. To minimize parsing complications and take stock of the model’s few-shot ability, we take a leaf out of MT Bench’s (Zheng et al., 2023) playbook, targeting only the final result that complies with the stipulated format. Model outputs bereft of matching outputs are branded as invalid. Given that the 5-shots prompt is near 2000 tokens and the context length of Vicuna-33b-v1.3 and falcon-40b-instruct are limited to 2048 tokens, not all input may fit within the context lengthofthesemodels,soweexcludethe5-shotresultsofthesemodels. 6Preprint Table2:BinaryclassificationresultsinCTFandreal-worlddatasetsondeep-learning-basedmodels. CTF Real-world F1 Precision Recall F1 Precision Recall VulBERTa 0.354 0.350 0.391 0.406 0.456 0.388 |
LineVul 0.155 0.619 0.187 0.166 0.419 0.193 CodeXGLUE 0.375 0.341 0.617 0.429 0.437 0.462 Table3: Multi-classclassificationresultsinCTFdatasetonstaticanalyzertools,BinAbsInspector isprovidedwiththebinary,flawfinderandcppcheckisprovicedwithdecompiledcode. RawDecompiledcode/Binary ReversedDecompiledcode F1 Precision Recall F1 Precision Recall flawfinder 0.174 0.229 0.662 0.136 0.178 0.324 cppcheck 0.02 0.029 0.015 0.016 0.010 0.170 BinAbsInspector 0.604 0.652 0.563 / / / 4.3 CTFDATASET 4.3.1 OVERALLPERFORMANCE Weshowtheresultofbinaryclassificationandmulti-classclassificationforasinglefunctionwithan averagedF1scoreoverrawdecompiledcodeandmanuallyreverseddecompiledcodeinFigure1. TheresultsofthestaticanalyzeranddeeplearningmodelsareshowninTable3andTable2. The fullresultoftheCTFdatasetisshowninTable12. AnalysisoftheCTFdatasetrevealedthatGPT-4maintaineditsacknowledgedstrongcapabilityin various arenas—be it binary or multi-class classification—garnering impressive results. However, theLlama2familywithRLHFOuyangetal.(2022);Christianoetal.(2017)doesnotachievepro- jected expectations, suggesting that over-alignment will hurt the capability of the LLM OpenAI. When compared to the results of the supervised fine-tuned models of the same size, such as the PlatypusorVicuna,itisseenthatthesemodelsdemonstratedimprovedperformance. Furthermore, concerningdifferentquantitiesoffew-shotexamples,thereisanotableenhancementinthevulner- abilityminingcapabilitiesofmostmodelswhencomparing5-shotto2-shotscenarios. When comparing with the baselines, static analyzers, and deep learning-based models, the GPT- 3.5 and GPT-4 outperform the best baselines in terms of F1 in binary classification whereas the open-accessmodelsfallbehindwhencomparedtoVulBERTaandCodeXGLUE.Whencomparing open-access models that are trained on identical datasets (eg. Llama 2, Vicuna), we notice that whilethephenomenonofthescalinglawpersists,enhancementsarenotablylimited. Thisindicates thatmerelyaugmentingthevolumeofparametersdoesnotnecessarilyoptimizeamodel’scapacity forvulnerabilitydetectionadequately. Accordingly,complementingeffortsneedtobeallocatedto enhancingothercomponents, notablythequalityofthedatasetused. Moreover, inthevulnerabil- itydetectiondomain, substantialdisparitiesstillexistbetweenmodelscomparabletoGPT-3.5and large-scaleopen-accessalternatives,despiteclaimstothecontrary. 4.3.2 ABLATIONSTUDYONPROVIDEDINFORMATION Weconductacomparisonofthemodel’scapacitywhenprovidedwithvaryingtypesofinformation. The lower figure in Figure 8 illustrates the outcome of providing all functions within a binary in the CTF challenge compared to only a single function is provided. Experimental data revealed thatwhenprovidedwithreverse-engineeredpseudocode,severalmodels,specificallyGPT-4,GPT- 3.5,Falcon-40b,andVicuna,demonstratedimprovedperformance,andadeclineislesscommon. It suggeststhesemodels,muchlikehumans,comprehendwell-formattedandreadablecodebetterthan merelydecompiledcode.Interestingly,whencomparingperformanceongivingmorecontext,GPT- 4showesstableresultswhethergivenasinglefunctionortheentirebinary,whileGPT-3.5’saccuracy decreasedwhenexposedtomorefunctions,suggestingthatGPT-4handlesadditionalcontextmore effectively. Regarding open-access models, the majority of them have experienced performance improvements,indicatingthatgiventhecontextallows,acquiringmorecontextcanassistthemodel inmakingbetterjudgments. ThisistrueevenfordatasetslikeCTF,wherevulnerabilitiesoftenonly appearwithinasinglefunction. 7Preprint 0.6 0.5 0.4 0.3 0.2 0.1 0.0 chatg -l 6m b2 Llam -a 7- b2 vicu -n 7a b vi -c 7u bn -a 16k Lla -m 1a- 32 b Baich -u 1a 3n b2 vic -u 1n 3a b vi -c 1u 3n b-a 16 ik nter -n 2l 0m b vic -u 3n 3a b CodeL -l 3a 4m ba falc -o 4n 0b Lla -m 7a- 02 b Platy -p 7u 0s b2 GPT-3.5 GPT-4 1f 0.175 0.150 0.125 0.100 0.075 0.050 0.025 0.000 chatg -l 6m b2 Llam -a 7- b2 vicu -n 7a b vi -c 7u bn -a 16k Lla -m 1a- 32 b Baich -u 1a 3n b2 vic -u 1n 3a b vi -c 1u 3n b-a 16 ik nter -n 2l 0m b vic -u 3n 3a b CodeL -l 3a 4m ba falc -o 4n 0b Lla -m 7a- 02 b Platy -p 7u 0s b2 GPT-3.5 GPT-4 1f 2-Shot-BC 5-Shot-BC 2-Shot-MC 5-Shot-MC Figure 2: Few-shot conversation for binary classification (whether a function is vulnerable) and multi-classclassification(whichvulnerabilitydoesthefunctionhave)inthereal-worlddataset. Table 4: Multi-class classification results in MAGMA dataset on static analyzer tools, BinAbsIn- spector is provided with the binary, flawfinder, and cppcheck are provided with decompiled code andsourcecode. RawDecompiledcode/Binary SourceCode F1 Precision Recall F1 Precision Recall flawfinder 0.064 0.043 0.191 0.052 0.035 0.369 cppcheck 0.069 0.055 0.093 0.096 0.068 0.185 BinAbsInspector 0.011 0.006 0.066 / / / 4.4 REAL-WORLDDATASET Figure 2 showcases data from the real-world dataset as a collective. Along this, the results of the static analyzer and deep learning models are shown in Table 4 and Table 2. The full result on the real-worlddatasetisshowninTable13. |
As we delve into the challenging area of real-world vulnerability detection, it is evident that all LLMs,underperformwhenprovidedwithjustasinglefunction. Thisisduetothefactthatacom- prehensive understanding of the entire program is needed. Despite this, GPT-4 outperforms other models in scenarios requiring multi-class classifications, where the requisite for high model capa- bilities is more pronounced. Conversely, performance declines in binary classification scenarios. Butwhencomparedwiththedeeplearningmodels,theLLMperformsrelativelyworsethanthem. UponexaminingtheirpredictionsinSection4.5andH,wediscernthattheirdecreasedproficiency oftenstemsfromthemodel’sexcessiveconservatism—namely,itspropensitytoyield‘NoVulnera- bility’. Thistendencycontrastswiththatofdeeplearningmodels,which,whentrainedonaspecific dataset, strive to make judgments on the input content as far as possible. This may account for certaindisparitiesbetweenlarge-scalemodelsanddeeplearningmodels. When considering the single magma functions against those offering a more complete context (as depicted in the upper figure in Figure 9), there is negligible performance alteration. The LLMs may find it challenging to discern correctly due to the presence of a large amount of irrelevant normal coding, consequently undermining performance. The lower figure in Figure 9 extends the comparison,juxtaposingthemagmasourcecodewiththedecompiledcodeextractedusingIDA— 8Preprint acomparisonthatyieldssimilarresults. ThisoutcomeisoutofsyncwiththoserecordedintheCTF competition. We credit this discrepancy to the increased complexity found within the real-world vulnerabilitydatasetasopposedtotheCTFdataset. 4.5 ANALYSISOFTHEMODELS’PERFORMANCE InSectionH,wehavepresentedandanalyzedtheperformanceofmodelsonvariousdatasets. Itis noteworthythatdespitethesuboptimalperformanceofallmodelsonreal-worlddatasets,thereasons for their divergent outcomes are distinct. This includes a common tendency toward conservatism, where the models are inclined to identify the absence of vulnerabilities. Notably, models derived fromReinforcementLearningfromHumanFeedback(RLHF)exhibitastrongbiastowardscertain typesofvulnerabilities,highlightingthatRLHFmayalignmodelsmorecloselywithhumanvalues orpreferences,yetcouldalsointensifycertainbiases. ThiscouldpotentiallyconflictwithRLHF’s intenttominimizebiasesrelatedtovaluesorsafety,renderingitlesssuitableforothertasks. 5 DISCUSSION 5.1 EVALUATIONONROOTCAUSEDESCRIPTION In this study, we advocate for a vulnerability dataset characterized by natural language. Given the abundance of results derived from numerous models, manual evaluation of every outcome is impractical. Further complicating matters, the GPT-4’s limited comprehension of vulnerabilities restrictsitsutilityasaproficientcriticakintoMTBench(Zhengetal.,2023),therebyundermining theevaluationofthemodel’sresults. Weenvisageanexpansionofsimilardatasets,accompaniedby thedevelopmentofenhancedautomatedevaluationmethodologies. 5.2 LIMITATIONOFDECOMPILEDCODE We perform vulnerability detection for closed-source software based on decompiled code, but de- compiledcodeitselfalsohascertainlimitations,andtherewillbeerrorsandinformationloss,pre- ventingthevulnerabilitiesfrombeingreflectedatthedecompiledcodelevel. Weshowabadcase ofdecompiledcodeinSectionK.Thedecompilersmustadapttonewcompiler-generatedassembly patterns(Basqueetal.), whichisachallengebeyondthescopeofthisstudy. Betteryet, assembly codecouldbeprocesseddirectlybyanLLM,thoughitsunderstandingfallsshortcomparedtothe original decompiled code. Recent advances in effectively encoding assembly language (Pei et al., 2023;Wangetal.,2022)mightofferaworkaround,allowinganLLMtointerpretassemblydirectly, similartoLLaVA(Liuetal.,2023a),bypassingtheneedfordecompiledcode. 5.3 FUTUREFORVULNERABILITYDETECTIONWITHLLM We illustrate how GPT-4 noticeably surpasses other models on relatively simplistic datasets, such asCTF,indicativeofitscertainproficiencyinvulnerabilitymining. Conversely,whenconsidering thereal-worlddataset,noneofthemodels,exhibitedexceptionalvulnerabilityminingprowess. The increasingcomplexityofsoftwarenecessitatesadeeperunderstandingtoconducteffectivevulner- ability research. Enhancing our comprehension of entire projects (Bairi et al., 2023) is crucial to uncovermoresignificantvulnerabilities. Additionally, toolslikestaticanalysiscansupportLLMs by providing supplementary data or aiding in challenging tasks (Schick et al., 2023; Yang et al., 2023a). Moreover,modelscanactasknowledgerepositories,complementingotherdiscoverytech- niqueslikefuzzingorstaticanalysis(Lietal.,2023;Liuetal.,2023b). 6 CONCLUSION In this study, we introduced a comprehensive vulnerability benchmark dataset, VulBench, and conducted an extensive evaluation of LLMs in the field of software vulnerability detection. Our researchdemonstratesthatincertainscenarios,LLMs,particularlyGPT-4,outperformstraditional deeplearning-basedmodelsandstaticanalyzers,especiallyinCTFdatasets. Thisfindingnotonly 9Preprint underscores the potential application of LLMs in enhancing software security but also opens new avenuesforfutureresearchinautomatedvulnerabilitydetection. However, the performance of all models, including LLMs, drops in more complex real-world |
datasets. This suggests that while LLMs show promise in handling simplified scenarios, they still face challenges in understanding and analyzing complex software systems. Future research could focusonenhancingtheabilityofLLMstoprocesscomplexprojectsandexplorehowtosynergize LLMswithothervulnerabilitydetectiontechniquessuchasfuzzingorstaticanalysistomaximize their strengths. In summary, this study paves new paths for understanding and utilizing LLMs to improvesoftwaresecurity,providingvaluableinsightsanddirectionsfortheadvancementofauto- matedvulnerabilitydetectiontechnologies. REFERENCES TheAFL++fuzzingframework—AFLplusplus. https://aflplus.plus/. Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Co- jocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. Falcon-40B: an open large lan- guagemodelwithstate-of-the-artperformance. 2023. Baichuan. Baichuan2: Openlarge-scalelanguagemodels. arXivpreprintarXiv:2309.10305,2023. URLhttps://arxiv.org/abs/2309.10305. Ramakrishna Bairi, Atharv Sonwane, Aditya Kanade, Vageesh D. C, Arun Iyer, Suresh Parthasarathy,SriramRajamani,B.Ashok,andShashankShet. CodePlan:Repository-levelCod- ingusingLLMsandPlanning,September2023. ZionLeonahenaheBasque, AtiPriyaBajaj, WilGibbs, JudeO’Kain, DerronMiao, AdamDoupe´, Yan Shoshitaishvili, and Ruoyu Wang. Ahoy SAILR! There is No Need to DREAM of C: A Compiler-AwareStructuringAlgorithmforBinaryDecompilation.In33stUSENIXSecuritySym- posium(USENIXSecurity24). EdwardBeeching,Cle´mentineFourrier,NathanHabib,SheonHan,NathanLambert,NazneenRa- jani, Omar Sanseviero, Lewis Tunstall, and Thomas Wolf. Open llm leaderboard. https: //huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard,2023. CristianCadar,DanielDunbar,andDawsonEngler. Klee: Unassistedandautomaticgenerationof high-coveragetestsforcomplexsystemsprograms.InProceedingsofthe8thUSENIXConference onOperatingSystemsDesignandImplementation,OSDI’08,pp.209–224,USA,2008.USENIX Association. S.Chakraborty,R.Krishna,Y.Ding,andB.Ray. Deeplearningbasedvulnerabilitydetection: Are wethereyet? IEEETransactionsonSoftwareEngineering,48(09):3280–3296,sep2022. ISSN 1939-3520. doi: 10.1109/TSE.2021.3087402. Aaron Chan, Anant Kharkar, Roshanak Zilouchian Moghaddam, Yevhen Mohylevskyy, Alec Hel- yar, Eslam Kamal, Mohamed Elkamhawy, and Neel Sundaresan. Transformer-based vulner- ability detection in code at edittime: Zero-shot, few-shot, or fine-tuning? arXiv preprint arXiv:2306.01754,2023. Yizheng Chen, Zhoujie Ding, Lamya Alowain, Xinyun Chen, and David Wagner. Diversevul: A newvulnerablesourcecodedatasetfordeeplearningbasedvulnerabilitydetection. InProceed- ingsofthe26thInternationalSymposiumonResearchinAttacks,IntrusionsandDefenses,RAID ’23, pp. 654–668, New York, NY, USA, 2023. Association for Computing Machinery. ISBN 9798400707650. doi: 10.1145/3607199.3607242. URL https://doi.org/10.1145/ 3607199.3607242. AntonCheshkov,PavelZadorozhny,andRodionLevichev. EvaluationofChatGPTModelforVul- nerabilityDetection,April2023. 10Preprint Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, SiyuanZhuang,YonghaoZhuang,JosephE.Gonzalez,IonStoica,andEricP.Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https: //lmsys.org/blog/2023-03-30-vicuna/. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Ad- vances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/ file/d5e2c0adad503c91f91df240d0cd4e49-Paper.pdf. Roland Croft, M. Ali Babar, and M. Mehdi Kholoosi. Data quality for software vulnerability datasets.InProceedingsofthe45thInternationalConferenceonSoftwareEngineering,ICSE’23, pp.121–133.IEEEPress, 2023. ISBN9781665457019. doi: 10.1109/ICSE48619.2023.00022. URLhttps://doi.org/10.1109/ICSE48619.2023.00022. ZhengxiaoDu, YujieQian, XiaoLiu, MingDing, JiezhongQiu, ZhilinYang, andJieTang. Glm: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60thAnnualMeetingoftheAssociationforComputationalLinguistics(Volume1: LongPapers), pp.320–335,2022. JiahaoFan,YiLi,ShaohuaWang,andTienN.Nguyen. AC/C++CodeVulnerabilityDatasetwith Code Changes and CVE Summaries. In Proceedings of the 17th International Conference on MiningSoftwareRepositories, pp.508–512, SeoulRepublicofKorea, June2020.ACM. ISBN 978-1-4503-7517-7. doi: 10.1145/3379597.3387501. MichaelFuandChakkritTantithamthavorn. LineVul: ATransformer-basedLine-LevelVulnerabil- |
ityPrediction. In2022IEEE/ACM19thInternationalConferenceonMiningSoftwareReposito- ries(MSR),pp.608–620,May2022. doi: 10.1145/3524842.3528452. Patrice Godefroid, Hila Peleg, and Rishabh Singh. Learn&Fuzz: Machine Learning for Input Fuzzing.In201732ndIEEE/ACMInternationalConferenceonAutomatedSoftwareEngineering (ASE),pp.50–59,January2017. guyharris. CVE-2017-13010/BEEP:Doboundscheckingwhencomparingstrings. https://github.com/the-tcpdump-group/tcpdump/commit/ 877b66b398518d9501513e0860c9f3a8acc70892. Accessed: 2023-11-14. Hazim Hanif and Sergio Maffeis. Vulberta: Simplified source code pre-training for vulnerability detection. In 2022 International joint conference on neural networks (IJCNN), pp. 1–8. IEEE, 2022. Ahmad Hazimeh, Adrian Herrera, and Mathias Payer. Magma: A Ground-Truth Fuzzing Bench- mark. ProceedingsoftheACMonMeasurementandAnalysisofComputingSystems,4(3):1–29, November2020. ISSN2476-1249. doi: 10.1145/3428334. DanHendrycks,CollinBurns,StevenBasart,AndyZou,MantasMazeika,DawnSong,andJacob Steinhardt. Measuring massive multitask language understanding. Proceedings of the Interna- tionalConferenceonLearningRepresentations(ICLR),2021. Hex-Rays. Hex-rays – state of the art binary analysis solutions. https://www.hex-rays. com/. Accessed: 2023-11-20. Huggingface. Text generation inference. https://huggingface.co/docs/ text-generation-inference/index. Accessed: 2023-09-20. Keenlab. Binary abstract inspector, April 2022. URL https://github.com/ KeenSecurityLab/BinAbsInspector. WoosukKwon,ZhuohanLi,SiyuanZhuang,YingSheng,LianminZheng,CodyHaoYu,JosephE. Gonzalez,HaoZhang,andIonStoica. Efficientmemorymanagementforlargelanguagemodel servingwithpagedattention. InProceedingsoftheACMSIGOPS29thSymposiumonOperating SystemsPrinciples,2023. 11Preprint ArielN.Lee,ColeJ.Hunter,andNatanielRuiz. Platypus: Quick,Cheap,andPowerfulRefinement ofLLMs,August2023. HaonanLi,YuHao,YizhuoZhai,andZhiyunQian. TheHitchhiker’sGuidetoProgramAnalysis: AJourneywithLargeLanguageModels,July2023. Zhen Li, Deqing Zou, Shouhuai Xu, Xinyu Ou, Hai Jin, Sujuan Wang, Zhijun Deng, and Yuyi Zhong. VulDeePecker: ADeepLearning-BasedSystemforVulnerabilityDetection. InProceed- ings2018NetworkandDistributedSystemSecuritySymposium,2018. doi: 10.14722/ndss.2018. 23158. Zhen Li, Deqing Zou, Shouhuai Xu, Zhaoxuan Chen, Yawei Zhu, and Hai Jin. VulDeeLocator: ADeepLearning-basedFine-grainedVulnerabilityDetector. IEEETransactionsonDependable andSecureComputing,pp.1–1,2021. ISSN1545-5971,1941-0018,2160-9209. doi: 10.1109/ TDSC.2021.3076142. StephanLipp,SebastianBanescu,andAlexanderPretschner.Anempiricalstudyontheeffectiveness ofstaticCcodeanalyzersforvulnerabilitydetection. InProceedingsofthe31stACMSIGSOFT International Symposium on Software Testing and Analysis, pp. 544–555, Virtual South Korea, July2022.ACM. ISBN978-1-4503-9379-9. doi: 10.1145/3533767.3534380. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual Instruction Tuning, April 2023a. PuzhuoLiu,ChengnianSun,YaowenZheng,XuanFeng,ChuanQin,YunchengWang,ZhiLi,and LiminSun. HarnessingthePowerofLLMtoSupportBinaryTaintAnalysis,October2023b. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, LukeZettlemoyer, andVeselinStoyanov. RoBERTa: ARobustlyOptimizedBERTPre- trainingApproach,July2019. Kangjie Lu, Aditya Pakki, and Qiushi Wu. Detecting Missing-Check bugs via semantic- and Context-Aware criticalness and constraints inferences. In 28th USENIX Security Sympo- sium (USENIX Security 19), pp. 1769–1786, Santa Clara, CA, August 2019. USENIX As- sociation. ISBN 978-1-939133-06-9. URL https://www.usenix.org/conference/ usenixsecurity19/presentation/lu. Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement,DawnDrain,DaxinJiang,DuyuTang,GeLi,LidongZhou,LinjunShou,LongZhou, MicheleTufano,MingGong,MingZhou,NanDuan,NeelSundaresan,ShaoKunDeng,Shengyu Fu, and Shujie Liu. CodeXGLUE: A Machine Learning Benchmark Dataset for Code Under- standingandGeneration. InProceedingsoftheNeuralInformationProcessingSystemsTrackon DatasetsandBenchmarks,volume1.Curran,March2021. doi: 10.48550/arXiv.2102.04664. DanielMarjama¨ki.Cppcheck:Atoolforstaticc/c++codeanalysis,September2023.URLhttps: //cppcheck.sourceforge.io/. MITRE. CVE. https://cve.mitre.org. Accessed: 2023-09-20. David Noever. Can large language models find and fix vulnerable software? arXiv preprint arXiv:2308.10345,2023. OpenAI. GPT-4TechnicalReport. Technicalreport. LongOuyang,JeffreyWu,XuJiang,DiogoAlmeida,CarrollWainwright,PamelaMishkin,Chong Zhang,SandhiniAgarwal,KatarinaSlama,AlexRay,etal. Traininglanguagemodelstofollow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744,2022. K.Pei,Z.Xuan,J.Yang,S.Jana,andB.Ray.Learningapproximateexecutionsemanticsfromtraces |
forbinary functionsimilarity. IEEETransactions onSoftware Engineering, 49(04):2776–2790, apr2023. ISSN1939-3520. doi: 10.1109/TSE.2022.3231621. 12Preprint BaptisteRozie`re,JonasGehring,FabianGloeckle,StenSootla,ItaiGat,XiaoqingEllenTan,Yossi Adi,JingyuLiu,TalRemez,Je´re´myRapin,ArtyomKozhevnikov,IvanEvtimov,JoannaBitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre De´fossez, JadeCopet, FaisalAzhar, HugoTouvron, LouisMartin, NicolasUsunier, ThomasScialom, and GabrielSynnaeve. CodeLlama: OpenFoundationModelsforCode,August2023. TimoSchick,JaneDwivedi-Yu,RobertoDess`ı,RobertaRaileanu,MariaLomeli,LukeZettlemoyer, NicolaCancedda,andThomasScialom. Toolformer: LanguageModelsCanTeachThemselves toUseTools,February2023. JiShi, ZhunWang, ZhiyaoFeng, YangLan, ShisongQin, WeiYou, WeiZou, MathiasPayer, and ChaoZhang. AIFORE:Smartfuzzingbasedonautomaticinputformatreverseengineering. In 32ndUSENIXSecuritySymposium(USENIXSecurity23),pp.4967–4984,Anaheim,CA,August 2023.USENIXAssociation. ISBN978-1-939133-37-3. URLhttps://www.usenix.org/ conference/usenixsecurity23/presentation/shi-ji. B. Steenhoek, M. Rahman, R. Jiles, and W. Le. An empirical study of deep learning models for vulnerability detection. In 2023 IEEE/ACM 45th International Conference on Software En- gineering (ICSE), pp. 2237–2248, Los Alamitos, CA, USA, may 2023. IEEE Computer Soci- ety. doi:10.1109/ICSE48619.2023.00188. URLhttps://doi.ieeecomputersociety. org/10.1109/ICSE48619.2023.00188. InternLMTeam.Internlm:Amultilinguallanguagemodelwithprogressivelyenhancedcapabilities. https://github.com/InternLM/InternLM,2023. HugoTouvron,LouisMartin,KevinStone,PeterAlbert,AmjadAlmahairi,YasmineBabaei,Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu,WenyinFu,BrianFuller,CynthiaGao,VedanujGoswami,NamanGoyal,AnthonyHartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann,ArtemKorenev,PunitSinghKoura,Marie-AnneLachaux,ThibautLavril,JenyaLee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, AlanSchelten,RuanSilva,EricMichaelSmith,RanjanSubramanian,XiaoqingEllenTan,Binh Tang,RossTaylor,AdinaWilliams,JianXiangKuan,PuxinXu,ZhengYan,IliyanZarov,Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, SergeyEdunov,andThomasScialom. Llama2: OpenFoundationandFine-TunedChatModels, July2023. Hao Wang, Wenjie Qu, Gilad Katz, Wenyu Zhu, Zeyu Gao, Han Qiu, Jianwei Zhuge, and Chao Zhang.jTrans:Jump-awaretransformerforbinarycodesimilaritydetection.InProceedingsofthe 31stACMSIGSOFTInternationalSymposiumonSoftwareTestingandAnalysis,pp.1–13,Virtual SouthKorea,July2022.ACM. ISBN978-1-4503-9379-9. doi: 10.1145/3533767.3534367. DavidWheeler.Flawfinder,September2023.URLhttps://dwheeler.com/flawfinder/. Qiushi Wu, Aditya Pakki, Navid Emamdoost, Stephen McCamant, and Kangjie Lu. Under- standing and detecting disordered error handling with precise function pairing. In 30th USENIXSecuritySymposium(USENIXSecurity21),pp.2041–2058.USENIXAssociation,Au- gust 2021. ISBN 978-1-939133-24-3. URL https://www.usenix.org/conference/ usenixsecurity21/presentation/wu-qiushi. HuiYang,SifuYue,andYunzhongHe. Auto-GPTforOnlineDecisionMaking: Benchmarksand AdditionalOpinions,June2023a. Zhen Yang, Ming Ding, Qingsong Lv, Zhihuan Jiang, Zehai He, Yuyi Guo, Jinfeng Bai, and Jie Tang. GPTCanSolveMathematicalProblemsWithoutaCalculator,September2023b. Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, Wenguang Chen,ZhiyuanLiu,PengZhang,YuxiaoDong,andJieTang. GLM-130b:Anopenbilingualpre- trained model. In The Eleventh International Conference on Learning Representations (ICLR), 2023. URLhttps://openreview.net/forum?id=-Aw0rrrPUF. 13Preprint Chenyuan Zhang, Hao Liu, Jiutian Zeng, Kejing Yang, Yuhong Li, and Hui Li. Prompt-enhanced softwarevulnerabilitydetectionusingchatgpt. arXivpreprintarXiv:2308.12697,2023. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, ZiLin,ZhuohanLi,DachengLi,EricP.Xing,HaoZhang,JosephE.Gonzalez,andIonStoica. JudgingLLM-as-a-judgewithMT-BenchandChatbotArena,July2023. Yunhui Zheng, Saurabh Pujar, Burn Lewis, Luca Buratti, Edward Epstein, Bo Yang, Jim Laredo, |
Alessandro Morari, and Zhong Su. D2a: A dataset built for ai-based vulnerability detection methods using differential analysis. In Proceedings of the 43rd International Conference on Software Engineering: Software Engineering in Practice, ICSE-SEIP ’21, pp. 111–120. IEEE Press,2021. ISBN9780738146690. doi: 10.1109/ICSE-SEIP52600.2021.00020. URLhttps: //doi.org/10.1109/ICSE-SEIP52600.2021.00020. YaqinZhou,ShangqingLiu,JingkaiSiow,XiaoningDu,andYangLiu. Devign: EffectiveVulnera- bilityIdentificationbyLearningComprehensiveProgramSemanticsviaGraphNeuralNetworks. CurranAssociatesInc.,RedHook,NY,USA,2019. DeqingZou,SujuanWang,ShouhuaiXu,ZhenLi,andHaiJin. µVulDeePecker: ADeepLearning- Based System for Multiclass Vulnerability Detection. IEEE Transactions on Dependable and Secure Computing, pp. 1–1, 2019. ISSN 1545-5971, 1941-0018, 2160-9209. doi: 10.1109/ TDSC.2019.2942930. 14Preprint A SELECTED LARGE LANGUAGE MODELS Table5showstheselectedLLMsandtheircorrespondingmodelsize. Weusethemodelnameon Huggingface. Name Size ChatGLM2-6b(Duetal.,2022;Zengetal.,2023) 6B Llama-2-7b-chat-hf(Touvronetal.,2023) Vicuna-7b-v1.5(Chiangetal.,2023) 7B Vicuna-7b-v1.5-16k Llama-2-13b-chat-hf Vicuna-13b-v1.5 13B Vicuna-13b-v1.5-16k Baichuan2-13B-Chat(Baichuan,2023) Internlm-20b-chat(Team,2023) 20B Vicuna-33b-v1.3 33B CodeLlama-34b-Instruct(Rozie`reetal.,2023) 34B Falcon-40b-instruct(Almazroueietal.,2023) 40B Llama-2-70b-chat-hf 70B Platypus2-70B-instruct(Leeetal.,2023) GPT-3.5(Ouyangetal.,2022) / GPT-4(OpenAI) Table5: Namesandsizesofthelargemodelsselectedintheexperiment,matchedwiththeirname onHuggingface. B SELECTED BASELINES We have selected three deep learning models and three rule-based static analysis tools according toSteenhoeketal.(2023);Lippetal.(2022)toserveasbaselineswhencomparingwiththeLLMs. Thebasicmethodsofthebaselinesareshownabove. • LineVul(Fu&Tantithamthavorn,2022): CodeBERT-basedmodeltrainedontheBig-Vul datasetwithline-levelvulnerabilityprediction. • CodeXGLUE(Luetal.,2021): CodeBERT-basedmodeltrainedontheDevigndataset. • VulBERTa(Hanif&Maffeis,2022): Roberta-basedmodeltrainedonthemultiplevulner- abilitydatasetswithcustomtokenizationpipeline. • Cppcheck(Marjama¨ki,2023):usingacombinationofsyntacticandsemanticanalysistech- niquestodetectpotentialvulnerableC/C++sourcecode. • Flawfinder(Wheeler,2023): implementsasyntacticanalysistechniquethatscansC/C++ sourcecodetoreportpotentialsecurityflaws. • BinAbsInspector(Keenlab,2022): astaticanalyzerworksonGhidra’sPcodetoscanvul- nerabilitiesinbinaries C DATASET DETAILS Table 6 shows the specific types of vulnerabilities contained in each dataset, and in multi-label classification, the label of each dataset is composed of the vulnerability types of the respective datasetsandNo Vulnerability.Whenweconductbinaryclassificationexperiments,ourfocus isfunction-levelvulnerabilities,hencewebreakdowncomplexdatasetslikeCTFandMAGMAinto individualfunctionsforevaluation. FordatasetslikeDevign,D2A,andBig-Vul,whicharealready segmentedintofunctions,wedirectlyinputthemforevaluation,withoutextraprocessing. 15Preprint Dataset TypesofVulnerabilities bufferoverflow,formatstringvulnerability,integeroverflow,typeconfusion, CTF useafterfree bufferoverflow,integeroverflow,matherror,memoryleak,nullpointer MAGMA dereference,resourceexhaustion,typeconfusion,useafterfree bufferoverflow,dividebyzero,integeroverflow,lackofsanitycheck, Devign memoryleak,nullpointerdereference,racecondition bufferoverflow,impropercontrol,integeroverflow,matherror,memoryleak, Big-Vul nullpointerdereference,racecondition,resourceexhaustion,useafterfree D2A bufferoverflow,integeroverflow,nullpointerdereference,resourceexhaustion Table6: Thetypesofvulnerabilitiescontainedineachdataset. D ILLUSTRATION OF THE NEED OF CONTEXT We illustrate the need for context instead of only a single function is adequate in vulnerabil- ity detection in Listing 2, even if there are some patterns matched. kbuf = kzalloc(len, GFP KERNEL); allocates a memory block. But it returns -EFAULT directly without freeing the kbufifcopy from iter full(kbuf, len, from)returns0. Itisacommonpatternfor memoryleak.Butwestillcannotdeterminewhetheritwillleadtomemoryleakuntilwelookinside the implement of the function copy from iter full cause it may free the memory passed as parametersifanythinggoeswrong. 1 static ssize_t qrtr_tun_write_iter(struct kiocb *iocb, struct iov_iter *from) 2 { 3 struct file *filp = iocb->ki_filp; 4 struct qrtr_tun *tun = filp->private_data; 5 size_t len = iov_iter_count(from); 6 ssize_t ret; 7 void *kbuf; 8 kbuf = kzalloc(len, GFP_KERNEL); 9 if (!kbuf) 10 return -ENOMEM; 11 if (!copy_from_iter_full(kbuf, len, from)) 12 return -EFAULT; 13 ret = qrtr_endpoint_post(&tun->ep, kbuf, len); 14 return ret < 0 ? ret : len; 15 } Listing2: VulnerablefuntioninCVE-2019-19079. 16Preprint |
E EXAMPLE IN EACH DATASET E.1 CTFDATASET In Listing 3, we give an example of raw decompiled code from IDA in a CTF binary. With the removalofsymbolinformation,theinitialglobalvariablesarenowreplacedwithidentifiersthatare represented as addresses; for instance, dword 202050 and unk 202060. Likewise, any infor- mation regarding structures is removed. The revised way of accessing potential structure member variables now involves an address offset method, whose representative equivalent is *(( QWORD *)&unk 202070 + 4 * i). This differs from the previous, more common method which in- volveddirectmembernameaccess. 1 int dword_202050; 2 char unk_202060[16]; 3 char unk_202070[8]; 4 _QWORD qword_202078[17]; 5 6 unsigned __int64 sub_B0E() 7 { 8 unsigned int v1; // [rsp+0h] [rbp-10h] BYREF 9 unsigned int i; // [rsp+4h] [rbp-Ch] 10 unsigned __int64 v3; // [rsp+8h] [rbp-8h] 11 12 v3 = __readfsqword(0x28u); 13 if ( dword_202050 > 5 ) 14 { 15 puts("You can’t add any more notes!"); 16 exit(0); 17 } 18 for ( i = 0; i <= 4 && *((_QWORD *)&unk_202070 + 4 * i); ++i ) 19 ; 20 puts("How long is your note?"); 21 _isoc99_scanf("%u", &v1); 22 if ( i > 0x80 ) 23 { 24 puts("Too long!"); 25 exit(0); 26 } 27 *((_QWORD *)&unk_202070 + 4 * i) = 1LL; 28 qword_202078[4 * i] = malloc(v1 + 1); 29 puts("Input your note name and note content:"); 30 sub_AD6((char *)&unk_202060 + 32 * i, 21LL); 31 sub_AD6(qword_202078[4 * i], v1); 32 ++dword_202050; 33 puts("Done!"); 34 return __readfsqword(0x28u) ˆ v3; 35 } Listing3:AsinglefunctioninsideCTFproblemt3sec2018 xueba.Decompiledcodefrom IDAalongwithusedglobalvariables. We restored the structure and renamed variables of the decompiled code shown in Listing 3, and theoutcomesareshowcasedinListing4. Despitethemanualreverseengineeringeffort,thecontrol flowwasnotsimplified. Moreover,certainpatternsnecessitatedbycompileroperations,suchasthe stackoverflowcanarycheckindicatedbythecodelinev2 = readfsqword(0x28u);,have been preserved. The decompiled code references library function names typically used internally bycompilers,suchas isoc99 scanf,ratherthantheirmorecommonequivalents,likescanf. However, the structural reverse engineering has clarified that multiple identifiers - unk 202060, unk 202070,andqword 202078-fromListing3are,infact,referencestothesamestructured variable Note notes[5]. The IDA, which utilizes access patterns to deduce structure, can not mergetheseidentifiers. Thisreverseengineeringprocedureaddressesthisproblem, enhancingthe comprehensibilityoftheprogram’sfunctioning. Given a more understandable decompile, we can determine the vulnerability inside this function, andthiswillserveasthenaturallanguagedescriptionofthevulnerability. ItisshowninTable7 17Preprint 1 struct Note 2 { 3 char name[16]; 4 __int64 used; 5 char *content; 6 }; 7 8 Note notes[5]; 9 10 void add_note() 11 { 12 int v0; 13 unsigned int i; 14 unsigned __int64 v2; 15 16 v2 = __readfsqword(0x28u); 17 if ( note_count > 5 ) 18 { 19 puts("You can’t add any more notes!"); 20 exit(0); 21 } 22 for ( i = 0; i <= 4 && notes[i].used; ++i ) 23 ; 24 puts("How long is your note?"); 25 _isoc99_scanf("%u", &v0); 26 if ( i > 0x80 ) 27 { 28 puts("Too long!"); 29 exit(0); 30 } 31 notes[i].used = 1LL; 32 notes[i].content = (char *)malloc((unsigned int)(v0 + 1)); 33 puts("Input your note name and note content:"); 34 read_str(notes[i].name, 0x15u); 35 read_str(notes[i].content, v0); 36 ++note_count; 37 puts("Done!"); 38 } Listing 4: A single function inside CTF problem t3sec2018 xueba. Manually reversed decompiledcodefromIDAalongwithusedglobalvariables. the name in the Node struct has only a size of 16. But in the statement read str(notes[i].name, 0x15u);, the attacker can feed at most 0x15 bytes into notes[i].name,leadingtoaBuffer-Overflowvulnerability. Table7: ThenaturallanguagedescriptionofvulnerabilityinsideListing4. E.2 MAGMADATASET 1 static int wav_write_header(SF_PRIVATE *psf, int calc_length) { 2 ... 3 #ifdef MAGMA_ENABLE_FIXES 4 /* Make sure we don’t read past the loops array end. */ 5 if (psf->instrument->loop_count > ARRAY_LEN(psf->instrument->loops)) 6 psf->instrument->loop_count = ARRAY_LEN(psf->instrument->loops); 7 #endif 8 #ifdef MAGMA_ENABLE_CANARIES 9 MAGMA_LOG("%MAGMA_BUG%", 10 apsf->instrument->loopcount > ARRAYLEN(psf->instrument->loops)); 11 #endif 12 for (tmp = 0; tmp < psf->instrument->loop_count; tmp++) { 13 int type; 14 type = psf->instrument->loops[tmp].mode; 15 ... 16 } 17 ... 18 } Listing5: MAGMAexample(SND025)withan‘Out-of-Bound-Read’Vulnerability. 18Preprint In Listing 5, we show a case where we can use the ‘canary’ and the corresponding fixes to help understand the root cause of the vulnerability in the MAGMA dataset. In each MAGMA bug, the corresponding source code contains two marco MAGMA ENABLE CANARIES and |
MAGMA ENABLE FIXEStocheckwhetherthebugistriggeredandthefixesforthebug. Thecode snippets inside marco MAGMA ENABLE CANARIES will act as the ‘canary’ for checking whether the vulnerability is triggered during Fuzzing. The code inside marco MAGMA ENABLE FIXES will serve as the fix for this vulnerability. In this example, we identify the flaw as an ‘Out-of- Bound-Read’ according to the canary psf->instrument->loop_count > ARRAY_LEN(psf->instrument-> loops)andcorrelatewithsubsequentcode,thatispsf->instrument->loops[tmp].mode. E.3 BIG-VULDATASET InListing6,weshowanexamplefromBig-Vuldatasettodemonstratehowwenormalizethe‘CWE ID’, ‘Summary’, and ‘Vulnerability Classification’ into the label of multi-class classification. In the original Big-Vul dataset, there are ‘CWE ID’, ‘Summary’, and ‘Vulnerability Classification’ to describe the vulnerability of the function, however, they do not always exist. For example, in the vulnerable function in Listing 6, it contains ‘CWE ID’ (CWE-416) and ‘Summary’ (Shown in Table 8 but the ‘Vulnerability Classification’ is absent. Then we can determine that there is a ‘Use-After-Free’vulnerabilityinthisfunction. 1 int blk_init_allocated_queue(struct request_queue *q) 2 { 3 WARN_ON_ONCE(q->mq_ops); 4 5 q->fq = blk_alloc_flush_queue(q, NUMA_NO_NODE, q->cmd_size); 6 if (!q->fq) 7 return -ENOMEM; 8 9 if (q->init_rq_fn && q->init_rq_fn(q, q->fq->flush_rq, GFP_KERNEL)) 10 goto out_free_flush_queue; 11 12 if (blk_init_rl(&q->root_rl, q, GFP_KERNEL)) 13 goto out_exit_flush_rq; 14 15 INIT_WORK(&q->timeout_work, blk_timeout_work); 16 q->queue_flags |= QUEUE_FLAG_DEFAULT; 17 18 /* 19 * This also sets hw/phys segments, boundary and size 20 */ 21 blk_queue_make_request(q, blk_queue_bio); 22 23 q->sg_reserved_size = INT_MAX; 24 25 if (elevator_init(q)) 26 goto out_exit_flush_rq; 27 return 0; 28 29 out_exit_flush_rq: 30 if (q->exit_rq_fn) 31 q->exit_rq_fn(q, q->fq->flush_rq); 32 out_free_flush_queue: 33 blk_free_flush_queue(q->fq); 34 return -ENOMEM; 35 } Listing6: ABig-Vulexample(CVE-2018-20856)containingaUse-After-Freevulnerability. AnissuewasdiscoveredintheLinuxkernelbefore4.18.7. Inblock/blk-core.c,thereis an blk drain queue()use-after-freebecauseacertainerrorcaseismishandled. Table8: SummaryofListing6inBig-Vuldataset 19Preprint F CASE STUDY Here we show some cases (1 CTF and 2 real-world cases) where GPT-4 understand the code cor- rectlyandsuccessfullydetectthevulnerabilities. ThevulnerablefunctionsareshowninListing7, Listing8andListing10,andthecorrespondinganswersofGPT-4areshowninTable9,Table10, andTable11. F.1 CTFCASE1 InCTFproblemhuxiangbei 2019 namesystemshowninListing7andcorrespondinganswer inTable9,GPT-4successfullyidentifya1[v0] = a1[v0 + 1]willcausetwopointersinside thea1pointtothesamememory,resultingthepossibleUse-After-Freevulnerability. 1 void __fastcall delete_name() 2 { 3 unsigned int v0; // [rsp+Ch] [rbp-4h] 4 5 printf("The id you want to delete:"); 6 v0 = read_int(); 7 if ( v0 <= 0x13 && a1[v0] ) 8 { 9 free(a1[v0]); 10 a1[v0] = 0LL; 11 while ( (int)(v0 + 1) <= 19 ) 12 { 13 a1[v0] = a1[v0 + 1]; 14 ++v0; 15 } 16 puts("Done!"); 17 } 18 else 19 { 20 puts("Invalid id !!"); 21 } 22 } Listing7: Functiondelete nameinCTFchallengehuxiangbei 2019 namesystem. Output: ThecodehasapotentialUse-After-Freevulnerability. Thevulnerabilityoccursinthedelete name()func- tion: void fastcalldelete name() { ... free(a1[v0]); a1[v0]=0LL; while((int)(v0+1)≤19) { a1[v0]=a1[v0+1]; ++v0; } ... } Afterfreeingthememoryata1[v0],thefunctionshiftsa1[v0+1]toa1[v0]withoutcheckingifa1[v0+1] isNULL.Thiscouldleadtoause-after-freevulnerabilitywhentheprogramaccessesthefreedmemoryin subsequentoperations. TYPE:Use-After-Free Table9: GPT-4’sAnswerforListing7inMulti-classClassification. 20Preprint F.2 REALWORLDCASE1 Infunctionfromreal-worldprogram‘tcpdump’showninListing8,theLLMidentifiesthedifference between the first call to l strnstart and the subsequent l strnstart, and report that as a possiblevulnerability.However,toverifytheexploitabilityofthisvulnerability,weneedtoexamine thefunctionl strnstart. Thedevelopermaycheckthelengthinsidel strnstartasshown inListing9. 1 beep_print(netdissect_options *ndo, const u_char *bp, u_int length) 2 { 3 4 if (l_strnstart("MSG", 4, (const char *)bp, length)) /* A REQuest */ 5 ND_PRINT((ndo, " BEEP MSG")); 6 else if (l_strnstart("RPY ", 4, (const char *)bp, length)) 7 ND_PRINT((ndo, " BEEP RPY")); 8 else if (l_strnstart("ERR ", 4, (const char *)bp, length)) |
9 ND_PRINT((ndo, " BEEP ERR")); 10 else if (l_strnstart("ANS ", 4, (const char *)bp, length)) 11 ND_PRINT((ndo, " BEEP ANS")); 12 else if (l_strnstart("NUL ", 4, (const char *)bp, length)) 13 ND_PRINT((ndo, " BEEP NUL")); 14 else if (l_strnstart("SEQ ", 4, (const char *)bp, length)) 15 ND_PRINT((ndo, " BEEP SEQ")); 16 else if (l_strnstart("END", 4, (const char *)bp, length)) 17 ND_PRINT((ndo, " BEEP END")); 18 else 19 ND_PRINT((ndo, " BEEP (payload or undecoded)")); 20 } Listing8: Areal-worldfunctionnamedbeep print. Output: Thefunctionhasabufferoverflowvulnerabilityduetothecode: if (l strnstart("MSG", 3, (const char *)bp, length)) /* A REQuest */ ND PRINT((ndo, " BEEP MSG")); Theissueliesinthesecondargumentofl strnstart()function,whichshouldbe3insteadof4. This isbecausethestring"MSG"hasalengthof3,not4. Ifthelengthofbpislessthan3,thefunctionwill readbeyondthebufferboundary,leadingtoabufferoverflowvulnerability. TYPE:Buffer-Overflow Table10: GPT-4’sAnswerforListing8inMulti-classClassification. 1 static int 2 l_strnstart(netdissect_options *ndo, const char *tstr1, u_int tl1, 3 const char *str2, u_int l2) 4 { 5 if (!NDTTESTLEN(str2, tl1)) { 6 /* We don’t have tl1 bytes worth of captured data 7 * for the string, so we can’t check for this 8 * string. */ 9 return 0; 10 } 11 if (tl1 > l2) 12 return 0; 13 return (strncmp(tstr1, str2, tl1) == 0 ? 1 : 0); 14 } Listing9: l strnstartfunctioncalledinsidebeep print. Thecommit(guyharris)tofix thisCVEdidnotmodifyhowthel strnstartiscalled,instead,thedeveloperaddanextra checkonlength. 21Preprint F.3 REALWORLDCASE2 Infunctionfromreal-worldprogramV8showninListing10,GPT-4usesitsknowledgetoidentify thatcontext ->CallFunctionwillpotentiallyretainareferencetotheobject,yieldingaUse- After-Freevulnerability.ThisrequiresthattheLLMhasinternalknowledgeaboutthesideeffectsof thecallingfunction. However,whendealingwithanewproject,itrequiresthatthecalledfunction ispresentinthecontextastheLLMdoesnotknowtheproject. 1 void CallbackAndDie(bool succeeded) { 2 v8::Isolate* isolate = context_->isolate(); 3 v8::HandleScope handle_scope(isolate); 4 v8::Local<v8::Value> args[] = {v8::Boolean::New(isolate, succeeded)}; 5 context_->CallFunction(v8::Local<v8::Function>::New(isolate, callback_), 6 arraysize(args), args); 7 delete this; 8 } Listing10: Areal-worldfunctionnamedCallbackAndDie. Output: The code snippet appears to be a callback function for a V8 JavaScript engine context. It creates a handle scope and calls a function with the given arguments. After the function call, the object is deleted. There is a potential Use-After-Free vulnerability if the callback functionretainsareferencetotheobjectandattemptstoaccessitafterthe“deletethis”statement. TYPE:Use-After-Free Table11: GPT-4’sAnswerforListing10inMulti-classClassification. F.4 REALWORLDCASE3 In the function from a real-world Lua program shown in Listing 11, all LLMs fail to identify the integer overflow vulnerability. This vulnerability is still challenging for experienced security re- searchers,whichrequiresLLMstounderstandhowintegersarestoredincomputers,andtorealize thatthelogicofthiscodesegmentfailstoconsidertheonlycornercase:theresultoftakingthenega- tionofINT MINisstillINT MIN.Whenthefindvarargfunctioniscalled,ifnequalsINT MIN, itleadstointegeroverflowintheexpression*pos=ci->func - nextra + (n-1). 1 static const char *findvararg(CallInfo *ci, int n, StkId *pos) { 2 if (clLvalue(s2v(ci->func))->p->is_vararg) { 3 int nextra = ci->u.l.nextraargs; 4 if (n <= nextra) { 5 *pos = ci->func - nextra + (n - 1); 6 return "(vararg)"; /* generic name for any vararg */ 7 } 8 } 9 return NULL; /* no such vararg */ 10 } 11 12 const char *luaG_findlocal(lua_State *L, CallInfo *ci, int n, StkId *pos) { 13 StkId base = ci->func + 1; 14 const char *name = NULL; 15 if (isLua(ci)) { 16 if (n < 0) /* access to vararg values? */ 17 return findvararg(ci, -n, pos); 18 else 19 name = luaF_getlocalname(ci_func(ci)->p, n, currentpc(ci)); 20 } 21 /* 12 lines here */ 22 if (pos) 23 *pos = base + (n - 1); 24 return name; 25 } Listing11: Areal-worldintegeroverflowvulnerabilityinlua. 22Preprint G ILLUSTRATION OF FEW SHOT LEARNING CONVERSATION To ease the parsing process, we use a few shot learning to teach the models how to answer the question in a predefined format, especially those models not aligned to follow human instruction well, such as Baichuan2, internlm, and falcon. We show how we format the 2-shot conversation templateinFigure3andFigure4. Ineachfew-shotconversation,weteachthemodeltofollowthe formatasVULNERABLE: NO,VULNERABLE: YESorTYPE: [type]. Pleasenotethatthefollowingcodesandcorrespondingexampleanswersforonlyforillustration. Please review and analyse the following code, determine whether it is vulnerable. Think step by step. Answer as "VULNERABLE: YES/NO". beep_print(netdissect_options *ndo, const u_char *bp, u_int length) { |
if (l_strnstart("MSG", 4, (const char *)bp, length)) /* A REQuest */ ND_PRINT((ndo, " BEEP MSG")); else if (l_strnstart("RPY ", 4, (const char *)bp, length)) ND_PRINT((ndo, " BEEP RPY")); else if (l_strnstart("ERR ", 4, (const char *)bp, length)) ND_PRINT((ndo, " BEEP ERR")); else if (l_strnstart("ANS ", 4, (const char *)bp, length)) ND_PRINT((ndo, " BEEP ANS")); else if (l_strnstart("NUL ", 4, (const char *)bp, length)) ND_PRINT((ndo, " BEEP NUL")); else if (l_strnstart("SEQ ", 4, (const char *)bp, length)) ND_PRINT((ndo, " BEEP SEQ")); else if (l_strnstart("END", 4, (const char *)bp, length)) ND_PRINT((ndo, " BEEP END")); else ND_PRINT((ndo, " BEEP (payload or undecoded)")); } The function has a buffer overflow vulnerability due to the code: if (l_strnstart("MSG", 4, (const char *)bp, length)) /* A REQuest */ ND_PRINT((ndo, " BEEP MSG")); If the length of bp is less than 4, the function will read beyond the buffer boundary, leading to a buffer overflow vulnerability. VULNERABLE: YES Please review and analyse the following code, determine whether it is vulnerable. Think step by step. Answer as "VULNERABLE: YES/NO". static int announce_self_create(uint8_t *buf, uint8_t *mac_addr) { uint32_t magic = EXPERIMENTAL_MAGIC; uint16_t proto = htons(ETH_P_EXPERIMENTAL); /* FIXME: should we send a different packet (arp/rarp/ping)? */ memset(buf, 0, 64); memset(buf, 0xff, 6); /* h_dst */ memcpy(buf + 6, mac_addr, 6); /* h_src */ memcpy(buf + 12, &proto, 2); /* h_proto */ memcpy(buf + 14, &magic, 4); /* magic */ return 64; /* len */ } The memset function is used to initially set the entire buffer to zero, ensuring there's no lingering junk data. The function doesn't take any untrusted inputs from the outside. The only input is mac_addr, and we assume the code calling this function ensures it's a valid MAC address. There is no apparently vulnerability in the function. VULNERABLE: NO Few-shot Conversation User Conversation Please review and analyse the following code, determine whether it is vulnerable. Think step by step. Answer as "VULNERABLE: YES/NO". [Code to be evaluated] Figure3: Few-shotconversationforbinaryclassification(whetherafunctionisvulnerable). Texts inthegreenboxarethequeries,andtextsintheyellowboxarethemodel’sanswers. 23Preprint Please review and analyse the following code, determine whether it has vulnerabilities in the following type list. [Possible Types of Vulnerabilities] Think step by step. Answer as [Your Explanation] TYPE: [Vulnerability TYPE]. beep_print(netdissect_options *ndo, const u_char *bp, u_int length) { if (l_strnstart("MSG", 4, (const char *)bp, length)) /* A REQuest */ ND_PRINT((ndo, " BEEP MSG")); else if (l_strnstart("RPY ", 4, (const char *)bp, length)) ND_PRINT((ndo, " BEEP RPY")); else if (l_strnstart("ERR ", 4, (const char *)bp, length)) ND_PRINT((ndo, " BEEP ERR")); else if (l_strnstart("ANS ", 4, (const char *)bp, length)) ND_PRINT((ndo, " BEEP ANS")); else if (l_strnstart("NUL ", 4, (const char *)bp, length)) ND_PRINT((ndo, " BEEP NUL")); else if (l_strnstart("SEQ ", 4, (const char *)bp, length)) ND_PRINT((ndo, " BEEP SEQ")); else if (l_strnstart("END", 4, (const char *)bp, length)) ND_PRINT((ndo, " BEEP END")); else ND_PRINT((ndo, " BEEP (payload or undecoded)")); } The function has a buffer overflow vulnerability due to the code: if (l_strnstart("MSG", 4, (const char *)bp, length)) /* A REQuest */ ND_PRINT((ndo, " BEEP MSG")); If the length of bp is less than 4, the function will read beyond the buffer boundary, leading to a buffer overflow vulnerability. TYPE: Buffer-Overflow Please review and analyse the following code, determine whether it has vulnerabilities in the following type list. [Possible Types of Vulnerabilities] Think step by step. Answer as [Your Explanation] TYPE: [Vulnerability TYPE]. static int announce_self_create(uint8_t *buf, uint8_t *mac_addr) { uint32_t magic = EXPERIMENTAL_MAGIC; uint16_t proto = htons(ETH_P_EXPERIMENTAL); /* FIXME: should we send a different packet (arp/rarp/ping)? */ memset(buf, 0, 64); memset(buf, 0xff, 6); /* h_dst */ memcpy(buf + 6, mac_addr, 6); /* h_src */ memcpy(buf + 12, &proto, 2); /* h_proto */ memcpy(buf + 14, &magic, 4); /* magic */ return 64; /* len */ } The memset function is used to initially set the entire buffer to zero, ensuring there's no lingering junk data. The function doesn't take any untrusted inputs from the outside. The only input is mac_addr, and we assume the code calling this function ensures it's a valid MAC address.There is no apparently vulnerability in the function. TYPE: No-Vulnerability Few-shot Conversation |
User Conversation Please review and analyse the following code, determine whether it has vulnerabilities in the following type list. [Possible Types of Vulnerabilities] Think step by step. Answer as [Your Explanation] TYPE: [Vulnerability TYPE]. [Code to be evaluated] Figure4: Few-shotconversationformulti-classclassification(determineafunction’svulnerability type). Textsinthegreenboxarethequeries,andtextsintheyellowboxarethemodel’sanswers. 24Preprint H ANALYSIS OF THE MODELS’ PERFORMANCE H.1 CONFIGUATION In Figures5, 6, and7, wepresent the confusion matrices forselect representativemodels. Within these matrices, each row enumerates the percentage of accurate or inaccurate classification of a specificvulnerabilitytypeagainsteachpredictedcategory. WedemonstratetheresultsontheCTF, Big-Vul,andMAGMAdatasets.AsfortheCTFandMAGMAdatasets,wemixthe‘single’function resultand‘all’functionsresult. Inthetable,therowlabeled‘NoVulns’indicatesthatthefunction isdevoidofvulnerabilities. Thecolumntitled‘Invalid’quantifiesthepercentagewhereinthemodel outputsresultsthatareincompatiblewiththeexpectedtemplateoutlinedinSection4.2. H.2 MODELS’SBEHAVIOUR UponanalyzingtheCTFdataset,asshowninFigure5,bothGPT-4andGPT-3.5predominantlyyield correctresponseswithGPT-4slightlyahead. However,GPT-3.5tendstoanswer‘NoVulnerability’ intheMAGMAandBig-Vuldatasets,respectively. GPT-4’sperformance,whilecautious,ismarred byatendencytoincorrectlyflagpotential‘NullPointerDereference’and‘MemoryLeak’issuesdue tothelackofadditionalcontext,resultinginhypersensitivityandthemisconception. TheLlama2models,despiteusingsimilardatasetsforpretrainingandthesameRLHFmethodsfor alignment,displaystarklydifferentbehaviorsacrossdatasets. The70Bconsistentlypredicts‘Buffer Overflow’ on the CTF dataset, while the 13B variant exhibits a random pattern in its responses, showing a marginal tendency towards ‘Use After Free’. This bias is more evident on the Big-Vul andMAGMAdatasets,whichisalsoreplicatedintheBaichuan2andInternlmmodels. Meanwhile,regardingtheVicunaandPlatypusmodels,whicharefine-tunedontheSupervisedFine- Tuning (SFT) datasets, there is a greater propensity to output ‘No Vulnerability’ when uncertain. Platypus, in particular, despite leveraging a substantial corpus of GPT-4 generated data, fails to reach the level of GPT-4, exhibiting behaviors that are not entirely consistent with GPT-4 but are noticeablyconservative,frequentlyreturning‘NoVulnerability’. H.3 CAPABILITIESONDIFFERENTVULNERABILITIES Inconductingaspecific analysis ofamodel’scapacityfordifferentkinds ofvulnerabilities, ithas beenobservedthatinCTF,themodelexhibitsoptimalperformanceinidentifying‘IntegerOverflow’ vulnerability, almost always recognizing them correctly. Simultaneously, the model is relatively proficientinhandling‘FormatString’vulnerability.Apartfromthemodelshavingasignificantbias, mostmodelsstrugglewithtypeconfusionduringCTFencounters. Thismaybeinpartbecausethe modelsarepre-trainedwithconsiderablyfewerexamplesoftypeconfusion,andalsobecausetype confusionoftenoccursinmultiplefunctions,whichplacehigherdemandsonthemodel’scapability. However, intheMAGMAandBig-Vuldatasets, Platypus2, whichutilizessignificanttrainingcor- pus derived from GPT-4, demonstrate certain detection capabilities for Buffer Overflow, possibly because examples of ‘Buffer Overflow’ are more common in the standard internet corpus. Nev- ertheless, these models do not perform as well with other types of vulnerabilities. Surprisingly, Llama270B,whichhasatendencytooutput‘BufferOverflow’,showsaremarkablygoodresultin detecting‘RaceConditions’intheBig-Vuldataset,presentinganintriguingphenomenon. H.4 MODELBIAS Inresponsetotheconsiderablebiasestowardcertainvulnerabilitiesobservedacrossvariousmodels, we can discern that these biases—manifest in models such as Llama 2 Chat, Baichuan2, and In- ternLM—arelikelyinterconnectedwiththeRLHF.DuringtheRLHFprocess,it’sconceivablethat models might develop a propensity for outputs that align more with human values or preferences ratherthanreflectingtheultimatetruth—especiallywhentheylackthecapabilitytoindependently evaluatetheflaws. Thistendencytoerrtowardshuman-likeprejudicewhenpresentingavulnerabil- ityisevident. Conversely,modelsderivedthroughSFTexhibitsignificantlyanotherbiases,tending tobecomparativelyconservative. 25Preprint reffuB wolfrevO tamroF gnirtS regetnI wolfrevO epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Format String Integer Overflow Type Confusion Use after Free No Vulns Predicted label lebal eurT 0.71 0.16 0.02 0.00 0.02 0.07 0.01 0.8 0.06 0.85 0.00 0.00 0.01 0.08 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.6 0.33 0.17 0.00 0.00 0.00 0.50 0.00 0.4 0.26 0.13 0.01 0.01 0.45 0.11 0.04 0.2 0.24 0.07 0.02 0.01 0.02 0.60 0.05 (a)GPT-4 reffuB wolfrevO tamroF gnirtS regetnI wolfrevO epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Format String Integer Overflow Type Confusion Use after Free |
No Vulns Predicted label lebal eurT 0.34 0.14 0.01 0.00 0.02 0.38 0.11 0.8 0.16 0.48 0.00 0.00 0.00 0.29 0.08 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.6 0.08 0.00 0.00 0.00 0.00 0.83 0.08 0.4 0.15 0.11 0.01 0.00 0.27 0.38 0.09 0.2 0.10 0.06 0.01 0.00 0.02 0.75 0.07 (b)GPT3.5 reffuB wolfrevO tamroF gnirtS regetnI wolfrevO epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Format String Integer Overflow Type Confusion Use after Free No Vulns Predicted label lebal eurT 0.20 0.10 0.01 0.02 0.03 0.55 0.09 0.8 0.23 0.28 0.01 0.03 0.01 0.40 0.05 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.6 0.05 0.00 0.00 0.00 0.00 0.82 0.12 0.4 0.09 0.09 0.03 0.04 0.12 0.50 0.13 0.2 0.06 0.03 0.01 0.01 0.02 0.80 0.07 (c)Platypus270B reffuB wolfrevO tamroF gnirtS regetnI wolfrevO epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Format String Integer Overflow Type Confusion Use after Free No Vulns Predicted label lebal eurT 0.71 0.07 0.05 0.04 0.06 0.06 0.00 0.8 0.57 0.19 0.05 0.08 0.06 0.05 0.00 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.6 0.56 0.07 0.02 0.09 0.13 0.13 0.00 0.4 0.52 0.05 0.03 0.03 0.30 0.07 0.00 0.2 0.51 0.16 0.05 0.08 0.15 0.05 0.00 (d)Llama-270B reffuB wolfrevO tamroF gnirtS regetnI wolfrevO epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Format String Integer Overflow Type Confusion Use after Free No Vulns Predicted label lebal eurT 0.11 0.21 0.02 0.20 0.26 0.16 0.02 0.8 0.08 0.33 0.02 0.16 0.25 0.15 0.02 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.6 0.07 0.24 0.00 0.24 0.27 0.16 0.02 0.4 0.10 0.19 0.02 0.16 0.35 0.16 0.01 0.2 0.08 0.22 0.03 0.14 0.22 0.30 0.01 (e)Llama213B reffuB wolfrevO tamroF gnirtS regetnI wolfrevO epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Format String Integer Overflow Type Confusion Use after Free No Vulns Predicted label lebal eurT 0.16 0.13 0.06 0.06 0.12 0.31 0.18 0.8 0.16 0.29 0.04 0.05 0.10 0.22 0.15 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.6 0.05 0.05 0.07 0.00 0.02 0.58 0.23 0.4 0.07 0.11 0.06 0.07 0.17 0.33 0.19 0.2 0.09 0.09 0.04 0.07 0.10 0.51 0.10 (f)Vicuna13B reffuB wolfrevO tamroF gnirtS regetnI wolfrevO epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Format String Integer Overflow Type Confusion Use after Free No Vulns Predicted label lebal eurT 0.17 0.08 0.05 0.18 0.10 0.34 0.07 0.8 0.15 0.16 0.02 0.17 0.10 0.33 0.07 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.6 0.16 0.04 0.00 0.18 0.11 0.38 0.13 0.4 0.10 0.04 0.06 0.18 0.17 0.34 0.09 0.2 0.15 0.08 0.04 0.19 0.07 0.41 0.07 (g)Baichuan213B reffuB wolfrevO tamroF gnirtS regetnI wolfrevO epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Format String Integer Overflow Type Confusion Use after Free No Vulns Predicted label lebal eurT 0.32 0.20 0.07 0.08 0.07 0.21 0.06 0.8 0.28 0.32 0.06 0.09 0.07 0.13 0.04 0.00 0.00 1.00 0.00 0.00 0.00 0.00 0.6 0.33 0.20 0.05 0.09 0.07 0.20 0.05 0.4 0.24 0.19 0.08 0.09 0.13 0.21 0.06 0.2 0.23 0.18 0.03 0.08 0.05 0.37 0.06 (h)Internlm20B Figure5: ConfusionmatrixofdifferentmodelsontheCTFdataset 26Preprint reffuB wolfrevO reporpmI lortnoC regetnI wolfrevO rorrE htaM yromeM kaeL retnioP lluN ecnerefereD ecaR noitidnoC ecruoseR noitsuahxE retfa esU eerF oN snluV dilavnI Buffer Overflow Improper Control Integer Overflow Math Error Memory Leak Null Pointer Dereference Race Condition Resource Exhaustion Use after Free No Vulns Predicted label lebal eurT 0.400.050.000.000.100.000.000.000.000.450.00 0.000.050.000.000.000.050.000.000.000.860.05 0.8 0.000.000.170.000.170.000.000.170.000.500.00 0.000.000.000.000.000.000.000.000.001.000.00 0.6 0.250.000.000.000.250.250.000.000.000.250.00 0.000.000.000.000.000.170.170.170.000.500.00 0.4 0.000.000.000.000.000.000.000.000.001.000.00 0.170.000.000.000.170.000.000.000.000.670.00 0.2 0.000.000.000.000.000.000.000.000.500.500.00 0.080.000.020.000.010.040.030.010.050.740.01 (a)GPT4 reffuB wolfrevO reporpmI lortnoC regetnI wolfrevO rorrE htaM yromeM kaeL retnioP lluN ecnerefereD ecaR noitidnoC ecruoseR noitsuahxE retfa esU eerF oN snluV dilavnI Buffer Overflow Improper Control Integer Overflow Math Error |
Memory Leak Null Pointer Dereference Race Condition Resource Exhaustion Use after Free No Vulns Predicted label lebal eurT 0.050.000.000.000.050.050.000.050.000.800.00 0.000.000.000.000.000.050.000.000.000.950.00 0.8 0.000.000.170.000.000.170.000.000.000.670.00 0.000.000.000.000.000.000.000.000.001.000.00 0.6 0.000.000.000.000.000.000.000.000.001.000.00 0.000.000.000.000.000.000.000.000.001.000.00 0.4 0.000.000.000.000.000.000.000.000.001.000.00 0.000.000.000.000.000.000.000.000.001.000.00 0.2 0.000.000.000.000.000.000.000.000.001.000.00 0.030.000.000.000.010.040.000.010.010.880.03 (b)GPT3.5 reffuB wolfrevO reporpmI lortnoC regetnI wolfrevO yromeM kaeL retnioP lluN ecnerefereD ecaR noitidnoC ecruoseR noitsuahxE retfa esU eerF oN snluV dilavnI Buffer Overflow Improper Control Integer Overflow Memory Leak Null Pointer Dereference Race Condition Resource Exhaustion Use after Free No Vulns Predicted label lebal eurT 0.130.000.010.030.000.000.000.000.73 0.11 0.8 0.010.000.020.000.000.010.000.000.920.03 0.000.000.070.000.000.000.000.000.670.27 0.6 0.000.100.000.000.000.000.000.000.800.10 0.000.050.000.000.000.000.000.000.900.05 0.4 0.000.000.000.000.000.000.000.000.900.10 0.030.000.000.000.000.030.000.000.770.17 0.2 0.000.000.000.000.000.000.000.000.850.15 0.020.010.000.020.010.000.010.010.80 0.11 (c)Platypus270B reffuB wolfrevO reporpmI lortnoC regetnI wolfrevO rorrE htaM yromeM kaeL retnioP lluN ecnerefereD ecaR noitidnoC ecruoseR noitsuahxE retfa esU eerF oN snluV dilavnI Buffer Overflow Improper Control Integer Overflow Math Error Memory Leak Null Pointer Dereference Race Condition Resource Exhaustion Use after Free No Vulns Predicted label lebal eurT 0.440.020.130.000.010.040.000.080.110.170.00 0.300.120.020.000.020.030.130.000.130.230.00 0.8 0.600.030.000.000.070.000.000.000.070.230.00 0.000.000.000.000.000.000.000.000.001.000.00 0.6 0.700.000.000.000.100.100.000.000.100.000.00 0.480.000.000.000.000.000.000.000.080.440.00 0.4 0.000.000.000.000.000.000.800.000.000.200.00 0.830.000.000.000.030.000.030.000.100.000.00 0.2 0.250.000.000.000.000.000.000.050.450.250.00 0.410.050.030.010.000.020.040.010.210.210.00 (d)Llama270B reffuB wolfrevO reporpmI lortnoC regetnI wolfrevO rorrE htaM yromeM kaeL retnioP lluN ecnerefereD ecaR noitidnoC ecruoseR noitsuahxE retfa esU eerF oN snluV dilavnI Buffer Overflow Improper Control Integer Overflow Math Error Memory Leak Null Pointer Dereference Race Condition Resource Exhaustion Use after Free No Vulns Predicted label lebal eurT 0.080.060.000.000.010.010.010.010.580.230.01 0.020.040.000.020.000.000.020.010.400.460.02 0.7 0.000.000.000.000.030.030.000.070.670.170.03 0.6 0.000.000.000.000.200.000.000.000.600.000.20 0.5 0.100.000.000.000.000.000.000.000.700.200.00 0.4 0.000.000.000.000.000.000.000.080.480.240.20 0.000.000.000.000.000.000.000.000.200.800.00 0.3 0.030.030.000.000.000.000.000.000.600.300.03 0.2 0.000.000.000.050.000.000.000.000.500.450.00 0.1 0.030.030.000.010.010.000.020.010.490.370.03 (e)Llama213B reffuB wolfrevO reporpmI lortnoC regetnI wolfrevO rorrE htaM yromeM kaeL retnioP lluN ecnerefereD ecaR noitidnoC ecruoseR noitsuahxE retfa esU eerF oN snluV dilavnI Buffer Overflow Improper Control Integer Overflow Math Error Memory Leak Null Pointer Dereference Race Condition Resource Exhaustion Use after Free No Vulns Predicted label lebal eurT 0.010.010.000.010.000.010.010.000.040.680.23 0.000.000.000.000.000.000.010.000.020.890.09 0.8 0.000.030.000.000.000.000.000.000.070.730.17 0.000.000.100.000.100.000.000.000.300.100.40 0.6 0.000.000.100.000.000.000.000.000.000.600.30 0.000.000.000.000.030.000.000.000.070.730.17 0.4 0.000.000.000.000.000.000.000.000.001.000.00 0.000.000.000.000.000.000.030.000.000.870.10 0.2 0.000.000.050.000.050.000.000.000.000.900.00 0.010.010.010.000.010.010.010.000.020.800.12 (f)Vicuna13B reffuB wolfrevO reporpmI lortnoC regetnI wolfrevO rorrE htaM yromeM kaeL retnioP lluN ecnerefereD ecaR noitidnoC ecruoseR noitsuahxE retfa esU eerF oN snluV dilavnI Buffer Overflow Improper Control Integer Overflow Math Error Memory Leak Null Pointer Dereference Race Condition Resource Exhaustion Use after Free No Vulns Predicted label lebal eurT 0.110.030.020.000.070.260.000.010.030.390.08 0.030.100.000.000.000.300.000.010.000.410.14 0.7 0.030.070.000.000.070.230.000.000.070.470.07 0.6 |
0.000.000.000.000.200.000.000.000.000.000.80 0.5 0.000.100.000.000.000.500.000.000.000.200.20 0.4 0.050.050.000.000.000.350.000.000.000.400.15 0.000.000.000.000.000.000.500.000.000.300.20 0.3 0.030.030.000.030.100.330.030.030.000.330.07 0.2 0.000.100.000.000.050.350.050.000.000.300.15 0.1 0.060.050.000.000.040.280.020.030.030.380.13 (g)Baichuan213B reffuB wolfrevO reporpmI lortnoC regetnI wolfrevO rorrE htaM yromeM kaeL retnioP lluN ecnerefereD ecaR noitidnoC ecruoseR noitsuahxE retfa esU eerF oN snluV dilavnI Buffer Overflow Improper Control Integer Overflow Math Error Memory Leak Null Pointer Dereference Race Condition Resource Exhaustion Use after Free No Vulns Predicted label lebal eurT 0.120.100.030.000.010.010.020.030.010.540.13 0.030.030.010.000.010.020.000.010.000.780.10 0.7 0.100.030.070.000.000.000.000.000.000.730.07 0.6 0.600.200.000.000.000.000.000.000.200.000.00 0.5 0.000.100.000.000.000.000.000.000.000.800.10 0.4 0.030.270.000.000.000.000.000.000.000.570.13 0.000.000.300.000.000.100.000.000.000.500.10 0.3 0.070.130.070.000.000.030.000.000.000.600.10 0.2 0.100.000.000.000.100.000.000.150.150.450.05 0.1 0.060.050.030.000.000.010.020.020.010.690.10 (h)Internlm20B Figure6: ConfusionmatrixofdifferentmodelsontheBig-Vuldataset 27Preprint reffuB wolfrevO regetnI wolfrevO rorrE htaM yromeM kaeL retnioP lluN ecnerefereD ecruoseR noitsuahxE epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Integer Overflow Math Error Memory Leak Null Pointer Dereference Resource Exhaustion Type Confusion Use after Free No Vulns Predicted label lebal eurT 0.23 0.11 0.030.020.090.050.010.010.440.03 0.130.140.020.060.060.040.000.000.540.00 0.5 0.000.000.080.250.250.170.000.000.250.00 0.4 0.000.000.000.290.140.000.000.000.570.00 0.020.050.000.150.090.030.010.030.590.02 0.3 0.000.500.000.000.000.250.000.000.250.00 0.2 0.050.000.000.250.150.000.000.000.550.00 0.000.000.000.200.200.000.000.000.600.00 0.1 0.140.080.030.070.100.030.010.010.510.02 (a)GPT-4 reffuB wolfrevO regetnI wolfrevO rorrE htaM yromeM kaeL retnioP lluN ecnerefereD ecruoseR noitsuahxE epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Integer Overflow Math Error Memory Leak Null Pointer Dereference Resource Exhaustion Type Confusion Use after Free No Vulns Predicted label lebal eurT 0.010.030.000.020.020.000.000.000.81 0.11 0.8 0.020.010.000.010.000.010.000.000.880.08 0.000.060.000.000.000.000.000.000.530.41 0.6 0.000.000.000.000.000.000.000.000.820.18 0.010.000.000.000.010.000.000.000.840.14 0.4 0.000.330.000.000.000.000.000.000.670.00 0.000.000.000.000.040.000.000.000.920.04 0.2 0.000.000.000.000.000.000.000.000.570.43 0.010.020.000.010.020.000.000.000.83 0.11 (b)GPT-3.5 reffuB wolfrevO regetnI wolfrevO rorrE htaM yromeM kaeL retnioP lluN ecnerefereD ecruoseR noitsuahxE epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Integer Overflow Math Error Memory Leak Null Pointer Dereference Resource Exhaustion Type Confusion Use after Free No Vulns Predicted label lebal eurT 0.03 0.11 0.000.020.000.010.000.000.670.17 0.8 0.010.040.000.040.000.010.000.010.790.09 0.7 0.000.000.000.200.000.000.000.000.800.00 0.6 0.000.000.000.000.000.000.000.000.850.15 0.5 0.000.010.000.020.000.010.000.010.760.18 0.4 0.000.000.000.000.000.070.000.000.530.40 0.3 0.000.000.000.010.000.020.000.000.870.10 0.2 0.000.000.000.030.000.000.000.000.850.12 0.1 0.020.080.000.020.000.010.000.000.710.15 (c)Platypus270B reffuB wolfrevO regetnI wolfrevO rorrE htaM yromeM kaeL retnioP lluN ecnerefereD ecruoseR noitsuahxE epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Integer Overflow Math Error Memory Leak Null Pointer Dereference Resource Exhaustion Type Confusion Use after Free No Vulns Predicted label lebal eurT 0.660.080.000.010.010.010.020.040.140.03 0.690.070.000.000.010.020.040.100.050.01 0.6 0.370.000.050.000.000.050.000.050.470.00 0.5 0.380.000.000.000.000.000.000.060.500.06 0.4 0.640.060.000.020.030.010.010.09 0.11 0.04 0.3 0.670.000.000.000.000.000.000.000.330.00 0.290.020.000.020.130.000.190.270.100.00 0.2 0.380.000.000.000.000.000.050.450.120.00 0.1 0.630.070.000.010.020.010.030.080.140.02 (d)Llama270B reffuB wolfrevO regetnI wolfrevO rorrE |
htaM yromeM kaeL retnioP lluN ecnerefereD ecruoseR noitsuahxE epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Integer Overflow Math Error Memory Leak Null Pointer Dereference Resource Exhaustion Type Confusion Use after Free No Vulns Predicted label lebal eurT 0.080.020.020.040.010.010.060.560.150.05 0.6 0.060.040.000.030.000.000.040.580.190.06 0.5 0.000.150.050.050.000.050.000.450.000.25 0.4 0.030.060.000.030.000.000.030.540.170.14 0.030.040.010.030.010.010.060.590.120.09 0.3 0.080.080.000.080.000.000.000.620.150.00 0.2 0.030.010.020.020.000.020.030.440.380.04 0.030.000.00 0.11 0.000.000.000.570.300.00 0.1 0.070.020.010.040.010.010.060.570.160.05 (e)Llama213B reffuB wolfrevO regetnI wolfrevO rorrE htaM yromeM kaeL retnioP lluN ecnerefereD ecruoseR noitsuahxE epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Integer Overflow Math Error Memory Leak Null Pointer Dereference Resource Exhaustion Type Confusion Use after Free No Vulns Predicted label lebal eurT 0.010.080.000.010.010.010.03 0.11 0.430.31 0.7 0.000.040.000.020.010.030.030.090.500.28 0.6 0.000.030.000.010.000.010.070.100.250.52 0.5 0.000.020.000.050.000.020.020.200.460.24 0.4 0.000.010.010.010.010.010.010.100.420.42 0.000.000.000.000.000.050.050.210.420.26 0.3 0.000.000.000.000.020.010.060.020.740.16 0.2 0.000.030.000.000.000.030.090.060.620.18 0.1 0.000.070.000.010.010.010.020.090.490.30 (f)Vicuna13B reffuB wolfrevO regetnI wolfrevO rorrE htaM yromeM kaeL retnioP lluN ecnerefereD ecruoseR noitsuahxE epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Integer Overflow Math Error Memory Leak Null Pointer Dereference Resource Exhaustion Type Confusion Use after Free No Vulns Predicted label lebal eurT 0.270.080.000.050.320.000.000.030.150.09 0.260.080.000.130.230.000.000.030.150.12 0.8 0.000.000.000.001.000.000.000.000.000.00 0.000.000.000.000.410.000.000.000.310.28 0.6 0.120.030.000.090.440.000.000.030.150.13 0.460.080.000.080.230.000.000.000.080.08 0.4 0.060.040.010.010.490.010.060.030.200.09 0.2 0.130.000.000.050.530.000.000.160.130.00 0.200.070.000.040.370.000.010.030.160.12 (g)Baichuan213B reffuB wolfrevO regetnI wolfrevO rorrE htaM yromeM kaeL retnioP lluN ecnerefereD ecruoseR noitsuahxE epyT noisufnoC retfa esU eerF oN snluV dilavnI Buffer Overflow Integer Overflow Math Error Memory Leak Null Pointer Dereference Resource Exhaustion Type Confusion Use after Free No Vulns Predicted label lebal eurT 0.230.140.000.010.020.030.010.020.400.14 0.6 0.140.080.000.010.020.040.010.020.510.18 0.5 0.150.230.000.000.000.120.000.000.270.23 0.11 0.040.000.090.000.020.000.040.510.20 0.4 0.200.070.000.010.020.030.010.020.410.23 0.3 0.200.250.000.000.000.100.050.050.200.15 0.050.030.000.010.020.030.030.040.660.12 0.2 0.11 0.000.000.060.030.000.030.030.540.20 0.1 0.160.090.000.010.020.020.020.020.490.16 (h)Internlm20B Figure7: ConfusionmatrixofdifferentmodelsontheMAGMAdataset 28Preprint I FULL EVALUATION RESULT WeshowthefullevaluationresultinTable12andTable13. Table12: FullevaluationresultsonCTFdataset BinaryClassification Multi-classClassification Size Model Type F1 Precision Recall F1 Precision Recall 2Shot 0.281 0.177 0.672 0.103 0.182 0.237 6B ChatGLM2 5Shot 0.255 0.168 0.529 0.081 0.181 0.135 2Shot 0.291 0.171 0.966 0.104 0.170 0.179 Llama-2 5Shot 0.288 0.171 0.917 0.099 0.172 0.190 2Shot 0.310 0.188 0.882 0.135 0.195 0.302 7B Vicuna 5Shot 0.325 0.198 0.902 0.128 0.203 0.282 2Shot 0.331 0.234 0.567 0.164 0.212 0.301 Vicuna-16k 5Shot 0.335 0.232 0.601 0.147 0.208 0.215 2Shot 0.291 0.172 0.948 0.133 0.184 0.279 Llama-2 5Shot 0.302 0.181 0.916 0.103 0.187 0.192 2Shot 0.294 0.178 0.845 0.189 0.202 0.198 Baichuan2 5Shot 0.289 0.170 0.972 0.068 0.186 0.217 13B 2Shot 0.336 0.246 0.530 0.188 0.203 0.208 Vicuna 5Shot 0.359 0.257 0.592 0.145 0.210 0.173 2Shot 0.329 0.236 0.545 0.169 0.195 0.226 Vicuna-16k 5Shot 0.336 0.238 0.571 0.167 0.202 0.211 2Shot 0.304 0.190 0.762 0.158 0.188 0.206 20B InternLM 5Shot 0.325 0.206 0.768 0.142 0.202 0.242 2Shot 0.333 0.213 0.768 0.168 0.205 0.267 33B Vicuna 5Shot / / / / / / 2Shot 0.319 0.224 0.553 0.148 0.207 0.226 34B CodeLlama 5Shot 0.340 0.239 0.590 0.120 0.214 0.287 2Shot 0.318 0.208 0.675 0.193 0.206 0.262 |
40B Falcon 5Shot / / / / / / 2Shot 0.295 0.173 0.993 0.100 0.197 0.340 Llama-2 5Shot 0.309 0.183 0.989 0.098 0.196 0.321 70B 2Shot 0.352 0.289 0.455 0.234 0.246 0.236 Platypus 5Shot 0.364 0.288 0.504 0.228 0.248 0.221 2Shot 0.440 0.327 0.689 0.305 0.307 0.381 GPT-3.5 5Shot 0.429 0.340 0.590 0.296 0.305 0.308 / 2Shot 0.466 0.319 0.861 0.296 0.300 0.439 GPT-4 5Shot 0.483 0.356 0.750 0.379 0.357 0.489 29Preprint Table13: Fullevaluationresultsonthereal-worlddataset BinaryClassification Multi-classClassification Size Model Type F1 Precision Recall F1 Precision Recall 2Shot 0.344 0.455 0.301 0.063 0.125 0.060 6B ChatGLM2-6b 5Shot 0.274 0.456 0.218 0.065 0.134 0.073 2Shot 0.588 0.463 0.891 0.106 0.150 0.124 Llama-2 5Shot 0.566 0.458 0.823 0.115 0.156 0.139 2Shot 0.413 0.478 0.382 0.111 0.171 0.113 7B Vicuna 5Shot 0.470 0.465 0.495 0.138 0.201 0.129 2Shot 0.144 0.454 0.088 0.109 0.148 0.101 Vicuna-16k 5Shot 0.165 0.457 0.101 0.095 0.141 0.079 2Shot 0.527 0.453 0.706 0.083 0.140 0.110 Llama-2 5Shot 0.555 0.475 0.725 0.089 0.174 0.150 2Shot 0.525 0.467 0.659 0.124 0.157 0.130 Baichuan2 5Shot 0.588 0.450 0.936 0.101 0.188 0.168 13B 2Shot 0.114 0.476 0.067 0.107 0.143 0.116 Vicuna 5Shot 0.133 0.510 0.080 0.093 0.189 0.078 2Shot 0.063 0.440 0.035 0.090 0.124 0.094 Vicuna-16k 5Shot 0.073 0.360 0.042 0.093 0.143 0.089 2Shot 0.419 0.496 0.392 0.135 0.151 0.136 20B InternLM 5Shot 0.453 0.491 0.455 0.132 0.154 0.127 2Shot 0.411 0.487 0.406 0.159 0.191 0.155 33B Vicuna 5Shot / / / / / / 2Shot 0.297 0.487 0.218 0.078 0.150 0.096 34B CodeLlama 5Shot 0.243 0.438 0.178 0.073 0.140 0.119 2Shot 0.305 0.465 0.251 0.155 0.183 0.154 40B Falcon 5Shot / / / / / / 2Shot 0.528 0.468 0.708 0.126 0.175 0.189 Llama-2 5Shot 0.581 0.472 0.837 0.118 0.209 0.237 70B 2Shot 0.166 0.562 0.102 0.119 0.138 0.131 Platypus2 5Shot 0.212 0.432 0.149 0.135 0.194 0.130 2Shot 0.128 0.490 0.076 0.088 0.111 0.094 GPT-3.5 5Shot 0.131 0.437 0.079 0.103 0.113 0.123 / 2Shot 0.187 0.501 0.120 0.159 0.187 0.207 GPT-4 5Shot 0.175 0.509 0.111 0.136 0.157 0.141 30Preprint J ABLATION STUDY ON PROVIDED INFORMATION WeconductsomeablationstudiesonhowdifferentinformationprovidedtoLLMsisaffectingtheir performance.WecomparetheimpactofmanuallyreverseddecompiledcodeintheCTFdataset,the impactofonlydecompiledcodeprovidedintheMAGMAdataset,andtheimpactoflargercontext provided instead of only a single function provided in both the CTF and MAGMA datasets. The result of the CTF dataset is shown in Figure 8 and the result of the MAGMA dataset is shown in Figure9. 0.35 0.30 0.25 0.20 0.15 0.10 0.05 0.00 chatg -l 6m b2 Llam -a 7- b2 vicu -n 7a b vi -c 7u bn -a 16k Lla -m 1a- 32 b Baich -u 1a 3n b2 vic -u 1n 3a b vi -c 1u 3n b-a 16 ik nter -n 2l 0m b vic -u 3n 3a b CodeL -l 3a 4m ba falc -o 4n 0b Lla -m 7a- 02 b Platy -p 7u 0s b2 GPT-3.5 GPT-4 1f 0.35 0.30 0.25 0.20 0.15 0.10 0.05 0.00 chatg -l 6m b2 Llam -a 7- b2 vicu -n 7a b vi -c 7u bn -a 16k Lla -m 1a- 32 b Baich -u 1a 3n b2 vic -u 1n 3a b vi -c 1u 3n b-a 16 ik nter -n 2l 0m b vic -u 3n 3a b CodeL -l 3a 4m ba falc -o 4n 0b Lla -m 7a- 02 b Platy -p 7u 0s b2 GPT-3.5 GPT-4 1f CTF-IDA CTF-Reversed CTF-Single-Function CTF-All-Function Figure 8: CTF’s F1 score averaged over 2 types of shots (2 shots and 5 shots) and 2 types of classification (Binary classification and multi-class classification). Upper: Comparison be- tween raw decompiled code from IDA (CTF-IDA) and manually reversed decompiled code (CTF-Reversed). Lower: Comparison between only a single function is provided to the LLM (CTF-Single-Function) and all functions in a binary are provided to the LLM (CTF-All-Function). 31Preprint 0.14 0.12 0.10 0.08 0.06 0.04 0.02 0.00 chatg -l 6m b2 Llam -a 7- b2 vicu -n 7a b vi -c 7u bn -a 16k Lla -m 1a- 32 b Baich -u 1a 3n b2 vic -u 1n 3a b vi -c 1u 3n b-a 16 ik nter -n 2l 0m b vic -u 3n 3a b CodeL -l 3a 4m ba falc -o 4n 0b Lla -m 7a- 02 b Platy -p 7u 0s b2 GPT-3.5 GPT-4 1f 0.14 0.12 0.10 0.08 0.06 0.04 0.02 0.00 chatg -l 6m b2 Llam -a 7- b2 vicu -n 7a b vi -c 7u bn -a 16k Lla -m 1a- 32 b Baich -u 1a 3n b2 vic -u 1n 3a b vi -c 1u 3n b-a 16 ik nter -n 2l 0m b vic -u 3n 3a b CodeL -l 3a 4m ba falc -o 4n 0b Lla -m 7a- 02 b Platy -p 7u 0s b2 GPT-3.5 GPT-4 1f MAGMA-Binary MAGMA-Source MAGMA-Single-Function MAGMA-All-Function |
Figure 9: MAGMA’s F1 score averaged over 2 types of shots (2 shots and 5 shots). Upper: Comparison between raw decompiled code (MAGMA-Binary) and source code (MAGMA-Source). Lower: Comparison between only a single function is provided to the LLM (MAGMA-Single-Function) and all functions related to the vulnerability are provided to the LLM(MAGMA-All-Function). 32Preprint K BAD CASES OF DECOMPILED CODE In this section, we show two common examples where IDA fails to recover the semantics of the original source code by decompiled code. As shown in Figure 10, line v0 = alloca(...); indecompiledcodeactuallycorrespondstosub rbp, raxinassemblycode,whichmeansthat the decompiled code translates the assembly code incorrectly, ignoring that it is a dynamic stack allocation. And in Figure 11, IDA 7.6 cannot understand the switch statement generated by a newer compiler, resulting completely wrong decompiled code (A bare jmp rax). This requires extramanualleffecttofixinanewerIDA. unsigned __int64 calc_sum() push rbp { mov rbp, rsp void *v0; // rsp push rbx __int64 *v1; // rbx sub rsp, 28h __int64 i; // [rsp+0h] [rbp-30h] BYREF mov rax, fs:28h __int64 v4; // [rsp+8h] [rbp-28h] mov [rbp+var_18], rax __int64 *v5; // [rsp+10h] [rbp-20h] xor eax, eax unsigned __int64 v6; // [rsp+18h] [rbp-18h] mov rax, cs:n shl rax, 2 v6 = __readfsqword(0x28u); lea rdx, [rax+0Fh] v0 = alloca(16 * ((4 * n + 30) / 0x10uLL)); mov eax, 10h v5 = &i; sub rax, 1 for ( i = 0LL; i >= 0 && i < n; ++i ) add rax, rdx { mov ecx, 10h printf("num[%ld] = ", i + 1); mov edx, 0 v1 = &v5[i]; div rcx *v1 = read_long(); imul rax, 10h } sub rsp, rax v4 = 0LL; mov rax, rsp for ( i = 0LL; i < n; ++i ) add rax, 0Fh v4 += v5[i]; shr rax, 4 printf("SUM = %ld\n", v4); shl rax, 4 return __readfsqword(0x28u) ^ v6; mov [rbp+var_20], rax } ...... Decompile code Assembly code Figure 10: Decompiled code and assembly code of function calc sum in CTF challegnge zer0pts 2020 protrude. __int64 __fastcall lua_gc(__int64 a1, unsigned int a2) endbr64 { push rbp if ( a2 <= 0xB ) push rbx __asm { jmp rax } sub rsp, 18h return lua_gc_cold(); mov rbx, [rdi+18h] } IDA 7.6 cmp esi, 0Bh ja lua_gc_cold lea rcx, unk_2A320 mov eax, esi __int64 __fastcall lua_gc(lua_State_0 mov rbp, rdi *L, unsigned int a2, int a3) movsxd rax, dword ptr { [rcx+rax*4] global_State_0 *l_G; // rbx add rax, rcx __int64 result; // rax db 3Eh l_mem v5; // rsi jmp rax _BOOL4 v6; // [rsp+Ch] [rbp-1Ch] mov esi, 2 call luaC_changemode l_G = L->l_G; xor eax, eax switch ( a2 ) add rsp, 18h { pop rbx case 0u: pop rbp l_G->gcrunning = 0; retn result = a2; ...... break; ... IDA 8.3 Decompile code Assembly code Figure11:DecompiledcodeandassemblycodeoffunctioncontainingswitchinIDAPro7.6and 8.3. 33 |
2311.16169 Understanding the Effectiveness of Large Language Models in Detecting Security Vulnerabilities AvishreeKhare∗ SaikatDutta∗ ZiyangLi UniversityofPennsylvania CornellUniversity UniversityofPennsylvania Philadelphia,USA Ithaca,USA Philadelphia,USA akhare@seas.upenn.edu saikatd@cornell.edu liby99@seas.upenn.edu AlaiaSolko-Breslin RajeevAlur MayurNaik UniversityofPennsylvania UniversityofPennsylvania UniversityofPennsylvania Philadelphia,USA Philadelphia,USA Philadelphia,USA alaia@seas.upenn.edu alur@seas.upenn.edu mhnaik@seas.upenn.edu ABSTRACT other.Hence,weneedmajortechnicaladvancestoeffectivelydetect Securityvulnerabilitiesinmodernsoftwareareprevalentandharm- securityvulnerabilitiesinsuchcomplexsoftware. ful.Whileautomatedvulnerabilitydetectiontoolshavemadepromis- Traditionaltechniquesforautomatedvulnerabilitydetection, ingprogress,theirscalabilityandapplicabilityremainchallenging. suchasfuzzers[35],andstaticanalyzerssuchasCodeQL[1]and Recently,LargeLanguageModels(LLMs),suchasGPT-4andCodeL- Semgrep[41]havemadepromisingstrides.Forexample,inthe lama,havedemonstratedremarkableperformanceoncode-related lasttwoyears,researchersfoundover300securityvulnerabilities tasks.However,itisunknownwhethersuchLLMscandocomplex through custom CodeQL queries [29, 42]. However, these tech- reasoningovercode.Inthiswork,weexplorewhetherpre-trained niquesfacechallengesinscalabilityandapplicability.Fuzzingdoes LLMscandetectsecurityvulnerabilitiesandaddressthelimitations notscaletolargeapplications,andfuzzingpartsofapplications ofexistingtools.Weevaluatetheeffectivenessofpre-trainedLLMs, requiresmanuallycraftingfuzzdrivers.Moreover,itishardtouse intermsofperformance,explainability,androbustness,onaset onlargecriticalprogramswithcomplexinputs,suchasnetwork offivediversesecuritybenchmarksspanningtwolanguages,Java servers,GUI-basedprograms,embeddedfirmware,bootloaders, andC/C++,andcoveringbothsyntheticandreal-worldprojects. andsystemservices.Ontheotherhand,staticanalysisreliesheav- Overall,allLLMsshowmodesteffectivenessinend-to-endrea- ilyonmanualAPIspecifications,andskillfullycraftedheuristics soningaboutvulnerabilities,obtaininganaverageof60%accuracy tobalanceprecisionandscalability.Inlightofthesechallenges, acrossalldatasets.However,weobservethatLLMsshowpromising GitHubpaysabountyofover7KUSDforeachCodeQLquerythat abilitiesatperformingpartsoftheanalysiscorrectly,suchasiden- canfindnewcriticalsecuritybugs[22]. tifyingvulnerability-relatedspecifications(e.g.,sourcesandsinks) LargeLanguageModels(LLMs),includingpre-trainedmodels andleveragingnaturallanguageinformationtounderstandcode suchasGPT-4andCodeLlama,havemaderemarkableadvancesin behavior(e.g.,tocheckifcodeissanitized).Further,LLMsarerela- code-relatedtasksinarelativelyshortperiod.Suchtasksinclude tivelymuchbetteratdetectingsimplervulnerabilitiesthattypically codecompletion[8],automatedprogramrepair[25,48,49],test onlyneedlocalreasoning(e.g.,IntegerOverflowsandNULLpointer generation[16,28],codeevolution[51],andfaultlocalization[50]. dereference).Wefindthatadvancedpromptingstrategiesthatin- TheseresultsclearlyshowthepromiseofLLMs,openingupanew volvestep-by-stepanalysissignificantlyimproveperformanceof directionforexploringadvancedtechniques.Hence,anintriguing LLMsonreal-worlddatasets(improvingF1scorebyupto0.25on questioniswhetherthestate-of-the-artpre-trainedLLMscanalso average).Finally,weshareourinsightsandrecommendationsfor beusedfordetectingsecurityvulnerabilitiesincode. futureworkonleveragingLLMsforvulnerabilitydetection. TodevelopLLM-basedsolutions,animportantfirststepisto systematically evaluate the ability of LLMs in detecting known vulnerabilities.Thisisespeciallyimportantinlightoftherapidly 1 INTRODUCTION evolvinglandscapeofLLMsinthreeaspects:scale,diversity,and Securityvulnerabilitiesafflictsoftwaredespitedecadesofadvances applicability.First,scalingthesemodelstoeverlargernumbersof inprogramminglanguages,programanalysistools,andsoftware parametershasledtosignificantimprovementsoverpreviousgen- engineeringpractices.Evenwell-testedandcriticalsoftwaresuch erationsintheircapabilities—aphenomenontermedasemergent asOpenSSL,awidelyusedlibraryforapplicationsthatprovide behavior[46].Forinstance,GPT-4,whichispresumablyordersof securecommunications,containstrivialbufferoverflowvulnera- magnitudelargerthanits175-billionpredecessorGPT-3.5,signifi- bilities,e.g.,[13]and[14].ArecentstudybyMicrosoftshowed cantlyoutperformsGPT-3.5onawiderangeofcode-understanding thatmorethan70%ofthevulnerabilitiesarestillcausedbywell- tasks[5].Second,thediversityofLLMshasgrownrapidlyand understoodmemorysafetyissues[36].Thesizeandcomplexity nowincludesnotonlyproprietarygeneral-purposeonessuchas ofmodernsoftwaresystemsaregrowingquickly,encompassing GPT-4butalsoopen-sourcedLLMssuchasCodeLlama[39]and |
numerousprograms,libraries,andmodulesthatinteractwitheach StarCoder[31]thatarespecializedforcode.Finally,thereason- ingcapabilitiesofLLMs(andhencetheirapplicability)mayvary significantlyacrossdifferentprogramminglanguages. Allthese *Equalcontribution 4202 nuJ 9 ]RC.sc[ 2v96161.1132:viXraAvishreeKhare∗,SaikatDutta∗,ZiyangLi,AlaiaSolko-Breslin,RajeevAlur,andMayurNaik factorsopenupalargeexplorationspaceforapplyingLLMstothe (1) LLMsacrossallsizeshavemodestvulnerabilitydetectionability, challengingtaskofvulnerabilitydetection. withanaverageaccuracyofonlyabout60%acrossalldatasets. OurWork.Weconductthefirstcomprehensivestudyofusing (2) UsingpromptsthatfocusondetectingspecificCWEsimproves LLMsfordetectingsecurityvulnerabilities.Westudyfivestate-of- theperformanceofLLMs.Thedataflowanalysis-basedprompt the-artLLMs,includingproprietarymodelssuchasGPT-3.5and furtherimprovesresultsforlargerLLMsbyallowingstep-by- GPT-4,andopen-sourcemodelslikeCodeLlama.Weevaluatethese stepreasoning.Theimprovementissignificantforreal-world modelsonfivepopularsecurityvulnerabilitydatasets. datasets(anincreaseofupto0.25F1score).However,weob- WedesignasetofthreepromptingstrategiesforLLMstoelicit servethatLLMsofteninfertherelevantsource,sink,andsani- increasinglysophisticatedformsofreasoningandexplanations. tizerscorrectlybutfailinend-to-endlogicalreasoning. OursimplestpromptingstrategiesincludetheBasicprompt,which (3) LLMsarerelativelybetteratdetectingvulnerabilitiesthatre- simplyasksanLLMtocheckforanyvulnerabilitiesinthegiven quirelocalreasoningwithoutcomplexcodestructures(such codeandtheCWEspecificprompt,whichaskstheLLMtocheckfora asIntegerOverflowandNULLPointerDereference).However, specificclassofvulnerabilitiesorCWEs(suchasBufferOverflows). theystruggletodetectmorecomplexvulnerabilitiesinreal- Asignificantlimitationofstaticvulnerabilitydetectiontoolsis worldprograms. therequirementofbuildingthetargetprojecttoenablethemtofind (4) LargerLLMslikeGPT-4showmilddegradationinperformance bugs.Further,theyalsorequireconcretespecificationsofAPIs(e.g., whensubjecttoadversarialattacksoncode(averagedrop8.6%). sources,sanitizers,andsinks).Incontrast,LLMshaveaninternal Incontrast,smallerLLMs,likeCodeLlama-7B,showasignifi- modelofAPIsalreadyseenduringthepre-trainingphaseandhence cantdropinperformance(averagedrop39%)inthepresence theydonotrequirecompiledorcompletecodebasestorun.Inspired ofperturbations. bythisinsight,weadditionallydesignanewpromptingstrategy, (5) LLMslagbehindstate-of-the-artstaticanalysistoolslikeCod- calledDataflowanalysis-basedprompt,thatsimulatesasource-sink- eQLinaverageaccuracy(by15%)andprecision(by26%)across sanitizerbaseddataflowanalysisonthetargetcodesnippetbefore datasets.However,LLMslikeGPT-4showsomepromisingpar- predictingifitisvulnerable.TheDataflowanalysis-basedprompt, tialabilities,e.g.,byidentifyingsource/sinkspecificationsor similartoaclassicaldataflow-basedstaticanalyzeraskstheLLMto doingcontext-basedreasoningusingnaturallanguageinfor- firstinferthesources,sinks,andsanitizersinthecodesnippetand mation,whichcanpotentiallybeusefulforstaticanalysis. checkforanyunsanitizeddataflowsbetweenthesourceandsink. (6) WeobtainmixedresultscomparingLLMstopriordeep-learning- Thisstyleofpromptingissimilartochain-of-thoughtreasoning basedtools:whilethedeep-learning-basedtoolsonlyslightly andsimulatesstep-by-stepreasoningbyLLMs. outperformLLMsonsyntheticdatasets(by0.07inF1score),all AkeychallengeinevaluatingLLMsisdataleakage:LLMsmay approachesstruggleequallyonreal-worlddatasetsoftenpro- perform well on a dataset because such samples were already ducingcloseto50%accuracyonly(sameasarandombaseline). present in their pre-training data. Hence, we implement three Contributions.Tosummarize,wemakethefollowingcontribu- semantics-preserving adversarial attacks for code and evaluate tionsinthispaper: whethertheysignificantlyimpacttheperformanceofLLMs. • EmpiricalStudy:Weconductthefirstlargecomprehensive ResearchQuestions.Westudythefollowingresearchquestions: studyonhowstate-of-the-artLLMsperformindetectingsecu- rityvulnerabilitiesacrossfivedatasetsandtwoprogramming • RQ1:Howdodifferentpre-trainedLLMsperformindetecting languages(C/C++andJava). securityvulnerabilitiesacrossdifferentlanguagesanddatasets? • PromptingStrategies:Wedesignthreepromptingstrategiesfor (Section§3.1) LLMs,inspiredbytherecentadvancesinnaturallanguagepro- • RQ2:Howdodifferentpromptingstrategiesaffecttheperfor- cessingandtraditionalprogramanalysistechniques,thatelicit manceofLLMs?(Section3.2) differentreasoningstylesfromLLMsandalsoprovidehuman- • RQ3:HowdoestheperformanceofLLMsvaryacrossdifferent readableexplanationsfortheirpredictions. vulnerabilityclasses?(Section3.3) • RobustnessofLLMs:WestudyhowtheperformanceofLLMs • RQ4:CanadversarialattacksimpacttheperformanceofLLMs isimpactedbysemantics-preservingadversarialattacksoncode. |
indetectingvulnerabilities?(Section§3.4) • Comparisonwithothervulnerabilitydetectiontools:We • RQ5:HowdoLLMscomparetostate-of-the-artstaticanalysis contrasttheperformanceofLLMsagainstpopularstaticanalysis tools?(Section3.5) anddeep-learning-basedvulnerabilitydetectiontools. • RQ6:HowdoLLMscomparetostate-of-the-artdeep-learning- • Insights:WeperformarigorousmanualanalysisofLLMs’pre- basedtools?(Section3.6) dictions and highlight vulnerability patterns that impact the performanceofthesemodels. ResultsandFindings.Wechoosefivestate-of-the-artLLMs:GPT- 4,GPT-3.5,CodeLlama-34B,CodeLlama-13B,andCodeLlama-7B 2 APPROACH andevaluatetheireffectivenessindetectingsecurityvulnerabilities 2.1 Datasets across five vulnerability datasets: OWASP [38], Juliet Java [27], Juliet C/C++ [26], CVEFixes Java [2], and CVEFixes C/C++ [2]. Forourstudy,weselectfivediversevulnerabilitydatasetsfromtwo These datasets cover 5000 vulnerable and non-vulnerable code languages:C++andJava.Foreachlanguage,wealsoselectboth samples,across25CWEs.Ourfindingsaresummarizedasfollows: syntheticandreal-worldbenchmarks.Table1presentsthedetailsUnderstandingtheEffectivenessofLargeLanguageModelsinDetectingSecurityVulnerabilities ofeachdataset,suchasthedatasetsize,programminglanguage, 2.2 Metrics numberofvulnerableandnon-vulnerablesamples,andthenumber Toevaluatetheeffectivenessofeachtool,weusethestandardmet- ofuniqueCWEs.Wedescribeeachdatasetnext. ricsusedforclassificationproblems.Inthiswork,atruepositive representsacasewhenatooldetectsatruevulnerability.Incon- Table1:DetailsofSelectedDatasets trast,afalsepositiveiswhenthetooldetectsavulnerabilitythat isnotexploitable.Trueandfalsenegativesaredefinedanalogously. Dataset Language Size Vul/Non-Vul CWEs Wedescribeeachmetricinthecontextofvulnerabilitydetection. OWASP[38] Java 2740 1415/1325 11 SARDJuliet(C/C++)[26] C/C++ 81,280 40,640/40,640 118 • Accuracy:Accuracymeasureshowoftenthetoolmakesacor- SARDJuliet(Java)[27] Java 35,940 17,970/17,970 112 rectprediction,i.e.,whetheracodesnippetisvulnerableornot. CVEFixes[2] C/C++ 19,576 8223/11,347 131 TruePositives+TrueNegatives CVEFixes[2] Java 3926 1461/2465 68 Itiscomputedas: #Samples . • Precision:Precisionrepresentswhatproportionofcasesthata tooldetectsasavulnerabilityisacorrectdetection.Itiscomputed 2.1.1 OWASP (Synthetic). The Open Web Application Security as: TruePositives . TruePositives+FalsePositives Project(OWASP)benchmark[38]isaJavatestsuitedesignedto • Recall:Recallrepresentswhatproportionofvulnerabilitiesthe evaluate the effectiveness of vulnerability detection tools. Each toolcandetect.Itiscomputedas: TruePositives . TruePositives+FalseNegatives testrepresentsasyntheticallydesignedcodesnippetcontaininga • F1score:TheF1scoreisaharmonicmeanofprecisionand securityvulnerability.OWASPcontains2740testcasesrepresenting recall.Itiscomputedas: 11uniqueclassesofsecurityvulnerabilities(alsoknownasCommon 2∗ Precision*Recall. WeaknessEnumerationorCWE). Precision+Recall 2.3 LargeLanguageModels 2.1.2 Juliet(Synthetic). Juliet[4]isawidely-usedvulnerability datasetdevelopedbyNIST.Julietcomprisesthousandsofsyntheti- Wechoosethemostpopularstate-of-the-artpre-trainedLargeLan- callygeneratedtestcasesrepresentingvariousknownvulnerability guageModels(LLMs)forourevaluation.WechooseOpenAImod- patterns.Itcontains81,280C/C++programscovering118unique els:GPT-4(gpt-4)andGPT-3.5(gpt-3.5-turbo).GPT-3.5allows CWEs,and35,940Javaprogramscovering112uniqueCWEs.For upto4096inputtokenswhileGPT-4(whichpresumablyismuch ourpaper,weusethelatestversion,Juliet1.3[3,26,27]. larger)allowsupto8192tokensintheinputprompt.Sincethese modelsareclosed-source,wealsoevaluateCodeLlamamodels[39], 2.1.3 CVEFixes(Real-World). Bhandarietal.[2]curatedadataset, whichwererecentlyopen-sourcedbyMeta.Weselectthreever- knownasCVEFixes,from5365CommonVulnerabilitiesandEx- sionsofthesemodels:CodeLlama-7B(CL-7B),CodeLlama-13B(CL- posures(CVE)recordsfromtheNationalVulnerabilityDatabase 13B),andCodeLlama-34B(CL-34B),containing7billion,13billion, (NVD).FromeachCVE,theyautomaticallyextractedthevulnera- and34billionparameters,respectively.WeusetheHuggingFace bleandpatchedversionsofeachmethodinopen-sourceprojects, APIs[24]toaccessCodeLlamamodels.Weusethe“Instruct”ver- alongwithextensivemeta-datasuchasthecorrespondingCWEs, sionofCodeLlamamodels—thesemodelsarefine-tunedtofollow projectinformation,andcommitdata.CVEFixesconsistsofmeth- userinstructionsandhencecanbetteradapttospecificreasoning odsextractedfrom5495vulnerability-fixingcommits.Thesemeth- tasks.Table3presentsLLMdetails. odsspanmultipleprogramminglanguagessuchasC/C++,Java, Python,andJavaScript.Forourwork,weextractedallC/C++and Table3:DetailsofLLMs |
JavamethodsfromCVEFixes.Wecollected19,576C/C++and3926 Javamethods(bothvulnerableandnon-vulnerable),covering131 ModelClass ModelVersion Size ContextWindow and68differentCWEs,respectively. GPT-4 gpt-4-0613 N/A 8k Whilemanyreal-worlddatasetshavebeenproposedinthelitera- GPT-3.5 gpt-3.5-turbo-0613 N/A 4k CodeLlama-34B CodeLlama-34B-Instruct 34B 16k ture,weneededadatasetforourstudythat1)containsvulnerability CodeLlama-13B CodeLlama-13B-Instruct 13B 16k metadatasuchasCVEandCWEIDs,2)istwo-sided,i.e.,contains CodeLlama-7B CodeLlama-7B-Instruct 7B 16k bothvulnerableandnon-vulnerablecodesamples,and3)covers multiplelanguagessuchasJavaandC/C++.Table2showsacom- parisonofexistingreal-worldvulnerabilitydatasets.Weselected 2.4 PromptingStrategiesforLLMs CVEFixesbecauseitistheonlydatasetthatfitsourcriteria. WeexplorevariouspromptingstrategiesthatcanassistLLMsin predictingifagivencodesnippetisvulnerable.TheLLMsdiscussed Table2:ComparisonofReal-WorldDatasets inthisstudysupportchatinteractionswithtwomajortypesof prompts:thesystempromptandtheuserprompt.Thesystemprompt Dataset Languages CVEMetadata Two-Sided Multi-Lang canbeusedtosetthecontextfortheentireconversationwhile BigVul[17] C/C++ ✓ ✗ ✗ userpromptscanbeusedtoprovidespecificdetailsthroughoutthe Reveal[7] C/C++ ✗ ✗ ✗ chatsession.Weincludeasystempromptatthestartofeachinput DiverseVul[9] C/C++ ✗ ✓ ✗ DeepVD[45] C/C++ ✗ ✗ ✗ todescribethetaskandexpectedstructureoftheresponse.Since personaassignmenthasbeenshowntoimprovetheperformanceof CVEFixes[2] C/C++,Java,... ✓ ✓ ✓ GPT-4onspecializedtasks[40],weaddtheline“Youareasecurity researcher,expertindetectingsecurityvulnerabilities”atthestartofAvishreeKhare∗,SaikatDutta∗,ZiyangLi,AlaiaSolko-Breslin,RajeevAlur,andMayurNaik everysystemprompttoassignapersonaofaSecurityResearcher specificpromptwhilerequiringmoretokens.Ouranalysisofthe tothemodel.Thesystempromptforallexperimentsendswiththe few-shotpromptssuggeststhatprovidingmoreexamplesmight statement“Provideresponseonlyinthefollowingformat:”followed notbeausefulstrategyforvulnerabilitydetection.Itmightbemore by an expected structure of the response from the model. The usefultousepromptsthatinsteadelicitreasoningorexplanations systempromptisfollowedbyauserpromptthatvariesacrossthe ofsomekindbeforedetectingifthegivensnippetisvulnerable. variouspromptingstrategies.Inallourexperiments,weincorporate With Chain-of-thought prompting, we explicitly ask the model thetargetcodesnippetintotheuserpromptwithoutanychanges. toprovideareasoningchainbeforethefinalanswerbyaddinga Weconstructdifferentpromptingstrategiestostudythereason- “Let’sthinkstep-by-step”statementattheendoftheCWEspecific ingabilitiesofLLMs: prompt.ThissetupdidnotyieldbetterresultsthantheDataflow analysis-basedprompt.Moreover,thereasoningchainsobtainedby 2.4.1 Basicprompt. Wedesignaverysimpleprompt(shownin Chain-of-thoughtprompting(bothzero-shotandfew-shot)were Listing4intheappendix)withthegoalofunderstandingifthe notaselaborateasthosefromtheDataflowanalysis-basedprompt modelcantakeatargetcodesnippetasinputanddetectifitis thuslimitingtheeaseofdebugging.Weprovidethedetailsofthis vulnerableanddeterminethecorrectCWEaswell.Theprompt experimentinAppendixA.3. beginswiththemessage“Isthefollowingcodesnippetpronetoany securityvulnerability?”followedbythecodesnippet. 2.5 DatasetProcessingandSelection Table4:DatasetProcessingandSelection Weperformadataprocessingandcleaningstepforeachdataset beforeevaluatingthemwithLLMs.Weremoveoranonymizeinfor- mationsuchascommits,benchmarkIDs,orvulnerabilitynames OWASP Juliet Juliet CVEFixes CVEFixes Total C/C++ Java C/C++ Java thatmayprovideobvioushintsaboutthevulnerability.Weskip Original 2740 128,198 56,162 19,576 3926 210,602 benchmarksthatarespreadacrossmultiplefiles,duetolimitations Filtering 2740 81,280 35,940 19,576 3926 144,002 ofpromptsize.Table4presentsthedetailsofourselectionstages. Top25CWE 1478 11,766 8,506 12,062 1810 23,560 Duetotheprohibitivelylargecostofrunningpre-trainedLLMs, RandomSelection 1000 1000 1000 1000 1000 5000 weselectasubsetofsamplesfromtheoriginaldatasets.Weselect samplescorrespondingtovulnerabilitytypes(orCWEs)listedin 2.4.2 CWEspecificprompt. TheCWEspecificpromptispresented MITRE’sTop25MostDangerousSoftwareWeaknesses[37]. Due inListing5intheappendix.ThispromptissimilartotheBasic topromptsizelimitations,wefilteroutcodesnippetslargerthan promptexceptthatitasksthemodeltopredictifthegivencode 2048tokens.Finally,becausethedatasetsarequitelarge,wefurther snippet is vulnerable to a specific target CWE. Hence, the user randomlyselect500vulnerableand500non-vulnerablesamplesper promptstartswith“Isthefollowingcodesnippetproneto<CWE>?” dataset.Finally,weendupwith5000samplesacrossfivedatasets. followedbythecodesnippet.The<CWE>placeholderherecontains WeprovidemoredetailsforeachdatasetinAppendixA.1. |
boththeIDoftheCWEandthename.Forinstance,forCWE-22,the userpromptwouldstartwith“Isthefollowingcodesnippetproneto 2.6 ExperimentalSetup CWE-22(ImproperLimitationofaPathnametoaRestrictedDirectory (PathTraversal))?”followedbythetargetcodesnippet. ExperimentswithGPT-3.5andGPT-4.WeusetheOpenAIpub- licAPItoperformtheexperimentswithGPT-3.5andGPT-4.Weuse 2.4.3 Dataflowanalysis-basedprompt. Inadditiontothestraight- theChatCompletionsAPIendpointtoquerythemodelswiththe forwardone-stepanalysisintheprevioustwoprompts,wealso promptsdiscussedinSection2.4.Wesetthesamplingtemperature studywhetherprovidingspecificstep-by-stepanalysisinstructions to0forobtainingdeterministicpredictions,themaximumnumber canhelptheLLMsdobetterreasoningovercodeandmakebetter oftokensto1024,andusethedefaultvaluesforallotherparameters. predictions.Dataflowanalysisisusedbyseveralstaticanalysis Inallourexperiments,weusethetop-1predictionfromthemodels. toolstoinferifthereexistsanunsanitizedpathfromasourcetoa ExperimentswithCodeLlama.WerunallCodeLlamaexperi- targetnode.Further,priorliteraturehasshownstep-by-stepinstruc- mentsontwosetsofmachines:onewithIntelXeonmachine,with tionscanoftenelicitbetterreasoningfromLLMs[47].Motivated 40CPUs,fourGeForceRTX2080TiGPUs,and750GBRAM,and bytheseobservations,wedesignedtheCWE-DFprompt(shown anotherIntelXeonmachinewith48CPUs,8A100s,and1.5TRAM. in Listing 6 in appendix) that prompts the model to simulate a SimilartoGPT-4,wesetthetemperatureto0,themaximumtokens source-sink-sanitizer-baseddataflowanalysisonthetargetcode to1024,andusetop-1predictionforevaluation. snippetbeforepredictingifitisvulnerable.Naturally,comparedto theotherprompts,thispromptgeneratesmanymoretokensandis 3 RESULTS hencemorecostly.WeprovidethefullpromptsinAppendixA.2. 3.1 RQ1:EffectivenessofLLMs 2.4.4 Otherpromptingstrategies. Wealsotriedotherprompting strategiessuchasFew-shot promptingandChain-of-thought Weevaluatetheperformanceofpre-trainedLLMsonfiveopen- prompting.Inthefew-shotpromptingsetup,weincludetwoex- sourcedatasetsdiscussedinSection2.1.presentsthebestaccuracy amplesofthetask(onewithavulnerabilityandonewithout)in andF1scores(acrossprompts)ofGPT-4,GPT-3.5,andCodeLlama theCWEspecificpromptbeforeprovidingthetargetcodesnippet. models(CL-7B,CL-13B,CL-34B)onalldatasets.Themoredetailed Few-shotpromptingreportedpoorerresultsthanthebaseCWE metricsforallpromptsarepresentedinAppendixA.4.UnderstandingtheEffectivenessofLargeLanguageModelsinDetectingSecurityVulnerabilities Figure1:EffectivenessofLLMsinPredictingSecurityVulnerabilities(JavaandC++).WereportthehighestaccuracyandF1 scorespermodelperdataset(acrossallpromptingstrategies). ModestVulnerabilityDetectionPerformanceAcrossLLMs. private void func() throws Throwable { Theaverageofthehighestaccuraciesacrossalldatasetsis60.6% int data; withthemaximumbeing77%forCL-7B(withBasicprompt)on switch (5) { case 6: the Juliet Java dataset. In terms of F1 score, GPT-4 (with CWE- data = 0; break; // Hardcoded values in data DF)performsthebestonthesyntheticdatasets:OWASP(F1:0.69), default: data = 2; break; // Hardcoded values in data JulietJava(F1:0.76),andJulietC/C++(F1:0.70).Weobservethat } CodeLlamamodels,despitebeingmuchsmallerthanGPT-4,are switch (7) { competitiveonmostdatasets,evenwiththeBasicprompt.CL-34B case 7: // POTENTIAL FLAW: Integer Overflow int result = (int)(data + 1); // But no overflow due to reportsthehighestF1scores(F1:0.67)onthereal-worlddatasets, ↩→ hardcoded values in data CVEFixesJavaandCVEFixesC/C++,withtheCWE-DFprompt. IO.writeLine("result: " + result); break; default: Performancedoesnotimprovewithscale.WhileGPT-4per- IO.writeLine("fixed string"); break; formsstrictlybetterthanGPT-3.5on3ofthe5datasets,theCodeL- lama models roughly exhibit similar performance. Surprisingly, Codellama-7B’s response: The code snippet does not CL-7BperformsbetterthanthelargerLLMsontheJuliet containanyobvioussecurityvulnerabilities.✓ JavaandJulietC/C++datasetsevenwiththeBasicprompt. Listing1presentsarepresentativeexamplewhereonlyCL-7Bcor- rectlypredictsthatanintegeroverflowvulnerability(CWE-190) Codellama-13B’sresponse: Theintegeroverflowoccurs cannotoccurinthegivencontext.Sincethedatavariablecanonly whenthevalueofthe‘data’variableexceedsthemaximum containvalue2,adding1toitinLine11cannotcauseaninteger valuethatcanbestoredinaninteger,causingthevalueto overflow.CL-7Brecognizesthisandpredictsthatthesnippetisnot wraparoundtoanegativenumber...potentiallyallowan vulnerable.CL-13BandGPT-4(CWE-DF)ignorethehard-coded attackertoexecutearbitrarycode.✗ valuesandpredictthatthesnippetisvulnerable,indicatingthat smallerLLMsmaysometimesperformbetterthanlargermodels. Trendswithinmodelclasses.Themodelsherecanbedivided Listing1:CL-7Bcorrectlypredictsthatthissnippetisnot |
intotwobroadclasses:theGPTxmodels(GPT-4andGPT-3.5)and vulnerabletoIntegerOverflowbutGPT-4andCL-13Bdonot. theCodeLlamamodels(CL-7B,CL-13B,CL-34B).TheCodeLlama modelsdisplayahigherdividebetweentheaccuracyandF1scores thantheGPTxmodelswithF1scoresmuchhigherthantheaccu- racies(whicharecloseto50%).ThisisbecausetheCodeLlama 3.2 RQ2:ComparisonofPromptingStrategies modelstendtoalmostalwayspredictthatthegivencode Figure2presentstheaccuracyandF1scores(averagedacrossall snippetisvulnerable(asisevidentwithahighf1scorebutlower LLMs)ofthethreepromptingstrategiesacrossalldatasets.Overall, accuracyonmostdatasets). thethreepromptsperformsimilarlyintermsofaccuracy.CWE-DF performsbetterthanCWEandBasicpromptintermsofF1score Finding1:LLMsacrossallsizeshavemodestvulnerability onthereal-worlddatasets(CVEFixesC/C++andCVEFixesJava). detectionability,withanaverageaccuracyofonlyabout Wenexthighlightqualitativedifferencesbetweenvariousprompts: 60%acrossalldatasets. TheBasicpromptoftendetectsincorrectvulnerabilityclasses. Table5presentsthepercentageofsamplespredictedasvulnera- ble with the Basic prompt where the predicted CWE is correct, Implication1:Beyondprompting,futuretechniquesneed averagedoveralldatasetswiththesameprogramminglanguage. tospecializegeneral-purposeorcodeLLMsonhigh-quality Wecanobservethatallmodelspredictanincorrectvulnerability vulnerabilitydatasetstoimprovetheirperformance. in>60%and>53%ofallJavaandC/C++samplespredictedasAvishreeKhare∗,SaikatDutta∗,ZiyangLi,AlaiaSolko-Breslin,RajeevAlur,andMayurNaik vulnerablerespectively.ThissuggeststhattheBasicpromptdetects // Code snippet prone to CWE-78 (OS Command Injection) incorrectCWEsinroughlyhalfofthecases. // Unverified data from socket is used as an argument to _execvp void func(){ char dataBuffer[100] = "dir "; Table5:CorrectCWEsdetectedwiththeBasicprompt(%) char *data = dataBuffer; size_t dataLen = strlen(data); int recvResult; SOCKET connectSocket = INVALID_SOCKET; Language(Avg.) GPT-4 GPT-3.5 CL-34B CL-13B CL-7B do {// setup socket and receive user input conn = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); Java 0.41 0.34 0.37 0.38 0.39 recvResult = recv(conn, (char *)(data + dataLen), C/C++ 0.29 0.31 0.33 0.35 0.47 ↩→ sizeof(char) * (100 - dataLen - 1), 0)}; // missing validation of recvResult data[dataLen + recvResult / sizeof(char)] = '\0'; SpecifyingtheCWEinthepromptreducesfalsealarms.In } while (0) 16 out of the 25 model-dataset combinations, the CWE specific char *args[] = {"%WINDIR%\\system32\\cmd.exe", "/c", data, promptimprovesorretainsboththeaccuracyandF1scoreoverthe ↩→ NULL}; Basicprompt.GPT-4withtheCWEspecificpromptonJulietJava _execvp("cmd.exe", args); reportsa13%higheraccuracyanda0.1higherF1scorethanthose } withtheBasicprompt.Wemanuallyinspect10vulnerableand10 non-vulnerablesamplesfromthisdatasetwheretheCWEspecific GPT-4’sresponse:Thedatareceivedfromthesocketcon- promptiscorrectandtheBasicpromptisnot.Wefindthatnot nectionisdirectlyusedinthe_execvpfunctionwithout includingtheCWEintheBasicpromptresultsinpredictions propersanitization.Thiscouldallowanattackertoinject discussingotherincorrectCWEs.TheBasicpromptincorrectly maliciouscommands.✓ predictsthatthe10non-vulnerablesamplesarevulnerabletoother CWEsofwhichweonlyfind3tobeplausiblebutunlikelydueto Listing2:GPT-4(CWE-DF)detectsthatthissnippetisprone thecontext.Forexample,itpredictsthatavaluereadfromaninput toOSCommandInjectionduetounsanitizedpathsfroma streamcanbevulnerableifnotvalidated(CWE-20)butthisvalueis source to sink.CodeQLdoesnotdetectthisvulnerability. notusedinavulnerablecontext.Ofthe10vulnerablesamples,Basic promptpredictsaplausibleCWEforonesamplewhileincorrectly predictingthattheremaining9samplesarenotvulnerable.Onthe cases.ThemostsignificantimprovementwiththeCWE-DFprompt otherhand,theCWEspecificpromptalsoprovidesusefulhigh-level overtheCWEspecificpromptisfortheCVEFixesJavadatasetand explanationsforwhythesnippetisvulnerable/notvulnerablein GPT-3.5with7%higheraccuracyand0.4higherF1score.Weman- 18/20samples.The2incorrectexplanationsareartifactsoffaulty uallyinspect10vulnerableand10non-vulnerablesamplesfrom reasoningorhallucination:forexample,anIntegerOverflowdue thisdatasetwhereonlytheCWE-DFpromptiscorrect.Wefind toadditiontoINT_MAXinthefunctionisincorrectlyattributedto thattheCWE-DFpromptleveragestextualcuesforsanitiza- subtractingfromINT_MINintheexplanation. tion(e.g.,csrftokenhandler()suggestsprotectionfromCSRF) DataflowanalysisidentifiesCWE-relevanttextualcuesand whiletheCWEspecificpromptdoesnot.TheCWE-DFprompt providesmoreelaborateexplanations.TheDataflowanalysis- alsoassumesthatanyexternalparameterorcallsourceistainted |
basedprompt(CWE-DF)promptperformsbetterthanCWEspecific bydesignandpredictsthatthesampleisvulnerablemoreoften(as promptbothintermsofaccuracyandF1scorein12outof25model- isevidentfromhigherF1).Further,theanalysisfromCWE-DF datasetcombinationsandreportsahigherF1scorein23outof25 promptismoredetailedandusefulinlocalizationthanthe explanationsfromCWEspecificprompt. CWEspecificprompt’sexplanationsfor6/10vulnerablesam- Basic CWE-Specific CWE-DF plesandallthe10non-vulnerablesampleswereincorrectdueto 1.0 ignoranceofCWE-specificelementsandmissedtextualcuesof Accuracy 0.8 sanitizationortestcode.Forexample,theCWEspecificprompt 0.6 considersasnippetvulnerableevenwhentherequestiswrapped insideanXssHttpRequestWrapperforsanitization.Incontrast,the 0.4 CWE-DFpromptprovidescorrectexplanationsfor18/20samples. 0.2 Moreover,itpredictsthecorrectsourcesandsinksin18/20samples, 0.0 sanitizersin16/20samples,andunsanitizedflowsinallsamples. 1.0 Listing2presentsaresponsefromGPT-4usingCWE-DFprompt F1score 0.8 thatcorrectlyidentifiestheunsanitizedflowsbetweensourcesand 0.6 sinks.WepresentmoreCWE-DFexamplesinAppendixA.7. 0.4 LLMs often identify correct sources, sinks, and sanitizers but reason about them incorrectly. We manually inspect 10 0.2 vulnerable and 10 non-vulnerable samples for the Juliet C/C++ 0.0 datasetwherethepredictionsfromCWE-DFareincorrect.In17/ OWASP Juliet CVEFixes Juliet CVEFixes 20samples,theexplanationisincorrectduetoerroneousreasoning Java Java C/C++ C/C++ aboutthesnippet/falseassumptionsabouttheCWE.Forexam- Figure2:Performanceofdifferentpromptingstrategies ple,anexplanationincorrectlystatesthatacharoverflowisnotUnderstandingtheEffectivenessofLargeLanguageModelsinDetectingSecurityVulnerabilities (a)OWASP (b)JulietJava (c)CVEFixesJava Figure3:AccuracyAcrossCWEsonJavadatasets. vulnerabletoCWE-190(IntegerOverfloworwraparound)since self-containedandlittleadditionalcontextisneededtodetectthem. thewraparoundisvalidinC.Surprisingly,thesources/sinksare Concretely,GPT-3.5consistentlyperformswellonNULLPointer incorrectonlyin2samplesandthesanitizersareincorrectonly Dereferenceacrossalldatasetswithaccuraciesof80.8%,87.9%and in3samples.Theunsanitizedflowsare,however,incorrectin12 60.5%ontheJulietC/C++,JulietJavaandCVEFixesC/C++datasets samplessuggestingthatthepromptiscapableofidentifying respectively. GPT-4 also performs consistently better on NULL thesources/sinks/sanitizersaccuratelyinmostcasesbutfails PointerDereferenceandIntegerOverflowonthesyntheticdatasets toreasonabouttheunsanitizedflowsandthetargetvulnera- withaccuraciesof62.1%and68.4%onJulietC/C++and78.5%and bility.Listing3presentsanexamplewherethevulnerabilityisnot 73%onJulietJavarespectively.CL-7Breportshighaccuracieson detectedbutthesourcesandsinksarecorrectlyidentified. IntegerOverflowonthesyntheticdatasetswithaccuraciesof81% onJulietJavaand70%onJulietC/C++respectively.GPT-4and Finding 2: Specifying the CWE in the prompt reduces CL-34BperformextremelywellonOut-of-boundsRead(78.6%and falsepositivesbasedonotherincorrectCWEs.Dataflow- 64.3%)andOut-of-boundsWrite(78.1%and71.9%)ontheCVEFixes basedpredictionsprovidementionaccuratesources,sinks, Javadataset. andsanitizersevenwhenthedeductivelogicalreasoningis incorrect. Implication2: Whilestep-by-stepanalysisimprovesper- formance,futureapproachesshouldcombineLLMswith symbolictoolsthatcanhandlethelogicalreasoningaspects ofanalysis. 3.3 RQ3:PerformanceofLLMsacrossCWEs (a)JulietC/C++ (b)CVEFixesC/C++ WenextevaluatehowtheLLMsperformondifferentclassesof securityvulnerabilities(CWEs).Foreachdatasetandmodel,wecon- Figure4:AccuracyAcrossCWEsonC/C++datasets. siderthebest-performingpromptfortheanalysisandonlyreport CWEswithatleast10samples.BecausetheCWE-wisedistribution Poor performance on real-world C/C++ is due to missing ofvulnerableandnon-vulnerablesamplescanbeimbalanced,we globalcontext.WeseethattheperformanceofallLLMsonvulner- computebalancedaccuracyforeachCWE(foreaseofpresenta- abilitiesinCVEFixesC/C++isworsethanthatonthesameCWEs tion,werefertoitasaccuracyhenceforthinthissection).Figure3 inCVEFixesJavaandJulietC/C++.Forinstance,whileGPT-4and presentstheCWE-wisedistributionofaccuraciesontheOWASP, CL-34BperformextremelywellontheOut-of-boundsRead/Write JulietJava,andCVEFixesJavadatasets.Figure4reportstheaccura- vulnerabilitiesinCVEFixesJavaasdiscussedabove,theyreport ciesontheJulietC/C++andCVEFixesC/C++datasetsrespectively. accuracies<53%fortheseCWEsontheCVEFixesC/C++dataset. Foreachmodel,weconsiderthepromptwiththebestaccuracyon Weattributethisdisparitytothenatureofthesevulnerabilitiesin thedatasetandbreaktiesusingtheF1score. thetwolanguages:Out-of-boundsReads/WritesinCVEFixes |
LLMsperformwellonvulnerabilitiesthatdonotrequire C/C++requirereasoningaboutpointersandstructswhich additionalcontext.WefindthattherearecertainCWEswhere requiresmorecontextaboutthestructsandtheirmembers. multipleLLMsperformbetteron:Out-of-boundsRead/Write InCVEFixesJava,ontheotherhand,thesevulnerabilitiesarise (CWE-125,CWE-787),Nullpointerdereference(CWE-476) primarilyduetoillegalarrayindexing.Thisissuedoesnotemerge andIntegerOverflow(CWE-190).Thehigherperformanceon inJulietC/C++becausealltheinformationaboutthepointersis thesevulnerabilitiescanbeattributedtothefactthatthesearefairly presentedinthesnippet.WepresentexamplesinAppendixA.8.AvishreeKhare∗,SaikatDutta∗,ZiyangLi,AlaiaSolko-Breslin,RajeevAlur,andMayurNaik SomeLLMsarebetteratdetectingcertainCWEs.GPT-4reports highaccuraciesonCWE-78(OSCommandInjection)in3/5datasets (CVEFixesC/C++with70.8%,CVEFixesJavawith61.7%,andJuliet Javawith63.8%)buttheothermodelsdonot.AmongtheCodeLlama models,CL-13BperformswellonCWE-362(RaceCondition)in CVEFixesC/C++with63.1%andCVEFixesJavawith66.7%.CWE-89 (SQLInjection)isanothervulnerabilitywhereonlyCL-7Breports highaccuracieson2/3datasets(JulietJavawith74%andCVEFixes Javawith64%). Finding3: LLMsperformbetteronvulnerabilitiesthatonly needreasoningaboutlocalentities(Out-of-boundsRead/ Write,IntegerOverflow,NULLpointerdereference).This localitycanvaryacrosslanguages:ArrayindexinginJava canbelocallyreasonedaboutwhilestructpointeraccesses inC/C++needadditionalcontext. Implication 3: Techniques for language-specific and vulnerability-specificadaptionofLLMsshouldbedeveloped toimprovetheirdetectionabilities. 3.4 RQ4:ImpactofAdversarialAttacks Experiment setup. We implement three semantics-preserving attacks,proposedbyGaoetal.[21]:dead-codeinjection,variable renaming,andbranchinsertion(illustratedinTable11inappendix). WeonlyselecttestsampleswheretheoriginalpredictionofLLMs (withtheCWEprompt)wascorrect.Weselect100testsamplesper datasetforthisexperimentandmeasurethereductioninaccuracy aftertheattack.Forthisexperiment,weselecttwomodels,GPT-4 andCL-7B,andfourdatasets.Figure5presentsourresults. 0 −15 −30 −45 −60 DC VR BI JulietJava )%(noitcudeRycaruccA CodeLlama-7B GPT-4 DC VR BI JulietC/C++ 0 −15 −30 −45 −60 DC VR BI CVEFixesJava )%(noitcudeRycaruccA Results.WefindGPT-4suffersmilddegradationinaccuracy(aver- age8.6%)acrossalldatasetsandattacks.ForGPT-4,thedegradation ismoresignificantforreal-worlddatasets(upto23%)comparedto syntheticones(uptoonly2%).Incontrast,forCodeLlama-7B,the accuracysignificantlyreducesbyalmost40-50%inmanycasesand by39%onaverage,especiallyforC++. Amongthethreeattacks,branchinsertion(27%avg.reduction) andvariablerenaming(23%avg.reduction)havethehighestimpact onLLMperformance.Onfurtheranalysisofincorrectpredictions, weobservethatundertheseattacks,GPT-4eitherfailstoinferany sources,sinks,orsanitizers(eveniftheyexist)orthereasoning chainbreaks(whichworkedpreviously).Interestingly,underthese attacks,CodeLlama-7Bfailstofollowtheinstructionsinthesystem promptanddoesnotperformproperanalysisusingthedataflow analysissteps,causingittooutputanincorrectresponse. Finding4:MorepowerfulLLMslikeGPT-4remainrobust toadversarialattacksandonlysuffermilddegradationin performance.WhereassmallerLLMslikeCodeLlama-7B showasignificantdropinperformanceinthepresenceof perturbations.Overall,theimpactofattacksismorepro- nouncedforreal-worlddatasets. Implication4:DevelopersofcodeLLMsshouldconsider adversarialtrainingmethodsduringfine-tuningtoensure modelrobustnesstosuchattacks. 3.5 RQ5:LLMsvsStaticAnalysisTools Experimentsetup.WenextexplorehowGPT-4(CWE-DF)com- paresagainstCodeQL.SinceCodeQLrequiresbuildingprojects beforeanalysisandthereal-worlddatasetscontainlargeprojects, welimitourfocustothethreesyntheticdatasets,namelyOWASP andJulietJavaandJulietC/C++.Inthisstudy,weruntheofficial CodeQLqueriesdesignedforthetop25CWEs.Table6presents resultsfromCodeQLandGPT-4onthethreedatasets.CodeQL performsbetterthanGPT-4(CWE-DF)intermsofaccuracy onallthreedatasets.GPT-4reportsa0.09higherF1thanCodeQL onJulietC/C++whileCodeQLreportsa0.03and0.18higherF1on OWASPandJulietJavarespectively. Table6:GPT-4vsCodeQLonsyntheticdatasets Dataset/Model CodeQL GPT-4(CWE-DF) A P R F1 A P R F1 OWASP 0.63 0.58 0.96 0.72 0.55 0.53 1.00 0.69 JulietJava 0.94 0.92 0.95 0.94 0.70 0.63 0.98 0.76 JulietC/C++ 0.72 0.99 0.44 0.61 0.59 0.55 0.98 0.70 WemanuallyanalyzesomeexampleswhereCodeQLiscorrect andGPT-4isn’tandviceversa.AsnotedinSection3.2,wefind DC VR BI thatevenwhenGPT-4(withCWE-DF)providesanincorrect CVEFixesC/C++ prediction, the identified sources / sinks and even unsan- |
Figure5:AccuracydegradationofCodeLlama-7BandGPT- itized flows are often correct. Listing 3 presents an example 4underthreekindsofadversarialattacks(DC:Deadcode whereCodeQLcorrectlypredictsthatthetargetsnippetisvulner- insertion,VR:VariableRename,BI:BranchInsertion).Lower abletoIntegerOverflowwhileGPT-4withCWE-DFdoesnot.The valuesindicatemorepronetoattack. modelcorrectlyidentifiesthesources,sinksandevenunsanitizedUnderstandingtheEffectivenessofLargeLanguageModelsinDetectingSecurityVulnerabilities // Code snippet prone to CWE-190 (Integer Overflow) Implication5:FuturetechniquesshouldcombineLLMs’ // data is incremented without verifying if data < INT_MAX abilitytoinferrelevantsourcesandsinksincodeandits static const int STATIC_CONST_FIVE = 5; contextunderstandingabilitiestoimprovestaticanalysis void f7909() toolslikeCodeQLtoobtainthebestofbothworlds. { int64_t data; data = 0LL; if(STATIC_CONST_FIVE==5) { data = (int64_t)RAND64(); 3.6 RQ6:LLMsvsDeep-Learning-BasedTools } if(STATIC_CONST_FIVE==5) WecompareLLMsagainsttwopriordeeplearning-basedapproaches: { 1)DeepDFA[43],whichtrainsGraphNeuralNetworksusingem- { ++data; beddingsofcontrolflowgraphsandassociateddataflowfacts,and int64_t result = data; 2)LineVul[20],whichisatransformed-basedmodeltrainedusing printLongLongLine(result); token-basedrepresentationofcode. } Experimentsetup.WeusedtheDeepDFAandLineVulversions } } fromDeepDFA’slatestartifactversion[15].ForCVEFixesC/C++ and Juliet C/C++, we use the same test set as our main evalua- tion§3.1.Wesplittheremainingdatasetintotraining(80%)andval- GPT-4’sresponse: UnsanitizedDataFlows✓ idation(20%)sets.WealsocomparetheresultswiththeBigVul[17] The data from RAND64() is directly used in dataset,usedinthepriorwork.BecauseBigVulisamuchlarger printLongLongLine() without any sanitization or real-worlddataset,itcanpotentiallyimprovetheperformanceof thetools.Weexploretwoevaluationsettings:1)trainandteston checksforintegeroverflow.) thesamedatasetand2)trainononeandtestonadifferentdataset Vulnerabilityanalysisverdict✗ toevaluatethegeneralizabilityofthesemodels.Table7presents theresults.Eachcellreportstheaverageresultsacrossthreeruns. vulnerability:NO(CWE-190) DL-basedapproacheshavelimitedeffectivenessonreal-world Explanation✗ datasets.WeobservethatbothDeepDFAandLineVulobtainamax- Thecodedoesnotperformanyarithmeticoperationsthat imumF1scoreof0.62onCVEFixesC/C++,whichisquitesimilar couldleadtoanintegeroverflow.Theonlyoperationper- to the scores of GPT-4. For DeepDFA, while training on larger formed on the data variable is incrementing it by one BigVulandJulietC/C++datasetsimprovesitsF1scores,theac- (++data),whichisunlikelytocauseanintegeroverflow curacyscoresremaincloseto50%.Ontheotherhand,LineVulis giventhatdataisa64-bitinteger. abletolearnbetterevenwhenusingCVEFixesC/C++,whichis relativelysmaller.Interestingly,whentrainedonBigVul,LineVul obtainsapoorF1scoreof0.02onCVEFixesC/C++,whichmight Listing3:CodeQLcorrectlypredictsIntegerOverflowwhile indicateanover-fittingproblem.Weobserveasimilarpatternwhen GPT-4 doesn’t. GPT-4, however, correctly identifies the LineVulistestedonJulietC/C++usingCVEFixesC/C++orBigVul sources and sinks andtheunsanitizedpaths. trainingdatasets.ForJulietC/C++,bothtoolsobtainbetterscores comparedtoGPT-4.LineVulobtainsaperfectscoreindicatingthat JulietC/C++haslimitedcodepatternsthatarelikelyeasiertolearn. dataflowsinthiscasebutfailstofaithfullyreasonoverthemwhen However,asweexplainlater,LineVuldoesnotgeneralizewell. predictingthevulnerability. Ontheotherhand,Listing2providesanexamplewhereGPT-4 Generalizabilityacrossdatasets.Weobservethat,forLineVul, correctlypredictsthatthesnippetispronetoOSCommandInjec- theF1scoresdropdrasticallywhentrainedonBigVulandtestedon tionwhileCodeQLdoesnot.ThishappensbecauseCodeQLonly CVEFixesC/C++orJulietC/C++.ComparedtoLineVul,DeepDFA looksforveryspecificcasesofcommandinjection:Wefind generalizesbetter,obtaininghigheraccuracyandF1scoresonJuliet thatCodeQLonlydetectsOSCommandInjectionforsystemcom- C/C++whentrainedusingotherdatasets. mandsthattakeastringofarguments(execl)butnotforthosethat Trade-offs.DeepDFAinvolvessignificantinferenceoverhead,due takealistofarguments(_execvp,usedinthisexample).GPT-4,on totheCFGextractionanddataflowanalysissteps.LLMs,however, theotherhand,identifiesthecorrectsources,sinks,andunsanitized canusethetextualrepresentationofcodeandcanoperateonin- flowsandaccuratelyreasonsthatthesnippetisvulnerable.More complete/partialprograms.Theuseofdata-flowandcontrol-flow examplesarepresentedintheAppendixA.6. informationinDeepDFAisevidentlyuseful.Wemadesimilarobser- vationswithLLMswhenusingtheCWE-DFprompt.Ontheother Finding5: CodeQLperformsbetterthanGPT-4onvulner- hand,LineVul,likeLLMs,canleveragenaturallanguageinforma- abilitydetectionacrossallCWEs.GPT-4can,however,often tionbuthasageneralizationproblem.Finally,bothDeepDFAand |
accuratelyidentifyanalysis-relevantsourcesandsinksthat LineVulprovidebinarylabelsandlinenumbersthataredifficultto aremissedbyCodeQL’sstrictqueries. interpret.LLMscanadditionallyprovideexplanations,whichare usefulforfurtherdebugging(asshowninpriorsections).AvishreeKhare∗,SaikatDutta∗,ZiyangLi,AlaiaSolko-Breslin,RajeevAlur,andMayurNaik LSTM-basedmodelsonslicesanddatadependencieswhileDraper Finding 6: Deep Learning-based tools, similar to LLMs, usedConvolutionalNeuralNetworks.Recentworksdemonstrate have poor effectiveness on vulnerability detection, espe- thatTransformer-basedmodelsfine-tunedonthetaskofvulnera- ciallywhendealingwithreal-worlddatasets.Further,prior bilitydetectioncanoutperformspecializedtechniques(CodeBERT, transformer-basedapproachessufferfrompoorgeneraliza- LineVul[20],UnixCoder).DeepDFA[43]andContraFlow[10]learn tionacrossdatasets. specializedembeddingsthatcanfurtherimprovetheperformance ofTransformer-basedvulnerabilitydetectiontools.Tothebestof Implication6:Futuretechniquesshouldexploreadeeper ourknowledge,thesetechniquesprovidebinarylabelsforvulnera- combinationofpre-trainedLLMswithdataflow-styleanal- bilitydetectionandcannotclassifythetypeofvulnerability.Thapa ysistobuildmoreeffective,interpretable,andgeneralvul- etal.[44]explorewhetherLanguageModelsfine-tunedonmulti- nerabilitydetectiontools. classclassificationcanperformwellwheretheclassescorrespond togroupsofsimilartypesofvulnerabilities.Incontrast,westudy someofthelargestLanguageModels,suchasGPT-4,andperforma Table7:GPT-4vsDeepDFAvsLineVulonCVEFixesC/C++ muchgranularCWE-levelclassification,generatehuman-readable andJulietC/C++ informalspecificationsandexplorevariouspromptingtechniques thatallowusingtheLLMsout-of-the-box. Model Train/Prompt Test A P R F1 LLMsforautomatedsoftwareengineering.Severalrecentap- DeepDFA BigVul BigVul 0.98 0.53 0.92 0.67 LineVul BigVul BigVul 0.99 0.96 0.88 0.92 proacheshavedemonstratedthatLLMscanbeeffectivelylever- agedtoimprovethestate-of-the-artperformanceinvarioustra- GPT-4 CWE-DF CVEFixesC/C++ 0.52 0.51 0.76 0.61 ditionalsoftwareengineeringtaskssuchasautomatedprogram DeepDFA CVEFixesC/C++ CVEFixesC/C++ 0.51 0.55 0.17 0.23 repair[25,48,49],testgeneration[16,28],codeevolution[51],and DeepDFA JulietC/C++ CVEFixesC/C++ 0.53 0.53 0.65 0.58 DeepDFA BigVul CVEFixesC/C++ 0.52 0.52 0.76 0.62 faultlocalization[50].However,unliketheseapproaches,wefind thatLLMshavelimitedvulnerabilitydetectionabilities. LineVul CVEFixesC/C++ CVEFixesC/C++ 0.59 0.58 0.65 0.61 LineVul JulietC/C++ CVEFixesC/C++ 0.50 0.50 0.91 0.64 Recently,Lietal.[30]developedLlift,anapproachthatcombines LineVul BigVul CVEFixesC/C++ 0.50 0.63 0.01 0.02 LLMswithstaticanalysistodetectUseBeforeInitialization(UBI) GPT-4 CWE-DF JulietC/C++ 0.59 0.55 0.98 0.70 bugsinLinuxkernel.Whiletheyfocusonaspecificclassofbugs, theirapproachsupportstheobservationswemakeinSection3.5 DeepDFA JulietC/C++ JulietC/C++ 0.77 0.74 0.82 0.78 DeepDFA CVEFixesC/C++ JulietC/C++ 0.64 0.72 0.46 0.55 onthecomplementarynatureofstaticanalysisandLLMs.Tothe DeepDFA BigVul JulietC/C++ 0.74 0.70 0.82 0.77 bestofourknowledge,ourworkisthefirstcomprehensiveand LineVul JulietC/C++ JulietC/C++ 1.0 1.0 0.99 1.0 generalstudyofvulnerabilitydetectionabilitiesofLLMsacrossa LineVul CVEFixesC/C++ JulietC/C++ 0.51 0.62 0.14 0.22 broadrangeofvulnerabilities(25CWEs)acrosstwolanguages. LineVul BigVul JulietC/C++ 0.43 0.42 0.39 0.41 4 RELATEDWORK 5 THREATSTOVALIDITY Staticanalysistoolsforvulnerabilitydetection.Staticanal- External.ThechoiceofLLMsanddatasetsmaybiasourevaluation ysis tools search for pre-defined vulnerability patterns in code. andinsights.Toaddressthisthreat,wechoosemultiplepopular ToolssuchasFlawFinder[19]andCppCheck[12]usesyntactic syntheticandreal-worlddatasetsacrosstwolanguages:Javaand andsimplesemanticanalysistechniquestofindvulnerabilitiesin C++.Wealsochoosebothstate-of-the-artclosed-sourceandopen- C++code.MoreadvancedtoolslikeCodeQL[1],Infer[18],and sourceLLMs.However,ourinsightsmaynotgeneralizetoother CodeChecker[11]employsemanticanalysistechniquesandcan languagesordatasetsnotstudiedinthispaper. detectvulnerabilitiesinmultiplelanguages.Staticanalysistools Internal. Owing to the non-deterministic nature of LLMs and relyonmanuallycraftedrulesandprecisespecificationsofcodebe- singleexperimentrunsperbenchmark,ourobservationsmaybebi- havior,whichisdifficulttoobtainautomatically.Incontrast,while ased.Tomitigatethisthreat,weuseatemperatureofzerotoensure LLMscannotalwaysreliablyperformend-to-endreasoningover determinismacrossallLLMs.Whilethisworkswellforlocallyrun |
code,weobservethatLLMscanautomaticallyidentifysuchspecifi- CodeLlamamodels,itiswell-knownthatGPT-4andGPT-3.5might cationsbyleveragingstatisticallylearnedrulesfromtrainingdata. stillreturnnon-deterministicresults.However,duetothelarge Further,duetotheirsemanticunderstandingofnaturallanguage, numberofbenchmarksweevaluate,thenon-determinismshould LLMscanalsoperformmorecontextualreasoning.Theseabilities balanceoutacrossthedatasets.Further,giventhepooreffective- canpotentiallybeleveragedtoimprovestaticanalysistools. nessofallLLMsacrosstheboard,wedonotexpectourresultsto DeepLearning-basedvulnerabilitydetection.Severalworks significantlychangewithre-runs. havefocusedonusingDeepLearningtechniquesforvulnerabil- Ourevaluationcodeandscriptsmayhavebugs,whichmightbias itydetection.EarlierworkssuchasDevign[52],Reveal[6],LineVD[23] ourresults.Ourmanualanalysisofresultsmayleadtoerroneous andIVDetect[32]leveragedGraphNeuralNetworks(GNNs)for inferences.Toaddressthisthreat,multipleco-authorsreviewedthe modeling dataflow graphs, control flow graphs, abstract syntax coderegularlyandactivelyfixedissues.Further,multipleco-authors treesandprogramdependencygraphs.Otherworksexploredalter- ofthepaperindependentlyanalyzedtheresultsanddiscussedthem natemodelarchitectures:VulDeePecker[33]andSySeVR[34]used togethertomitigateanydiscrepancies.UnderstandingtheEffectivenessofLargeLanguageModelsinDetectingSecurityVulnerabilities 6 CONCLUSION In this work, we performed a comprehensive analysis of LLMs forsecurityvulnerabilitydetection.Ourstudyrevealsthatboth closed-sourceLLMs,suchasGPT-4,andopen-sourceLLMs,like CodeLlama,performmodestlyatvulnerabilitydetectionforboth JavaandC/C++.Theirperformanceonvulnerabilitydetectionis lowerwhendealingwithreal-worldcode.However,wefindthat evenincaseswherethemodelsproduceincorrectpredictions,they identifyrelevantsources,sinksandsanitizersfordataflowanalysis. Hence,webelievethataninterestingfuturedirectionistodevelop neuro-symbolictechniquesthatcombinetheintuitivereasoning abilitiesofLLMswithsymbolictoolssuchaslogicalreasoningen- ginesandstaticcodeanalyzersformoreeffectiveandinterpretable solutions.AvishreeKhare∗,SaikatDutta∗,ZiyangLi,AlaiaSolko-Breslin,RajeevAlur,andMayurNaik REFERENCES (ICSE). [1] PavelAvgustinov,OegedeMoor,MichaelPeytonJones,andMaxSchäfer.2016. [29] LucasLeong.2022.Mindshare:WhenMySQLClusterEncountersTaintAnaly- QL:Object-orientedQueriesonRelationalData.InEuropeanConferenceon sis.https://www.zerodayinitiative.com/blog/2022/2/10/mindshare-when-mysql- Object-OrientedProgramming. cluster-encounters-taint-analysis. [2] GuruPrasadBhandari,AmaraNaseer,andLeonMoonen.2021.CVEfixes:au- [30] HaonanLi,YuHao,YizhuoZhai,andZhiyunQian.2024. EnhancingStatic tomatedcollectionofvulnerabilitiesandtheirfixesfromopen-sourcesoftware. AnalysisforPracticalBugDetection:AnLLM-IntegratedApproach.Proceedings Proceedingsofthe17thInternationalConferenceonPredictiveModelsandData oftheACMonProgrammingLanguages8,OOPSLA1(2024). AnalyticsinSoftwareEngineering(2021). [31] RaymondLi,LoubnaBenAllal,YangtianZi,NiklasMuennighoff,DenisKocetkov, [3] PaulEBlackandPaulEBlack.2018.Juliet1.3testsuite:Changesfrom1.2.US ChenghaoMou,MarcMarone,ChristopherAkiki,JiaLi,JennyChim,etal.2023. DepartmentofCommerce,NationalInstituteofStandardsandTechnology. StarCoder:maythesourcebewithyou!arXivpreprintarXiv:2305.06161(2023). [4] TimBolandandPaulEBlack.2012.Juliet1.1C/C++andJavatestsuite.Computer [32] YiLi,ShaohuaWang,andTienNhutNguyen.2021. Vulnerabilitydetection (2012). withfine-grainedinterpretations.Proceedingsofthe29thACMJointMeetingon [5] SébastienBubeck,VarunChandrasekaran,RonenEldan,JohannesGehrke,Eric EuropeanSoftwareEngineeringConferenceandSymposiumontheFoundationsof Horvitz,EceKamar,PeterLee,YinTatLee,YuanzhiLi,ScottLundberg,Harsha SoftwareEngineering(2021). Nori,HamidPalangi,MarcoTulioRibeiro,andYiZhang.2023.SparksofArtificial [33] ZhuguoLi,DeqingZou,ShouhuaiXu,ZhaoxuanChen,YaweiZhu,andHai GeneralIntelligence:EarlyexperimentswithGPT-4. arXiv:2303.12712[cs.CL] Jin.2020.VulDeeLocator:ADeepLearning-BasedFine-GrainedVulnerability [6] SaikatChakraborty,RahulKrishna,YangruiboDing,andBaishakhiRay.2020. Detector.IEEETransactionsonDependableandSecureComputing(2020). DeepLearningBasedVulnerabilityDetection:AreWeThereYet?IEEETransac- [34] Z.Li,DeqingZou,ShouhuaiXu,HaiJin,YaweiZhu,ZhaoxuanChen,Sujuan tionsonSoftwareEngineering48(2020),3280–3296. Wang,andJialaiWang.2018.SySeVR:AFrameworkforUsingDeepLearning [7] SaikatChakraborty,RahulKrishna,YangruiboDing,andBaishakhiRay.2021. toDetectSoftwareVulnerabilities.IEEETransactionsonDependableandSecure Deeplearningbasedvulnerabilitydetection:Arewethereyet?IEEETransactions Computing19(2018),2244–2258. |
onSoftwareEngineering48,9(2021),3280–3296. [35] ValentinJMManes,HyungSeokHan,ChoongwooHan,SangKilCha,Manuel [8] MarkChen,JerryTworek,HeewooJun,QimingYuan,HenriquePonde,JaredKa- Egele,EdwardJSchwartz,andMaverickWoo.2018.Fuzzing:Art,science,and plan,HarrisonEdwards,YuraBurda,NicholasJoseph,GregBrockman,AlexRay, engineering.arXivpreprintarXiv:1812.00140(2018). RaulPuri,GretchenKrueger,MichaelPetrov,HeidyKhlaaf,GirishSastry,Pamela [36] MattMiller.2019.Microsoft:70percentofallsecuritybugsarememorysafetyis- Mishkin,BrookeChan,ScottGray,NickRyder,MikhailPavlov,AletheaPower, sues.https://www.zdnet.com/article/microsoft-70-percent-of-all-security-bugs- LukaszKaiser,MohammadBavarian,ClemensWinter,PhilippeTillet,FelipePet- are-memory-safety-issues/. roskiSuch,DavidW.Cummings,MatthiasPlappert,FotiosChantzis,Elizabeth [37] MITRETop25CWEs2023. https://cwe.mitre.org/top25/archive/2023/2023_ Barnes,ArielHerbert-Voss,WilliamH.Guss,AlexNichol,IgorBabuschkin, top25_list.html S.ArunBalaji,ShantanuJain,AndrewCarr,JanLeike,JoshuaAchiam,Vedant [38] OWASPBenchmarkSuite2023. https://owasp.org/www-project-benchmark. Misra,EvanMorikawa,AlecRadford,MatthewM.Knight,MilesBrundage,Mira [39] BaptisteRozière,JonasGehring,FabianGloeckle,StenSootla,ItaiGat,Xiao- Murati,KatieMayer,PeterWelinder,BobMcGrew,DarioAmodei,SamMcCan- qingEllenTan,YossiAdi,JingyuLiu,TalRemez,JérémyRapin,etal.2023.Code dlish,IlyaSutskever,andWojciechZaremba.2021.EvaluatingLargeLanguage llama:Openfoundationmodelsforcode.arXivpreprintarXiv:2308.12950(2023). ModelsTrainedonCode.ArXivabs/2107.03374(2021). [40] LeonardSalewski,StephanAlaniz,IsabelRio-Torto,EricSchulz,andZeynep [9] YizhengChen,ZhoujieDing,LamyaAlowain,XinyunChen,andDavidA.Wag- Akata. 2024. In-Context Impersonation Reveals Large Language Models’ ner.2023.DiverseVul:ANewVulnerableSourceCodeDatasetforDeepLearning StrengthsandBiases. AdvancesinNeuralInformationProcessingSystems36 BasedVulnerabilityDetection.Proceedingsofthe26thInternationalSymposium (2024). onResearchinAttacks,IntrusionsandDefenses(2023). [41] Semgrep.2023.TheSemgrepPlatform.https://semgrep.dev/. [10] XiaoCheng,GuanqinZhang,HaoyuWang,andYuleiSui.2022.Path-sensitive [42] Semmle.2023.VulnerabilitiesdiscoveredbyCodeQL.https://securitylab.github. codeembeddingviacontrastivelearningforsoftwarevulnerabilitydetection. com/advisories/. Proceedingsofthe31stACMSIGSOFTInternationalSymposiumonSoftwareTesting [43] BenjaminSteenhoek,HongyangGao,andWeiLe.2024. DataflowAnalysis- andAnalysis(2022). InspiredDeepLearningforEfficientVulnerabilityDetection.InProceedingsof [11] CodeChecker2023. https://github.com/Ericsson/codechecker. the46thIEEE/ACMInternationalConferenceonSoftwareEngineering.1–13. [12] CPPCheck2023. https://cppcheck.sourceforge.io/. [44] ChandraThapa,SeungIckJang,MuhammadEjazAhmed,SeyitAhmetÇamtepe, [13] CVE-2022-36022022. https://nvd.nist.gov/vuln/detail/CVE-2022-3602. JosefPieprzyk,andSuryaNepal.2022. Transformer-BasedLanguageModels [14] CVE-2022-37862022. https://nvd.nist.gov/vuln/detail/CVE-2022-3786. forSoftwareVulnerabilityDetection.Proceedingsofthe38thAnnualComputer [15] DeepDFAartifact2024. https://github.com/ISU-PAAL/DeepDFA/tree/master. SecurityApplicationsConference(2022). [16] YinlinDeng,ChunqiuStevenXia,HaoranPeng,ChenyuanYang,andLingming [45] WenboWang,TienNNguyen,ShaohuaWang,YiLi,JiyuanZhang,andAashish Zhang.2023.Largelanguagemodelsarezero-shotfuzzers:Fuzzingdeep-learning Yadavally.2023.DeepVD:TowardClass-SeparationFeaturesforNeuralNetwork librariesvialargelanguagemodels.InProceedingsofthe32ndACMSIGSOFT VulnerabilityDetection.In2023IEEE/ACM45thInternationalConferenceon InternationalSymposiumonSoftwareTestingandAnalysis.423–435. SoftwareEngineering(ICSE). [17] JiahaoFan,YiLi,ShaohuaWang,andTienN.Nguyen.2020. AC/C++Code [46] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian VulnerabilityDatasetwithCodeChangesandCVESummaries.InProceedingsof Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, the17thInternationalConferenceonMiningSoftwareRepositories(Seoul,Republic EdHuaihsinChi,TatsunoriHashimoto,OriolVinyals,PercyLiang,JeffDean, ofKorea)(MSR’20).AssociationforComputingMachinery,NewYork,NY,USA, andWilliamFedus.2022.EmergentAbilitiesofLargeLanguageModels.Trans. 508–512. https://doi.org/10.1145/3379597.3387501 Mach.Learn.Res.2022(2022). [18] FbInfer2023. https://fbinfer.com/. [47] JasonWei,XuezhiWang,DaleSchuurmans,MaartenBosma,FeiXia,EdChi, |
[19] FlawFinder2023. https://dwheeler.com/flawfinder QuocVLe,DennyZhou,etal.2022.Chain-of-thoughtpromptingelicitsreason- [20] MichaelFuandChakkritTantithamthavorn.2022. LineVul:ATransformer- inginlargelanguagemodels.Advancesinneuralinformationprocessingsystems basedLine-LevelVulnerabilityPrediction.In2022IEEE/ACM19thInternational 35(2022),24824–24837. ConferenceonMiningSoftwareRepositories(MSR).IEEE. [48] ChunqiuStevenXia,YuxiangWei,andLingmingZhang.2023. Automated [21] FengjuanGao,YuWang,andKeWang.2023. DiscreteAdversarialAttackto programrepairintheeraoflargepre-trainedlanguagemodels.InProceedingsof ModelsofCode.7,PLDI(2023). https://doi.org/10.1145/3591227 the45thInternationalConferenceonSoftwareEngineering(ICSE2023).Association [22] GitHub.2023.TheBugSlayer. https://securitylab.github.com/bounties. forComputingMachinery. [23] DavidHin,AndreyKan,HuamingChen,andMuhammadAliBabar.2022.LineVD: [49] ChunqiuStevenXiaandLingmingZhang.2022.Lesstraining,morerepairing Statement-levelVulnerabilityDetectionusingGraphNeuralNetworks. 2022 please:revisitingautomatedprogramrepairviazero-shotlearning.InProceedings IEEE/ACM19thInternationalConferenceonMiningSoftwareRepositories(MSR) ofthe30thACMJointEuropeanSoftwareEngineeringConferenceandSymposium (2022). ontheFoundationsofSoftwareEngineering.959–971. [24] HuggingFace2023. https://huggingface.co/. [50] AidanZHYang,ClaireLeGoues,RubenMartins,andVincentHellendoorn.2024. [25] HarshitJoshi,JoséCambroneroSanchez,SumitGulwani,VuLe,GustVerbruggen, Largelanguagemodelsfortest-freefaultlocalization.InProceedingsofthe46th andIvanRadiček.2023.Repairisnearlygeneration:Multilingualprogramrepair IEEE/ACMInternationalConferenceonSoftwareEngineering.1–12. withllms.InProceedingsoftheAAAIConferenceonArtificialIntelligence. [51] JiyangZhang,PengyuNie,JunyiJessyLi,andMilosGligoric.2023. Multilin- [26] JulietC/C++2023. https://samate.nist.gov/SARD/test-suites/112. gualcodeco-evolutionusinglargelanguagemodels.InProceedingsofthe31st [27] JulietJava2023. https://samate.nist.gov/SARD/test-suites/111. ACMJointEuropeanSoftwareEngineeringConferenceandSymposiumonthe [28] CarolineLemieux,JeevanaPriyaInala,ShuvenduKLahiri,andSiddharthaSen. FoundationsofSoftwareEngineering.695–707. 2023. CODAMOSA:Escapingcoverageplateausintestgenerationwithpre- [52] YaqinZhou,ShangqingLiu,J.Siow,XiaoningDu,andYangLiu.2019. De- trainedlargelanguagemodels.InInternationalconferenceonsoftwareengineering vign:EffectiveVulnerabilityIdentificationbyLearningComprehensiveProgram SemanticsviaGraphNeuralNetworks.InNeuralInformationProcessingSystems.UnderstandingtheEffectivenessofLargeLanguageModelsinDetectingSecurityVulnerabilities A APPENDIX System: [Same as above] User: Is the following code snippet prone to <CWE>? A.1 DatasetProcessingandSelection <CODE_SNIPPET> Response: Weperformadataprocessingandcleaningstepforeachdataset beforeevaluatingthemwithLLMs. Listing5:CWE-specificLLMPrompt OWASP.WeremoveoranonymizeinformationinOWASPbench- marksthatmayprovideobvioushintsaboutthevulnerabilityina System: You are a security researcher, expert in detecting security vulnerabilities. Carefully analyze the given code snippet and track file.Forinstance,wechangepackage,variablenames,andstrings the data flows from various sources to sinks. Assume that any call suchas“owasp”,“testcode”,and“/sqli-06/BenchmarkTest02732” to an unknown external API is unsanitized. tootherpre-selectedun-identifyingnamessuchas“pcks”,“csdr”, Please provide a response only in the following itemized OUTPUT etc.Weremoveallcommentsinthefilebecausetheymayexplic- FORMAT. Use N/A in other fields if there are no vulnerabilities. DO itlyhighlightthevulnerablelineofcodeormayhaveirrelevant NOT INCLUDE ANYTHING ELSE IN YOUR RESPONSE. <OUTPUT FORMAT> text(suchascopyrightinfo),whichmayleakinformation.These Data flow analysis of the given code snippet: changes,however,donotchangethesemanticsofthecodesnippets. 1. Sources: <numbered list of input sources> JulietJavaandC/C++.SimilartoOWASP,weremoveallcom- 2. Sinks: mentsandtransformallidentifiersthatleakidentifyinginformation <numbered list of output sinks> inalltestcasesintheJulietbenchmark.Forinstance,wechange 3. Sanitizers: <numbered list of sanitizers, if any> “classCWE80_XSS_CWE182_Servlet_connect_tcp_01”to“classMy- 4. Unsanitized Data Flows: Class”.TheJulietbenchmarkprovidesthevulnerable(namedas <numbered list of data flows that are not sanitized in the format “bad”)andnon-vulnerable(namedas“good*”)methodsinthesame (source, sink, why this flow could be vulnerable)> 5. Final Vulnerability analysis verdict: vulnerability: <YES or NO> file. For easier evaluation, we perform a pre-processing step to | vulnerability type: <CWE_ID> | vulnerability name: <NAME_OF_CWE> spliteachfileintotwo,eachcontainingeitheravulnerableornon- | explanation: <explanation for prediction> </OUTPUT FORMAT> vulnerablemethod.Julietalsocontainsspecialbenchmarksthat |
User: Is the following code snippet prone to <CWE>? havedependenciesacrossmultiple(2-5)files.Weskipthesebench- <CODE_SNIPPET> marksbecausetheyaretypicallytoobigtofitintotheLLMprompt. Response: Hence,thenumberoftestcasesafterthedataprocessingstepin Listing6:Dataflowanalysis-basedLLMPrompt Julietisreduced(asshowninTable4). CVEFixes.ForeachCVE,CVEFixesprovidesthemethodsthat wereinvolvedinthefixcommit.Italsoincludesthemethodcode limitedexperimentalsetting.Forthefew-shotprompt(CWE-Few- intheparentcommit,i.e.,themethodversionbeforethefix.We shot),weincludedtwoexamplesofthetask(onewithavulnerability collectallmethodsinthefixcommitandtheparentcommitand andonewithout)intheCWEspecificpromptbeforeprovidingthe labelthemasvulnerableandnon-vulnerable,respectively.Similar targetcodesnippet.Forthechain-of-thoughtprompt(CWE-CoT), tootherdatasets,wealsoremoveallcommentsinthemethodcode. weexplicitlyaskthemodeltoprovideareasoningchainbeforethe WhileCVEFixescontainsmethodsacrossmultipleprogramming finalanswerbyaddinga“Let’sthinkstep-by-step”statementatthe languages,weonlycollectC/C++andJavamethodsforourstudy. endoftheCWEspecificprompt. Table8andTable9presenttheresultsfromGPT-4withvarious A.2 PromptingStrategies promptingstrategiesonarandomsubsetof100samplesofthe JulietJavaandCVEFixesC/C++datasetsrespectively.TheCWE- TheBasicpromptispresentedinListing4,CWEspecificpromptin DFpromptreportsthehighestaccuracyof69%andthehighest Listing5andDataflowanalysis-basedpromptinListing6. F1scoreof0.75ontheJulietJavadataset.TheCWE-DFprompt reportsa0.05higherF1scorethantheCWE-CoTpromptanda0.03 System: You are a security researcher, expert in detecting security vulnerabilities. higherF1scorethantheCWE-Few-shotprompt.Thisdifferenceis Provide response only in following format: vulnerability: <YES or muchmoreprominentontheCVEFixesC/C++datasetwherethe NO> | vulnerability type: <CWE ID> | vulnerability name: <CWE NAME> CWE-DFpromptreportsa0.34higherF1scorethantheCWE-CoT | explanation: <explanation for prediction>. Use N/A in other fields if there are no vulnerabilities. Do not promptanda0.31higherF1scorethantheCWE-Few-shotprompt. include anything else in response. Moreover,theCWE-Few-shotpromptreporteda0.2lowerF1score than the CWE specific prompt on the CVEFixes C/C++ dataset User: Is the following code snippet prone to any security vulnerability? whilerequiringmoretokens.Ouranalysisofthefew-shotprompts <CODE_SNIPPET> suggeststhatprovidingmoreexamplesmaynotbeausefulstrategy Response: forvulnerabilitydetection.Becausethepotentialsetofvulnerable Listing4:BasicLLMPrompt codepatternsisquitelarge,theprovidedexampleshardlymake adifferencetoLLMs’reasoningabilities.Hence,itmaybemore usefultousepromptsthatinsteadelicitreasoningorexplanations A.3 OtherPromptingStrategies ofsomekindbeforedetectingifthegivensnippetisvulnerable. Inadditiontothepromptingstrategiespresentedinourmaineval- The CWE-CoT prompt, however, does not help with reasoning uation, we considered other popular prompting strategies such always,asiteitherperformedatparorworsethantheDataflow asFew-shot promptingandChain-of-thought promptingina analysis-basedprompt.AvishreeKhare∗,SaikatDutta∗,ZiyangLi,AlaiaSolko-Breslin,RajeevAlur,andMayurNaik Table8:Allpromptingstrategieson100samplesfromJuliet Interestingly,CWE-78reportstheworstresultsontheothertwo Java. datasets(F1scoreof0.67withOWASPand0.03withJulietC/C++). Model Prompt Metrics // Code snippet prone to CWE-476 (NULL Pointer Dereference) A P R F1 // Both the conditions are evaluated due to the bitwise & operator GPT-4 CWE 0.65 0.58 0.96 0.72 GPT-4 CWE-Few-shot 0.65 0.58 0.94 0.72 if(globalReturnsTrue()) GPT-4 CWE-CoT 0.69 0.64 0.79 0.70 { GPT-4 CWE-DF 0.69 0.61 0.96 0.75 twoIntsStruct *twoIntsStructPointer = NULL; if ((twoIntsStructPointer != NULL) Table9:Allpromptingstrategieson100samplesfromCVE- & FixesC/C++. (twoIntsStructPointer->intOne == 5)) { Model Prompt Metrics printLine("intOne == 5"); } A P R F1 } GPT-4 CWE 0.55 0.54 0.58 0.56 GPT-4 CWE-Few-shot 0.49 0.38 0.34 0.36 GPT-4 CWE-CoT 0.52 0.37 0.30 0.33 GPT-4 CWE-DF 0.56 0.56 0.83 0.67 GPT-4’sresponse:ThepointertwoIntsStructPointeris initializedasNULLandthencheckedifit’snotNULLbefore dereferencing. However, the pointer is not assigned any Learningfromtheseexperiments,weselectedtheCWEspecific valueotherthanNULL,soit’snotactuallyariskofNULL prompt,Dataflowanalysis-basedprompt,inadditiontotheBasic pointerdereference.✗ prompt,forourmainevaluationwithLLMs. A.4 DetailedmetricsacrossallLLMsand Listing7:CodeQLcorrectlypredictsNullPointerDereferenc- Datasets ingwhileGPT-4doesn’t.GPT-4,however,correctlyidentifies |
Table10presentsthemetricsforallLLMsandDatasetsacrossall the sources and sinks. prompts. InListing7,themodelincorrectlyreasonsaboutthesanitization A.5 AdversarialAttackExamples byoverlookingthe&whichwouldcauseboththeexpressionsto beevaluated.Theidentifiedsourcesandsinksarecorrect,however. Table11:Illustrationofthreedifferentsemantics-preserving A.7 QualitativeanalysisofGPT-4responses adversarialattacks(ingray)onJavaprograms. Wefirstpresentexampleswherethedataflowanalysisfromthe Attack Example CWE-DFpromptisuseful.ConsiderthecodesnippetinListing8. Inthissnippet,thevariabledirisindirectlybeingusedtocreatea Dead-code int obj = new Object(...); Injection if(obj==null){/*deadcode*/} directoryviathedirToCreatevariable.GPT-4correctlyidentifies thatthispathisnotsanitizedandcouldbeusedtocreateadirectory Variable String var3 = (String) Renaming names.nextElement(); inotherwiserestrictedlocations.ThiscouldleadtoCWE-22(path String[] values = traversal)asisrightlypredictedbyGPT-4.InListing9,themethod request.getParameterValues(var3); writetakesanarraybandintegersoffsetandlengthasinputs. Branch Insertion booleanvar11=true; Theexpressionoffset + length > b.lengthcanleadtoanInteger if(var11){ configManager.init(); } Overflow(CWE-190)if offset + lengthexceedsthemaximum integervalueandthisiscorrectlyidentifiedbyGPT-4.Finally,in Table11presentsthethreeadversarialattacksthatweimple- Listing 10, there are multiple array accesses that could lead to mented: dead-codeinjection, variablerenaming,and branchin- CWE-125(Out-of-boundsread).GPT-4correctlyidentifiesthatthe sertion. Eachattack containsmultiplecode patterns,which are expressionciphertextOffset + length + indexcouldexceedthe randomlychosenwhenperturbingagivencodesnippet.Weimple- sizeoftheciphertextarray,leadingtothisvulnerability. menttheseattacksforJavaandC/C++.Foragivencodesnippet, weonlyapplyanattackonlyonce. // TARGET CWE: CWE-190 (Integer Overflow or Wraparound) // CODE SNIPPET A.6 MoreDetailsonCodeQLvs.GPT-4 public void write(byte[] b, int offset, int length) throws IOException { Table12presentstheCWE-wisedistributionofCodeQLresults. if (b == null) { CodeQLshowsrelativelybetterperformanceonJulietJava(with throw new NullPointerException(); anF1scoreover0.75acrossCWEs)thanJulietC/C++(withtwo } CWEsreportingF1scoresunder0.4).OnJulietJava,CodeQLre- if (offset < 0 || offset + length > b.length) { portsthehighestperformanceonCWE-78withanF1scoreof0.92. throw new ArrayIndexOutOfBoundsException();UnderstandingtheEffectivenessofLargeLanguageModelsinDetectingSecurityVulnerabilities Table10:EffectivenessofLLMsinPredictingSecurityVulnerabilities(JavaandC++).ThehighestaccuracyandF1scores(as wellasoneswithin0.1rangeofthehighestvalues)foreachdatasetarehighlightedinblue. Model Prompt OWASP JulietJava CVEFixesJava JulietC/C++ CVEFixesC/C++ A P R F1 A P R F1 A P R F1 A P R F1 A P R F1 GPT-4 Basic 0.52 0.51 1.00 0.68 0.56 0.54 0.85 0.66 0.50 0.50 0.34 0.41 0.54 0.52 0.92 0.67 0.51 0.50 0.57 0.54 GPT-4 CWE 0.54 0.52 1.00 0.69 0.69 0.63 0.97 0.76 0.55 0.56 0.44 0.50 0.58 0.54 0.95 0.69 0.52 0.52 0.52 0.52 GPT-4 CWE-DF 0.55 0.53 1.00 0.69 0.70 0.63 0.98 0.76 0.53 0.53 0.59 0.56 0.59 0.55 0.98 0.70 0.52 0.51 0.76 0.61 GPT-3.5 Basic 0.53 0.52 0.72 0.60 0.58 0.57 0.71 0.63 0.46 0.35 0.09 0.15 0.49 0.49 0.64 0.56 0.52 0.55 0.19 0.29 GPT-3.5 CWE 0.55 0.54 0.62 0.58 0.52 0.52 0.55 0.54 0.47 0.41 0.12 0.19 0.49 0.49 0.70 0.58 0.51 0.54 0.19 0.28 GPT-3.5 CWE-DF 0.51 0.51 0.93 0.66 0.40 0.44 0.73 0.55 0.54 0.53 0.66 0.59 0.40 0.44 0.77 0.56 0.52 0.52 0.75 0.61 CL-34B Basic 0.51 0.51 1.00 0.67 0.47 0.48 0.85 0.62 0.50 0.50 0.28 0.36 0.50 0.50 0.93 0.65 0.51 0.51 0.19 0.28 CL-34B CWE 0.57 0.54 0.94 0.69 0.49 0.49 0.94 0.65 0.50 0.51 0.16 0.25 0.53 0.52 0.98 0.68 0.51 0.54 0.08 0.14 CL-34B CWE-DF 0.50 0.50 1.00 0.67 0.50 0.50 1.00 0.67 0.50 0.50 1.00 0.67 0.50 0.50 1.00 0.67 0.50 0.50 1.00 0.67 CL-13B Basic 0.52 0.51 0.98 0.67 0.47 0.48 0.72 0.58 0.50 0.50 0.13 0.21 0.46 0.48 0.79 0.59 0.51 0.52 0.22 0.31 CL-13B CWE 0.52 0.51 0.98 0.67 0.50 0.50 0.89 0.64 0.48 0.47 0.29 0.36 0.53 0.51 0.98 0.67 0.52 0.52 0.56 0.54 CL-13B CWE-DF 0.50 0.50 1.00 0.67 0.50 0.50 1.00 0.67 0.50 0.50 1.00 0.67 0.50 0.50 1.00 0.67 0.50 0.50 0.96 0.66 |
CL-7B Basic 0.56 0.54 0.81 0.65 0.77 0.79 0.72 0.75 0.47 0.46 0.42 0.44 0.62 0.58 0.86 0.69 0.47 0.35 0.07 0.11 CL-7B CWE 0.50 0.50 1.00 0.67 0.52 0.51 0.99 0.67 0.51 0.51 0.84 0.63 0.51 0.50 0.99 0.67 0.50 0.50 0.84 0.63 CL-7B CWE-DF 0.50 0.50 1.00 0.67 0.50 0.50 1.00 0.67 0.50 0.50 1.00 0.67 0.50 0.50 1.00 0.67 0.51 0.50 0.98 0.66 Table12:CodeQLvsGPT-4(CWE-DF)CWE-wisecomparison E. Vulnerability analysis verdict: $$ vulnerability: YES | vulnerability type: CWE-190 | vulnerability name: Integer Overflow or Wraparound | explanation: The code does not Dataset CWE CodeQL GPT-4(CWE-DF) properly handle the case where 'offset + length' could overflow. If ’offset’ and ’length’ are both very large, A P R F1 A P R F1 their sum could wrap around to a negative value, passing the OWASP 22 0.52 0.50 1.00 0.67 0.48 0.48 1.00 0.64 bounds check and leading to an ArrayIndexOutOfBoundsException OWASP 78 0.56 0.53 0.77 0.63 0.48 0.48 1.00 0.65 at runtime. $$' OWASP 89 0.57 0.54 1.00 0.70 0.52 0.52 1.00 0.68 OWASP 79 0.79 0.71 1.00 0.83 0.65 0.59 1.00 0.74 Listing 9: GPT-4 with CWE-DF prompt on a real-world JulietJava 78 0.94 0.86 1.00 0.92 0.57 0.47 1.00 0.64 IntegerOverflowvulnerability JulietJava 89 0.88 0.85 1.00 0.92 0.69 0.68 1.00 0.81 JulietJava 476 0.76 0.76 0.76 0.76 0.79 0.75 0.88 0.81 JulietJava 190 0.97 1.00 0.92 0.96 0.71 0.62 0.97 0.75 JulietC/C++ 78 0.52 1.00 0.04 0.08 0.50 0.50 1.00 0.67 // TARGET CWE: CWE-125 (Out-of-bounds Read) JulietC/C++ 476 0.80 0.95 0.75 0.84 0.71 0.75 0.86 0.80 JulietC/C++ 416 0.63 1.00 0.26 0.41 0.52 0.51 0.97 0.67 // CODE SNIPPET JulietC/C++ 190 0.93 0.99 0.87 0.93 0.68 0.60 0.98 0.74 public int encryptWithAd(byte[] ad, byte[] plaintext, int plaintextOffset, byte[] ciphertext, int ciphertextOffset, int length) throws ShortBufferException { int space; if (ciphertextOffset > ciphertext.length) space = 0; else space = ciphertext.length - ciphertextOffset; } if (!haskey) { if (length > space) write(fd, b, offset, length); throw new ShortBufferException(); } if (plaintext != ciphertext || plaintextOffset != ciphertextOffset) // RESPONSE WITH DATAFLOW ANALYSIS INSPIRED PROMPT System.arraycopy(plaintext, plaintextOffset, Here is a data flow analysis of the given code snippet: ciphertext, ciphertextOffset, length); A. Sources: return length; 1. The byte array 'b' } 2. The integer 'offset' if (space < 16 || length > (space - 16)) 3. The integer 'length' throw new ShortBufferException(); setup(ad); B. Sinks: encryptCTR(plaintext, plaintextOffset, ciphertext, 1. The 'write' function ciphertextOffset, length); ghash.update(ciphertext, ciphertextOffset, length); C. Sanitizers: ghash.pad(ad != null ? ad.length : 0, length); 1. The condition 'if (offset < 0 || offset + length > b.length)' ghash.finish(ciphertext, ciphertextOffset + length, 16); for (int index = 0; index < 16; ++index) D. Unsanitized Data Flows: ciphertext[ciphertextOffset + length + index] ^= 1. (offset, write, The offset value is directly used in the write hashKey[index]; function without any sanitization) return length + 16; 2. (length, write, The length value is directly used in the write } function without any sanitization) // RESPONSE WITH DATAFLOW ANALYSIS INSPIRED PROMPTAvishreeKhare∗,SaikatDutta∗,ZiyangLi,AlaiaSolko-Breslin,RajeevAlur,andMayurNaik C. Sanitizers: // CVE-2021-39180 1. The `if (ciphertextOffset > ciphertext.length)` check 2. The `if (length > space)` check // CODE SNIPPET 3. The `if (space < 16 || length > (space - 16))` check public FileVisitResult preVisitDirectory(Path dir, D. Unsanitized Data Flows: BasicFileAttributes attrs) throws IOException { 1. (plaintext, System.arraycopy, if plaintext is larger than the Path relativeDir = source.relativize(dir); destination array, it could lead to an out-of-bounds read) final Path dirToCreate = Paths.get(destDir.toString(), 2. (ciphertext, ghash.finish, if ciphertextOffset + length + 16 is relativeDir.toString()); larger than the size of ciphertext, it could lead to an out- if(!dirToCreate.toFile().exists()) { of-bounds read) Files.createDirectory(dirToCreate); } E. Vulnerability analysis verdict: vulnerability: YES | return FileVisitResult.CONTINUE; vulnerability type: CWE-125 | vulnerability name: Out-of- } bounds Read | explanation: The code does not properly check the boundaries of the ‘ciphertext‘ array before reading from // RESPONSE WITH DATAFLOW ANALYSIS INSPIRED PROMPT it, which could lead to an out-of-bounds read if ` |
Here is a data flow analysis of the given code snippet: ciphertextOffset + length + 16` is larger than the size of A. Sources: the `ciphertext` array. 1. The `dir` parameter in the `preVisitDirectory` method. 2. The `source` variable. Listing10:GPT-4withCWE-DFpromptonareal-worldOut 3. The `destDir` variable. ofBoundsvulnerability B. Sinks: 1. The `Files.createDirectory` method. A.8 CWE-125(Out-of-boundsReadinJavavs. C. Sanitizers: No sanitizers are present in the code snippet. C++) D. Unsanitized Data Flows: Listing10presentsanexampleofOut-of-boundsreadvulnerability 1. (dir, Files.createDirectory, The `dir` parameter is used to inCVEFixesJavawhichGPT-4predictscorrectly.Codellama-34B create a new directory without any sanitization or validation. If an attacker can control the `dir` parameter, they could alsocorrectlypredictsthatthissnippetisvulnerable.Listing11 potentially create directories in unintended locations.) presentsanexamplefromCVEFixesC/C++whereOut-of-bounds 2. (source, Files.createDirectory, The `source` variable is used to readvulnerabilityisnotpossiblebutthisisnotevidentfromcontext. create a relative path without any sanitization or validation. If an attacker can control the `source` variable, Codellama-34B’sexplanationseemsplausiblegiventhatlittleis they could potentially manipulate the path creation process knownaboutthepointersinquestion. .) 3. (destDir, Files.createDirectory, The `destDir` variable is used // CODE SNIPPET NOT VULNERABLE TO CWE-125 (Out-of-bounds Read) to create a new directory without any sanitization or // The pointers cannot be reasoned about with this snippet alone validation. If an attacker can control the `destDir` variable, they could potentially create directories in unintended snmp_ber_decode_length(snmp_packet_t *snmp_packet, uint8_t *length) locations.) { if(snmp_packet->used == 0) { E. Vulnerability analysis verdict: $$ vulnerability: YES | return 0; vulnerability type: CWE-22 | vulnerability name: Improper } Limitation of a Pathname to a Restricted Directory ('Path Traversal') | explanation: *length = *snmp_packet->in++; The code does not validate or sanitize the ‘dir‘, ‘source‘, snmp_packet->used--; and ‘destDir‘ variables before using them to create a new directory. This could allow an attacker to manipulate the path and return 1; create directories in unintended locations, leading to a path } traversal vulnerability. $$ // CODELLAMA-34B WITH THE DATAFLOW ANALYSIS BASED PROMPT Listing 8: Correct response by GPT-4 on a Path Traversal vulnerability Here is a data flow analysis of the given code snippet: A. Sources: 1. snmp_packet->in Here is a data flow analysis of the given code snippet: 2. snmp_packet->used A. Sources: 1. The `plaintext` array B. Sinks: 2. The `ciphertext` array 3. The `ad` array 1. *length 4. The `plaintextOffset` integer 2. snmp_packet->used 5. The `ciphertextOffset` integer 6. The `length` integer C. Sanitizers: B. Sinks: 1. None 1. The `System.arraycopy` method 2. The `encryptCTR` method D. Unsanitized Data Flows: 3. The `ghash.update` method 4. The `ghash.pad` method 1. (snmp_packet->in, *length, This flow could be vulnerable to Out- 5. The `ghash.finish` method of-bounds Read (CWE-125) if the value of snmp_packet->in is 6. The `ciphertext` array not properly validated and could be pointing to an invalid memory location.)UnderstandingtheEffectivenessofLargeLanguageModelsinDetectingSecurityVulnerabilities 2. (snmp_packet->used, snmp_packet->used--, This flow could be vulnerable to Out-of-bounds Read (CWE-125) if the value of snmp_packet->used is not properly validated and could be pointing to an invalid memory location.) E. Vulnerability analysis verdict: $$ vulnerability: YES | vulnerability type: CWE-125 | vulnerability name: Out-of- bounds Read | explanation: The code snippet does not properly validate the input values, which could lead to an Out-of- bounds Read vulnerability. $$ Listing11:Codellama-34BwithCWE-DFpromptonareal- worldOutofBoundsvulnerability(incorrect) |
2312.00024 Can LLMs Patch Security Issues? KamelAlrashedy,AbdullahAljasser, PradyumnaTambwekar,MatthewGombolay GeorgiaInstituteofTechnology,GA,USA {kalrashedy3,aaljasser3,ptambwekar3}@gatech.edu {matthew.gombolay}@cc.gatech.edu Abstract LargeLanguageModels(LLMs)haveshown impressiveproficiencyincodegeneration. Un- fortunately, these models share a weakness withtheirhumancounterparts: producingcode thatinadvertentlyhassecurityvulnerabilities. Thesevulnerabilitiescouldallowunauthorized attackers to access sensitive data or systems, whichisunacceptableforsafety-criticalappli- cations. In this work, we propose Feedback- Driven Security Patching (FDSP), wherein LLMs automatically refine vulnerable gener- Figure1:Overviewofourapproach:Initially,theLLMs atedcode. Ourapproachleveragesautomatic generatecode. Thiscodeissubsequentlyanalyzedfor static code analysis to empower the LLM to security vulnerabilities using Bandit, a tool for static generateandimplementpotentialsolutionsto codeanalysis,toidentifypotentialsecurityissues(see address vulnerabilities. We address the re- Figure2). Theidentifiedpotentialsecuritythreatsare search community’s needs for safe code gen- thenincorporatedintotheLLMstogeneratepossible eration by introducing a large-scale dataset, solutions for resolving these security issues. Finally, PythonSecurityEval,coveringthediversityof each proposed solution is sent back to the LLMs for real-world applications, including databases, coderefinement. websites, and operating systems. We empir- ically validate that FDSP outperforms prior work that uses self-feedback from LLMs by posedaself-debuggingtechnique,whereinLLMs upto17.6%throughourprocedurethatinjects generatecode,whichisthensentbacktothesame targeted,externalfeedback. Codeanddataare LLMsforevaluationandfurtherrefinement. availableathttps://github.com/Kamel773/ LLM-code-refine However, code generated by LLMs could in- clude security vulnerabilities. Vulnerabilities in 1 Introduction codemayallowunauthorizeduserstoaccesssensi- Although Large Language Models (LLMs), such tivedataorsystems. Forexample,attackerscanma- as GPT-4 (Brown et al., 2020) and CodeLlama nipulateSQLqueriestogainaccesstoadatabase, (Rozièreetal.,2023),arepowerfultoolsforcode atechniqueknownasSQLinjection. Particularly, generation,theyarepronetogeneratingvulnerable whenthecodeinteractswithexternalservicesand code(Pearceetal.,2023). LLMshaveshownhigh- systems, LLMs might struggle to recognize and competencyforawidevarietyofcodegeneration fixsecurityissuesincodeduetotheirlimitedun- tasks,suchasforproducingcodefromnaturallan- derstandingofsecurityvulnerabilitiesandlackof guage(Yuetal.,2018),codetranslation(Lachaux specific security knowledge (Athiwaratkun et al., etal.,2020),andcodeoptimization(Shypulaetal., 2023;Siddiqetal.,2023). 2023). Utilizing LLMs for code generation has One potential approach to mitigate these secu- been shown to increase developers’ productivity rity vulnerabilities is to train LLMs to recognize withwritingandexplainingcode,andfixingbugs and patch security vulnerabilities. This method (Wongetal.,2023). Toenhancecoderefinement also has several significant challenges. Firstly, it withLLMs,recentworkbyChenetal.(2023)pro- requires a large human-labeled dataset that accu- 4202 tcO 61 ]RC.sc[ 5v42000.2132:viXrarately distinguishes between vulnerabilities and and CodeLlama, in identifying and refining non-vulnerabilities to train the LLMs effectively. insecure code. We utilize three benchmarks Collecting such a human-labeled dataset is both (includingours)andapplyfivebaselinetech- costly and time-consuming. Additionally, there niquesforthisevaluation. isacriticalneedforrobustfeedbackmechanisms during the training process. LLMs require accu- 2 Relatedwork ratefeedbacktolearnfromtheirmistakes,andthis feedbackmustcomefromsecurityexperts. Wediscusstwocategoriesofpreviouswork: LLMs Inthispaper,weaddressthekeylimitationsof forcodegenerationandrefinementofLLMs. priorworkbydevelopingFeedback-DrivenSecu- Language models for code: Code genera- rityPatching(FDSP)andthePythonSecurityEval tionmodelshavebecomeapopularresearcharea benchmark. In FDSP, LLMs generate potential amongMachineLearning(ML)andSoftwareEn- solutions to ameliorate the security threats in the gineering (SE) communities. The most common generated code. This process involves analyzing applicationofcodegenerationmodelsisthetext- the generated code through static code analysis to-codegenerationtask,whereinuserspromptan to identify potential security threats and produce LLM with natural language instructions to com- feedback. LLMsutilizethisfeedbacktogenerate pleteacodingtask,andtheLLMgeneratesthecor- potential solutions, and then each potential solu- respondingcode. Examplesofthetext-to-codegen- tionsolution,anditscorrespondinggeneratedcode, erationincludeCodeLlama (Rozièreetal.,2023) are returned to the LLMs to for further code re- and CodeGeeX (Zheng et al., 2023). All three finement. Next,wecurateanextensivebenchmark achievestate-of-the-artperformanceontheMostly fromStackOverflow, calledPythonSecurityEval, Basic Programming Problems (MBPP) dataset as existing security evaluation datasets are quite (Austinetal.,2021). TheDocPromptingapproach |
limitedandinsufficienttoevaluateamodel’sabil- furtherdemonstratesthatpromptinglanguagemod- ity to produce non-vulnerable code. Our dataset els with code documentation improves code gen- originatesfromreal-worldapplications,providing eration performance on models such as CodeT5, diversitywithpromptstogeneratecodeforavari- CodeX, and GPT-Neo on MBPP (Zhou et al., etyofapplications,includingdatabases(e.g.,SQL, 2023). Beyond code generation, LLMs are also MySQL),URLs,operatingsystems,andwebsites capableofcodetranslation(Roziereetal.,2020), (e.g., Flask). We consider these types of applica- code repair (Allamanis et al., 2021), code docu- tionsasprimarysourcesofcommonvulnerabilities, mentation(Nametal.,2024),codetesting(Wang includingSQLinjection,cross-sitescripting(XSS), etal.,2024)anddefectprediction(Alrashedyetal., brokenaccesscontrol,andcommandinjection. 2023). Our interest lies in exploring how these Tosummarize,ourworkpresentsthreekeycon- various capabilities demonstrated by LLMs can tributions: alsobeappliedtoaddresssecurityissuesinLLM- generatedcode. • WeproposeFeedback-DrivenSecurityPatch- Refinement of LLMs: Recent studies have ing(FDSP),atechniquethatenhancesLLMs demonstratedthatLLMscanrefinetheirownout- capabilitiestogeneratepotentialsolutionsfor put and adapt to feedback from external tools or repairingsecurityissuesinthegeneratedcode human input. Self-Refine (Madaan et al., 2023) by receiving feedback from (Bandit) static generates feedback and refines its output to im- codeanalysis. provethequalityofthegeneratedanswersacross 7tasksusingstate-of-the-artmodelssuchasGPT- • WepresentPythonSecurityEval,adatasetcon- 3.5andGPT-4. Additionally,asimilartechnique sisting of 470 natural language prompts de- calledSelf-Debugging(Chenetal.,2023)enables signedtoevaluatetheabilityofLLMstogen- code generation models to debug initially gener- eratesecurecode. atedcodeusingfeedbackfromthesameLLM,unit • WedemonstratethatFDSPoutperformsprior testresults,orcompilererrormessages. Thefeed- worksthatuseself-feedbackbyupto17.6% backfromtheLLMexplainsthecodeline-by-line, on the PythonSecurityEval dataset. We em- which is then used to refine the generated code. piricallyevaluatethecapabilitiesofthemost This approach has shown improvement in three advancedLLMs,includingGPT-4,GPT-3.5, different code generation applications. An alter-nateapproach,calledSelf-Repair (Olaussonetal., Algorithm1FDSPalgorithm 2023), seeks to produce feedback specifically fo- Require: Inputx,LLMsP ,numberofpoten- LM cusing on why any faulty code snippet is incor- tialsolutionsJ,numberofiterationsK rect. Anotherstudy(Gouetal.,2023)introduced Ensure: Refinevulnerablecodey fromtheLLMs CRITIC,whichenablesthemodeltoengagewith P (y |x) LM i externaltoolssuchasacodeinterpreter,calculator, 1: Initializeoutputy i fromP LM(x) andsearchenginetoreceivefeedbackandimprove 2: //Generatepotentialsolutions(Eqn. 4) the generated output. In our work, we build on 3: S ∼ P LM(y,ℜ,j,p) these self-refinement methods towards enabling 4: //Iterationforeachpotentialsolution(Eqn. 5) largelanguagemodelstofixsecurityissuesingen- 5: fors ∈ S do eratedcode. 6: fork ← 1toK do The feedback from the aforementioned meth- 7: y i ← P LM(y,s) odscomefrommultiplesourcesincludinghuman 8: ifδ(y i)issecurethen▷Stopcondition feedback,externaltools,orenvironments. Human 9: Returny i feedbackisthemosteffectiveandaccuratesource 10: endif of feedback; however, it is also costly and time- 11: endfor intensive(Elgoharyetal.,2021;YuntaoBai,2023). 12: endfor Analternativewaytoobtainfeedbackisfromex- 13: Return y ternal tools such as compiler error messages for program repair (Yasunaga and Liang, 2020) and Pylint,astaticcodeanalyzer,forimprovingPython gramy isexecuted. Ifthereiscompilererrormes- codingstandards(Bafatakisetal.,2019). Addition- sage,wesendtheprogramywith{e }totheLLMs ally,previousstudieshaveproposedtechniquesto c tofixtheerror,asdescribeinEquation2. Theini- obtainfeedbackfromLLMs,includingtheLLM- tialgeneratedcodecanbedescribedasfollows: Augmentersystem(Pengetal.,2023)andRecur- siveRepromptingandRevisionframework(Yang y ∼ P (y |x) (1) et al., 2022). Whereas the techniques described i LM i aboveutilizefeedbackreceivesfromLLMsorex- ternal tools, our approach combines the strength yc ∼ P (yc|x,y ,e ) (2) LM i c of bother approaches by incorporating feedback from both external tools and LLMs, wherein the Weconsiderazero-shotsettingbecause,inreal- static code analysis provides feedback regarding world use cases, users prompt LLMs to generate thegeneratedcode,whichLLMsutilizetogenerate codedirectlywithoutprovidingexamples. Weas- potentialsolutionsforaddressingsecuritythreats sumedthattheLLMwouldgeneratesecurecode inthecode. bydefault. 3.2 Codetesting 3 OurApproach Staticcodeanalysistoolsareutilizedbysoftware Ourapproach,FDSP,bothidentifiesandresolves engineers to evaluate the quality of the code and vulnerabilitiesincodegeneratedbyanLLM.The identifyanypotentialvulnerabilities. WeuseBan- |
principal component of FDSP is the use of static dit,1 astaticcodeanalysistooldesignedtodetect codeanalysis(Bandit)togeneratesolutionstopo- commonvulnerabilitiesinPythonfunctions. Ban- tentially vulnerable code. We utilize a four-step ditconstructstheAbstractSyntaxTree2 (AST)for approach: (i) code generation, (ii) code testing, a Python function and conducts analysis on the (iii)solutiongenerationand(iv)coderefinement. AST nodes. Subsequently, Bandit, denoted by δ, The complete algorithm for FDSP is provided in generatesareportℜaboutthecodey (seeFigure Algorithm1. 2). Then,wepassthereportℜfromBandittothe LLMstogeneratepotentialsolutionstofixthevul- 3.1 Codegeneration nerabilities. WecandescribetheBanditreportas GivenanaturallanguagedescriptionofaPython follows: functiondenotedasx,anLLMgeneratesaPython 1https://github.com/PyCQA/bandit programy accordingtoP LM(y|x). Next,thepro- 2https://greentreesnakes.readthedocs.io/en/latest/1 # The prompt: Write a Python function AnexampleofBanditreport. to return the total number of rows in SQLite. Issue: [B608:hardcoded_sql_expressions] 2 Possible SQL injection vector through 3 import sqlite3 4 def get_total_rows(database_name, string-basedqueryconstruction. table_name): Line7:cursor.execute("SELECTCOUNT(*) 5 conn = sqlite3.connect( FROM".format(table_name)) database_name) 6 cursor = conn.cursor() 7 cursor.execute("SELECT COUNT(*) FROM {}".format(table_name)) Figure2: AnexampleofthereportgeneratedbyBandit, 8 result = cursor.fetchone() a static code analysis tool, for the vulnerable code in 9 conn.close() CodeSnippet1. 10 return result[0] Code Snippet 1: An example of a prompt from PythonSecurityEval dataset where GPT-4 generates whentheBanditdetectsnosecurityissuesorwhen vulnerablecodeoftheSQLinjectiontype. Thereport theiterationreachesthemaximumnumberofitera- fromBanditisshowninFigure2. tions,K,forallpotentialsolutionsS,asshownin Algorithm1. ℜ = δ(y) (3) y ∼ {{P (y |y ,s )}j }k (5) i+k LM i+k i+k−1 n n=1 i=1 3.3 Generatingpotentialsolutions 4 ExperimentalSettings Staticcodeanalysiscandetectvulnerabilities;how- Inthissection,wediscusstheexperimentalsetup ever,itcannotfixthem. Theprimarycontribution usedtoevaluatetheeffectivenessofourproposed ofFDSPistoleveragestaticcodeanalysistoenable approach,FDSP. LLMstogenerateorsuggestmultiplesolutionsS toaddressthesevulnerabilities, asrepresentedin 4.1 Benchmarks Equation4. Existingbenchmarks,LLMSecEvalandSecurityE- val,areinsufficientforlarge-scaleevaluationdue S ∼ P (y,ℜ,j,p) (4) LM totheirlimitedsizeanddiversity(seeTable1). To The variable J represents the total number of addressthislimitation,weintroducePythonSecuri- generated potential solutions, and the p is the in- tyEval,comprising470naturallanguageprompts struction given to the LLMs. Figure 1 illustrates fordiversereal-worldapplications,collectedfrom thisprocess: theLLMgeneratescodevulnerable StackOverflow. WeutilizePythonSecurityEvalto toSQLinjection, we thenprovidetheLLM with compareFDSPwithexistingstrategiesforfixing report from Bandit, and subsequently, the LLM securityissues. generates three different potential solutions: 1) • LLMSecEval: This dataset contains natu- UseParameterizedQueries,2)ManualEscapeand ral language prompts to evaluate LLMs on Quote Table Name, and 3) Use an ORM (Object- generating secure source code (Tony et al., RelationalMapping)Library(seethefullexample 2023). LLMSecEvaliscomprisedof150to- inFigure7). talprompts(naturallanguagedescriptionsof code), covering the majority of the top 25 3.4 Coderefinement CommonWeaknessEnumeration(CWE). We use the generated potential solutions S ∼ s ⊕ s ... ⊕ s from the previous step as feed- • SecurityEval: This dataset, proposed by 1 2 j backtofixthevulnerablecodey. Intheprevious Siddiq and Santos (2022), is used to evalu- steptheLLMgeneratesJ uniquesolutionstofix ateLLMsontheirabilitytogeneratesecure thesecurityissues. Then, eachpotential solution Pythonprograms. SecurityEvalcomprises121 andvulnerablecodeisfedbackintotheLLMmul- naturallanguagepromptscovering75differ- tipletimes,denotedasK. TheideabehindtheK enttypesofvulnerabilities. Eachpromptin- iterationsforeachsolutionistoallowtheLLMto cludestheheaderofaPythonfunctionalong generatesasmanyfixesaspossibleforthevulner- withcommentsdescribingthepurposeofeach able code. The refinement process is terminated function.PythonSecurityEval LLMSecEval SecurityEval 4.2 Baselines Domain (Ours) Weconsiderfourbaselinerefinementapproaches, Computation 168(35.7%) 44(29.5%) 32(26.4%) System 313(66.6%) 94(63.1%) 68(56.2%) whichareasfollows: Network 147(31.3%) 63(42.3%) 29(24.0%) Cryptography 29(6.2%) 8(5.4%) 16(13.2%) General 414(88.1%) 128(85.9%) 118(97.5%) I. Direct prompting: This approach involves Database 114(24.3%) 23(15.4%) 6(5.0%) sendinggeneratedcodebacktoanLLMwith WebFrameworks 43(9.1%) 46(30.9%) 8(6.6%) the instruction: Does the provided function Total 470 150 121 have a security issue? If yes, please fix the Table1: Comparisonofdomaindiversitystatisticsfor issue. If LLMs detect any security issues in |
thedistributionoffunctionsinPythonSecurityEvaland thecode,theywillfixtheissueandgenerate priorbenchmarks. Thetableshowsthetotalnumberof securecode. functions identified per domain, along with their per- centage distributions. Note that some functions are II. Self-Debugging: The initial step in self- associatedwithmultipledomains(seeTable4). debuggingisforLLMstogeneratethecode. Subsequently,thegeneratedcodeissentback to the same LLMs to generate feedback. Fi- • PythonSecurityEval(Ours): Wecollecteda nally, both the generated code and the feed- new benchmark from Stack Overflow to ad- backarefedbacktotheLLMtocorrectany dress the limitations of the existing datasets. existingbugs. Current datasets are limited in size and di- versity;ergoareinsufficientinevaluatingthe III. Bandit feedback: We develop this baseline ability of LLMs to generate secure code ad- thatutilizesBandittoproduceareportifthere equately addressing security vulnerabilities. areanysecurityissuesinthecode,asshown PythonSecurityEvalincludesnaturallanguage inFigure2. Weusethisreportasfeedbackto prompts intended to generate Python func- enabletheLLMtorefinethevulnerablecode. tions that cover diverse real-world applica- This strategy is similar to prior approached tions. Thisdatasetconsistingof470prompts whereinexternaltoolsprovidefeedbacktothe isthreetimeslargerthanthoseusedinLLM- LLM to refine its outputs (Gao et al., 2023; SecEvalandSecurityEval. Akyüreketal.,2023). Banditfeedbackdoes notprovideasolutiontofixtheissue;itsimply highlights the problematic line and type of issue. PythonSecurityEval is a diverse and extensive benchmark,coveringthemajorityofreal-worldap- IV. Verbalization: We verbalize the feedback plicationsthatconsidertheprimarysourcesofcom- from Bandit, via an LLM, to produce more mon vulnerabilities. For example, SQL injection understandableandactionablefeedbacktore- occurswhenPythoncodeconnectsto,insertsinto, solvesecurityissuesanddefectivecode. The and queries from a SQL database. There are sev- verbalizedfeedbackprovidesadetailedexpla- eralexamplesinourbenchmarkwheretheprompt nationinnaturallanguageofthespecialized involveswritingPythoncodetoinsertavalueinto output from Bandit (see Figure 6). This ex- an SQL database. Another example is command pandedexplanationoffersdeeperinsightsinto injection,wherethePythoncodeinteractsdirectly thesecurityissuesandmaysuggestsolutions with the operating system, allowing attackers to toaddressthevulnerabilities. gainunauthorizedaccesstodataandsystems. We 4.3 Evaluationmetrics includeexamplesthatgeneratePythoncodewith accesstotheoperatingsystemanddiagnosehow This paper evaluates the ability of LLMs to gen- the LLMs generate the code without any issues eratevulnerablecodeandsubsequentlycorrectse- ortheabilitytorefinethecode. Lastly,cross-site curity issues identified in the code. We evaluate scripting(XSS)isacommontypeofsecurityvul- the accuracy of generated and refined vulnerable nerabilitythatoccursinwebapplications. Wein- code by calculating the ratio of vulnerable code cludepromptsthatgeneratePythoncodeforFlask, producedtothetotalamountofgeneratedcode. To whichisaPythonwebframeworkforcreatingweb- verifywhetherthegeneratedcodecontainsvulner- sites(seeTable1). abilities,weutilizetwoevaluationmetrics:I Bandit: Astaticcodeanalysistooldesigned 5 ExperimentalResults toidentifyvulnerabilitiesinPythoncode. Inthissection,wepresenttheempiricalevaluations II CodeQL:Anopen-sourcecodebaseutilized pertainingtoeachresearchquestionandreportthe todiscoverthesimilarityofvulnerabilitypat- resultsininTable2. terns.3 LLMsaresomewhateffectiveatrefiningvul- nerablecodeontheirown(RQ1). Directprompt- 4.4 Models ing and self-debugging are refinement methods Weaimtoevaluatestate-of-the-artLLMsforcode where LLMs refine their generated code without generation, including GPT-4, GPT-3.5 “gpt-3.5- feedback from external tools. Across the three turbo-instruct" (Brown et al., 2020) using Ope- datasets,Banditdetectsbetween28%and46%of nAIAPI,andCodeLlama“CodeLLama-Instruct- the generated code as vulnerable, while CodeQL 34B"(Rozière et al., 2023) from Huggingface, to identifiesbetween9.1%and17%. InBandit’seval- generatesecurecode. Additionally,weevaluatethe uation,directpromptingandself-debuggingresult abilityofLLMstorefineinsecurecodeusingfour inmodestvulnerabilityreductions,withimprove- baseline approaches (see §4.2) and our proposed ments of less than 10% for GPT-3.5 and CodeL- approach,FDSP. lama, andaround15%forGPT-4. Thisindicates thatLLMscanintrinsicallygeneratefeedbacktore- 4.5 ResearchQuestions finetheirvulnerablecode,thoughtheimprovement is limited. CodeQL detects fewer vulnerabilities Thispaperexploresfourresearchquestions,regard- thanBandit,butresultsindicatethatdirectprompt- ingthecapacityofLLMsindetectingandrefining ingandself-debuggingfixapproximately50%of vulnerablecode. thevulnerabilities. Bandit-basedfeedbackisbeneficialtowards RQ1. What is the fundamental capability of correctingsecurityvulnerabilitiesingenerated LLMsinrefiningsecurityvulnerabilities? code (RQ2). Integrating Bandit feedback into This question aims to determine how effec- LLMsenhancestheirabilitytoaddresssecurityvul- tively LLMs can inherently correct insecure nerabilities,asevidencedbynotableimprovements code and highlight their limitations without |
inBanditevaluationsandmodestgainsinCodeQL incorporatingexternalfeedback. evaluations. In contrast, approaches that exclude Bandit’sfeedbackarelesseffective. Whilesimple RQ2. HowdoesBanditfeedbackaffecttheability strategieslikedirectpromptingandself-debugging ofLLMstorefinecodevulnerabilities? This canaddressbasicsecurityissues,theyaregenerally questionexamineshoweffectivelytheLLMs insufficientformorecomplexvulnerabilities. As incorporatefeedbackprovidedbyprovidedby shown in Table 2, methods utilizing Bandit feed- Bandit,astaticcodeanalysistool. backconsistentlyoutperformsimplertechniques, improvingaccuracyacrossallmodelsanddatasets. RQ3. How does FDSP improve LLM perfor- Specifically,LLMsincorporatingBanditfeedback manceinfixingcodevulnerabilities? This provides approximately a 30% improvement for question aims to assess how well the LLMs GPT-4anduptoa24%improvementforGPT-3.5 generatemultiplepotentialsolutionsanditer- andCodeLlamabasedonBanditevaluations. Ad- ateovereachonetorefinevulnerabilities. ditionally, verbalizingBandit’sfeedbackyieldsa slightincreaseof1%to2%inbothevaluationmet- RQ4. Howimportantarethemultiplegenerated rics. solutionsanditerationsofFDSP?Wecon- FDSPshowsconsistentimprovementoverthe duct ablation studies to isolate these factors baseline(RQ3). FDSPbooststheLLMabilityto by restricting FDSP to a single solution or generatepotentialsolutionsbasedonfeedbackpro- iteration. This analysis reveals whether the videdbyBandit. OurFDSPapproachenhancesthe diversityofgeneratedsolutionsanditerative performanceofGPT-3.5andCodaLlama,exceed- refinementcontributetoFDSPeffectiveness. ing the results achieved by either directly incor- porating Bandit’s feedback or verbalizing it. For 3https://codeql.github.com/ PythonSecurityEval, FDSP shows consistent im-Dataset Models GPT4 GPT3.5 CodeLlama Evaluationmetrics Bandit CodeQL Bandit CodeQL Bandit CodeQL Generatedcode 38.2% 10.1% 34.2% 18.1% 28.6% 20.7% Directprompting 35.3%(↓2.6) 4.7%(↓5.4) 28.0%(↓6.0) 7.4%(↓10.7) 24.0%(↓4.6) 11.6%(↓9.1) LLMSecEval Self-debugging 24.0%(↓14.0) 7.4%(↓2.7) 28.0%(↓6.0) 8.7%(↓9.4) 24.6%(↓4.0) 15.7%(↓5.0) Banditfeedback 8.0%(↓30.0) 5.4%(↓4.7) 18.6%(↓15.3) 8.7%(↓9.4) 18.0%(↓10.6) 13.2%(↓7.5) Verbalization 7.3%(↓30.6) 5.4%(↓4.7) 18.0%(↓16.0) 6.7%(↓11.4) 16.6%(↓12.0) 10.7%(↓10.0) FDSP(Ours) 6.0%(↓32.0) 6.7%(↓3.4) 12.6%(↓21.3) 8.1%(↓10) 14.6%(↓14.0) 9.1%(↓11.6) Generatedcode 34.7% 12.4% 38.0% 9.1% 46.2% 32.2% Directprompting 21.4%(↓13.2) 5.8%(↓6.6) 25.6%(↓12.4) 8.3%(↓0.8) 38.0%(↓8.2) 14.1%(↓18.1) SecurityEval Self-debugging 16.5%(↓18.1) 9.1%(↓3.3) 27.2%(↓10.7) 9.1%(↓0.0) 38.8%(↓7.4) 17.4%(↓14.8) Banditfeedback 4.1%(↓30.5) 6.6%(↓5.8) 13.2%(↓24.7) 5.8%(↓3.3) 21.4%(↓24.7) 13.4%(↓18.8) Verbalization 4.9%(↓29.7) 6.6%(↓5.8) 13.2%(↓24.7) 5.8%(↓3.3) 17.3%(↓28.92) 13.4%(↓18.8) FDSP(Ours) 4.1%(↓30.5) 8.3%(↓4.1) 5.7%(↓32.2) 2.5%(↓6.6) 8.2%(↓38.0) 12.1(↓20.1) Generatedcode 40.2% 17.9% 48.5% 13.2% 42.3% 13.2% Directprompting 25.1%(↓15.1) 9.6%(↓8.3) 42.5%(↓5.9) 8.5%(↓7.2) 31.0%(↓11.3) 6.6%(↓6.6) PythonSecurityEval Self-debugging 24.8%(↓15.3) 8.7%(↓9.2) 43.4%(↓5.1) 8.9%(↓7.0) 33.1%(↓9.2) 7.9%(↓5.3) Banditfeedback 9.3%(↓30.8) 9.1%(↓8.8) 26.3%(↓22.1) 6.4%(↓11.5) 20.0%(↓22.3) 6.2%(↓7.0) Verbalization 8.7%(↓31.4) 8.5%(↓9.4) 23.6%(↓24.8) 7.4%(↓10.2) 19.5%(↓22.8) 6.0%(↓7.2) FDSP(Ours) 7.4%(↓32.7) 7.7%(↓10.2) 15.7%(↓32.7) 5.7%(↓11.7) 8.7%(↓33.6) 5.7%(↓7.5) Table 2: The table presents our results from Bandit and CodeQL evaluations metrics across various datasets and approaches. The percentage of vulnerable code is reported for both the generation and refinement phases. The (Generated code) row indicates the percentage of vulnerable code out of the entire dataset. We report the percentageofremainingvulnerablecodeforeachrefinementapproachwiththenumberinparenthesesrepresenting thepercentageofvulnerablecodethatwasfixedrelativetothe(Generatedcode). Generated code Direct prompting Self-Debugging Bandit feedback Verbalization FDSP (Ours) |
CWE-259 CWE-20 CWE-94 CWE-400 CWE-89 The types of code vulnerabilities sehcaorppa tnemenifer ehT Evaluationmetrics 26 24 24 38 20 35 Ablationexperiments Bandit CodeQL 30 7 9 22 28 16 Generatedcode 40.2% 17.9% 25 FDSPwithsinglesolution 10.0%(+2.6%) 8.7%(+1.0%) 7 15 8 32 16 20 FDSPwithsingleiteration 9.5%(+2.1%) 7.9%(+0.2%) 7 10 2 0 16 FDSP 7.4% 7.7% 15 0 0 0 0 20 10 Table3: PerformancecomparisonofFDSPanditsab- 5 latedvariantsonthePythonSecurityEvaldatasetusing 0 0 0 1 16 GPT-4. ThetabledemonstratesFDSP’seffectiveness 0 acrossmultiplesolutionsanditerations. Figure3: Thetotalcountofthefivemostfrequentsecu- rityissuesacrossfiverefinementapproachesforCodeL- thatneitherself-refinementnordirectlypassingthe lamainthePythonSecurityEvaldataset. feedback from Bandit proves useful for CodeL- lama;however,verbalizationandFDSPperforms wellforCodeLlama. provement over the verbalization approach, with Ablationstudy(RQ4). Thegenerationofmulti- improvements for GPT-4 (from 8.7% to 7.4%), plesolutionsandrepeatediterationsplayacritical GPT-3.5(from23.6%to15.7%),andCodeLlama role in FDSP (see §3). To quantify the impact (from21%to13.6%)forBanditevaluation. These of these two factors, we evaluate FDSP with two results support that LLMs can propose potential ablation studies: (i) FDSP with a single solution, solutionsandprovideusefulfeedbacktofixsecu- wherein the LLM generates only one solution in- rity issues when they are supplied with feedback stead of multiple, and (ii) FDSP with a single it- fromstaticcodeanalysis,andoutperformingself- eration,whereintheLLMattemptstoaddressthe refinement or merely passing the feedback from vulnerablecodewithasinglegeneratedsolutions staticcodeanalysisdirectly. rather than making multiple attempts. We con- We evaluate the effectiveness of each method ductedtheseablationstudiesusingGPT-4onthe inaddressingthemostcommonsecurityissuesin PythonSecurityEvaldataset(seeTable3). Theper- CodeLlama(seeFigure3). Theseresultssuggest centageofunsolvedvulnerabilitiesincreasedfrom20 18 16 14 12 10 8 6 4 2 0 C W E-8 C9 W E-7 C8 W E-8 C0 W E- C22 W E-7 C0 W3 E-4 C0 W0 E-6 C0 W5 E-5 C0 W2 E-3 C2 W7 E-2 C59 W E- C20 W E-3 C7 W7 E-295 The types of code vulnerabilities devlosernu fo rebmun latot ehT edoc elbarenluv GPT-4 60 GPT-3.5 CodeLlama 50 40 30 20 10 0 C W E-259 C W E-89 C W E-94 C W E-327 C W E-20 C W E-31 C9 W E-502 C W E-78 C W E-40 C0 W E-732 The types of code vulnerabilities Figure4: Comparisonofthetotalnumberofunresolved vulnerablecodeinstancesidentifiedbythreeLLMson thePythonSecurityEvaldataset. 7.4%to9.5%forsingleiterationand10.0%forsin- glesolution. Theseresultsdemonstratethatwhile multiple iterations offer some improvement, gen- eratingmultiplesolutionsplaysamoresignificant roleinimprovingthegenerationofsecurecode. Qualitativeanalysis. Weperformaqualitative analysis of the solutions generated by FDSP and its iterations for GPT-4. In particular, we manu- ally evaluate 30 randomly selected examples of vulnerable code from PythonSecurityEval, com- prising23fixedand7unfixedcases. Ourfindings showthatthesolutionsgeneratedbyFDSPprovide at least one actionable fix to address security is- sues, with 74% of the solutions offering at least twoactionablerecommendations. AlthoughFDSP consistentlygeneratescorrectsolutions,thereare instanceswheretheLLMsfailtorefinethevulner- able code. Only 26% of the generated solutions includeonegeneralsecuritymeasure,suchaserror handling(e.g.,exceptions)orinputvalidation Inthesevencaseswherethevulnerabilitieswere notfixed,FDSPstillproducedvalidpotentialsolu- tions, but the LLMs did not incorporate the feed- backtorefinethecode. Threeofthesefailuresin- volvedSQLinjectionvulnerabilities,whereFDSP producedvalidpotentialsolutions,buttheLLMs failed to incorporate the feedback and refine the code(seeTable6). Theotherfourfailuresinvolved high-risklibrarycalls(e.g.,subprocess,paramiko), which pose significant security risks if not used properly,potentiallyleadingtoshellinjectionvul- nerabilities. Nevertheless, in the fix cases, FDSP generated accurate and useful solutions, with the vast majority of vulnerabilities being resolved in thefirstiteration. edoc elbarenluv detareneg fo rebmun latot ehT GPT-4 GPT-3.5 CodeLlama Figure5: Thetotalcountofthemostcommonsecurity issuesinthecodegeneratedforthePythonSecurityEval dataset(Top10). Whatarethemostfrequentandunresolved codingvulnerabilitiesproducedbyLLMs. We analyzethemostcommonvulnerabilitiesingener- atedcode,aswellasthosethatremainunresolved, for the PythonSecurityEval dataset. Figure 5 il- lustrates the most common types of code vulner- abilities generated by three LLMs, with the top twobeingCWE-259(useofHard-codedPassword) and CWE-400 (uncontrolled resource consump- tion). However,theLLMsareabletofixmostof thesetypesofvulnerabilities(seeFigure4). Wevi- sualizethemostfrequentunresolvedsecurityissues inFigure4,wherethetoptwoarerelatedtoinjec- |
tion: CWE-78(OSCommandInjection)andCWE- 89(SQLInjection),withpercentageof61.1%and 80.0%respectivelyforGPT-4. Additionally,these injection vulnerabilities are also among the most frequentvulnerabilitiesgeneratedbyLLMs. 6 Conclusion AsLLMsbecomecapableofgeneratingcode,we assessthefrequencywithwhichtheyproducevul- nerablecode. Weconductanempiricalevaluation of LLMs using existing datasets and approaches, anddiscusstheirlimitations. Weproposeanovel approachandintroduceanewdatasettoaddressthe currentlimitationsofbothdatasetsandapproaches. Ourapproach,FDSP,whereintheLLMsgenerate potentialsolutionstofixtheirowngeneratedvul- nerable code by leveraging feedback from static code analysis. Our results demonstrate that the FDSPapproachoutperformsthebaselinesacross allthreebenchmarksandthreemodels,achieving animprovementofabout17.6%inthePythonSe- curityEvaldatasetforGPT-4.7 Limitations Programsynthesiswithlargelanguagemodels. arXiv preprintarXiv:2108.07732. Oneofthelimitationsofourstudyisthatoureval- NikolaosBafatakis,NielsBoecker,WenjieBoon,Mar- uation may not identify all security issues in the tinCabelloSalazar,JensKrinke,GaziOznacar,and code. Detecting vulnerabilities in code is an ac- RobertWhite.2019. Pythoncodingstylecompliance tiveareaofresearch,asrecentstudieshaveshown onstackoverflow. In2019IEEE/ACM16thInterna- thatsomevulnerabilitiescannotyetbedetectedby tionalConferenceonMiningSoftwareRepositories eitherstaticcodeanalysisordeeplearning-based (MSR),pages210–214.IEEE. vulnerabilitydetectionmethods(Chakrabortyetal., TomB.Brown,BenjaminMann,NickRyder,Melanie 2021). Additionally,weevaluatethecapabilitiesof Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind LLMsingeneratingandrefiningvulnerablecode Neelakantan,PranavShyamGirishSastry,Amanda Askell, Sandhini Agarwa, Ariel Herbert-Voss, at the function level; however, in real-life scenar- Gretchen Krueger, Tom Henighan, Rewon Child, ios,vulnerabilitiesoftenoccurwhencodeinteracts Aditya Ramesh, Daniel M. Ziegler, Clemens Win- withothercodeorservices. Lastly,ourapproachto terJeffreyWu,ChristopherHesse,MarkChen,Eric fixingsecurityissuesinvolvesmakingchangesto Sigler,MateuszLitwin,ScottGray,BenjaminChess, Jack Clark, Christopher Berner, Sam McCandlish, thecode,whichhasthepotentialtoinadvertently Alec Radford, Ilya Sutskever, and Dario Amodei. rendertheprogramincorrect. Itisverychallenging 2020. Languagemodelsarefew-shotlearners. toensurethecorrectnessofthecodeortowriteunit tests,asallthegeneratedcodeinteractswithother Saikat Chakraborty, Rahul Krishna, Yangruibo Ding, andBaishakhiRay.2021. Deeplearningbasedvul- servicesorsystemssuchasdatabasesoroperating nerabilitydetection: Arewethereyet? IEEETrans- systems. actionsonSoftwareEngineering,48(9):3280–3296. Acknowledgments Xinyun Chen, Maxwell Lin, Nathanael Schärli, and DennyZhou.2023. Teachinglargelanguagemodels WewouldliketothankAmanMadaanforhiscon- toself-debug. tributions to this paper. Aman contributed in the Ahmed Elgohary, Christopher Meek, Matthew following ways: (1) developing Algorithm 1, (2) Richardson, Adam Fourney, Gonzalo Ramos, and providing feedback on the writing of this paper, AhmedHassanAwadallah.2021. NL-EDIT:Correct- and(3)offeringhelpfuldiscussionsregardingthe ingsemanticparseerrorsthroughnaturallanguage interaction. In Conference of the North American baselineandrelatedworks. Chapter of the Association for Computational Linguistics(NAACL). References Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, PengfeiLiu, YimingYang, JamieCallan, andGra- AfraFeyzaAkyürek, EkinAkyürek, AshwinKalyan, ham Neubig. 2023. Pal: Program-aided language PeterClark,DerryWijaya,andNiketTandon.2023. models. In International Conference on Machine Rl4f: Generatingnaturallanguagefeedbackwithre- Learning,pages10764–10799.PMLR. inforcementlearningforrepairingmodeloutputs. In AnnualMeetingoftheAssociationofComputational Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Linguistics2023,pages7716–7733.Associationfor Shen, Yujiu Yang, Nan Duan, and Weizhu Chen. ComputationalLinguistics(ACL). 2023. Critic: Large language models can self- correct with tool-interactive critiquing. Preprint, Miltiadis Allamanis, Henry Jackson-Flux, and Marc arXiv:2305.11738. Brockschmidt.2021. Self-supervisedbugdetection andrepair. AdvancesinNeuralInformationProcess- Marie-AnneLachaux,BaptisteRoziere,LowikChanus- ingSystems,34:27865–27876. sot, and Guillaume Lample. 2020. Unsupervised translation of programming languages. arXiv KamelAlrashedy,VincentJ.Hellendoorn,andAlessan- preprintarXiv:2006.03511. dro Orso. 2023. Learning defect prediction from unrealisticdata. arXivpreprintarXiv:2311.00931. AmanMadaan, NiketTandon,PrakharGupta,Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, BenAthiwaratkun,SanjayKrishnaGouda,andZijian Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Wang.2023. Multi-lingualevaluationofcodegen- Shashank Gupta, Bodhisattwa Prasad Majumder, eration models. The International Conference on Katherine Hermann, Sean Welleck, Amir Yazdan- |
LearningRepresentations(ICLR). bakhsh,andPeterClark.2023. Self-refine: Iterative refinementwithself-feedback. JacobAustin,AugustusOdena,MaxwellNye,Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Daye Nam, Andrew Macvean, Vincent Hellendoorn, Jiang,CarrieCai,MichaelTerry,QuocLe,etal.2021. Bogdan Vasilescu, and Brad Myers. 2024. Usingan llm to help with code understanding. In 2024 Man-FaiWong,ShangxinGuo,Ching-NamHang,Siu- IEEE/ACM 46th International Conference on Soft- Wai Ho, and Chee-Wei Tan. 2023. Natural lan- ware Engineering (ICSE), pages 881–881. IEEE guage generation and understanding of big code ComputerSociety. for ai-assisted programming: A review. Entropy, 25(6):888. Theo X Olausson, Jeevana Priya Inala, Chenglong Wang, Jianfeng Gao, and Armando Solar-Lezama. Kevin Yang, Yuandong Tian, Nanyun Peng, and Dan 2023. Isself-repairasilverbulletforcodegenera- Klein. 2022. Re3: Generating longer stories with tion? In The Twelfth International Conference on recursiverepromptingandrevision. nConferenceon LearningRepresentations. EmpiricalMethodsinNaturalLanguageProcessing. Hammond Pearce, Benjamin Tan, Baleegh Ahmad, Michihiro Yasunaga and Percy Liang. 2020. Graph- RameshKarri,andBrendanDolan-Gavitt.2023. Ex- based,self-supervisedprogramrepairfromdiagnos- aminingzero-shotvulnerabilityrepairwithlargelan- tic feedback. In International Conference on Ma- guagemodels. chineLearning,pages10799–10808.PMLR. BaolinPeng,MichelGalley,PengchengHe,HaoCheng, Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, YujiaXie,YuHu,QiuyuanHuang,LarsLiden,Zhou DongxuWang,ZifanLi,JamesMa,IreneLi,Qingn- Yu,WeizhuChen,etal.2023. Checkyourfactsand ing Yao, Shanelle Roman, et al. 2018. Spider: A try again: Improving large language models with large-scalehuman-labeleddatasetforcomplexand externalknowledgeandautomatedfeedback. arXiv cross-domainsemanticparsingandtext-to-sqltask. preprintarXiv:2302.12813. arXivpreprintarXiv:1809.08887. BaptisteRoziere,Marie-AnneLachaux,LowikChanus- KamalNdousseYuntaoBai,AndyJones.2023. Train- sot, and Guillaume Lample. 2020. Unsupervised ingahelpfulandharmlessassistantwithreinforce- translation of programming languages. Advances mentlearningfromhumanfeedback. inneuralinformationprocessingsystems,33:20601– 20611. QinkaiZheng,XiaoXia,XuZou,YuxiaoDong,Shan Wang,YufeiXue,ZihanWang,LeiShen,AndiWang, Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, YangLi,etal.2023. Codegeex: Apre-trainedmodel Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi forcodegenerationwithmultilingualevaluationson Adi,JingyuLiu,TalRemez,JérémyRapin,Artyom humaneval-x. arXivpreprintarXiv:2303.17568. Kozhevnikov,IvanEvtimov,JoannaBitton,Manish Bhatt,CristianCantonFerrer,AaronGrattafiori,Wen- Shuyan Zhou, Uri Alon, Frank F. Xu, Zhiruo hanXiong,AlexandreDéfossez,JadeCopet,Faisal Wang,ZhengbaoJiang,andGrahamNeubig.2023. Azhar,HugoTouvron,LouisMartin,NicolasUsunier, Docprompting: Generating code by retrieving the ThomasScialom,andGabrielSynnaeve.2023. Code docs. InInternationalConferenceonLearningRep- llama: Openfoundationmodelsforcode. resentations(ICLR),Kigali,Rwanda. Alexander Shypula, Aman Madaan, Yimeng Zeng, TerryYueZhuo,MinhChienVu,JennyChim,HanHu, Uri Alon, Jacob Gardner, Milad Hashemi, Gra- Wenhao Yu, Ratnadira Widyasari, Imam Nur Bani ham Neubig, Parthasarathy Ranganathan, Osbert Yusuf,HaolanZhan,JundaHe,IndraneilPaul,etal. Bastani, andAmirYazdanbakhsh.2023. Learning 2024. Bigcodebench: Benchmarking code genera- performance-improvingcodeedits. arXivpreprint tionwithdiversefunctioncallsandcomplexinstruc- arXiv:2302.07867. tions. arXivpreprintarXiv:2406.15877. MohammedSiddiqandJoannaSantos.2022. Securitye- valdataset: Miningvulnerabilityexamplestoeval- uate machine learning-based code generation tech- niques. InProceedingsofthe1stInternationalWork- shoponMiningSoftwareRepositoriesApplications forPrivacyandSecurity(MSR4PS22). MohammedLatifSiddiq,BeatriceCasey,andJoanna C. S. Santos. 2023. A lightweight framework for high-qualitycodegeneration. CatherineTony,MarkusMutas,NicolasDíazFerreyra, and Riccardo Scandariato. 2023. Llmseceval: A datasetofnaturallanguagepromptsforsecurityeval- uations. In2023IEEE/ACM20thInternationalCon- ferenceonMiningSoftwareRepositories(MSR). JunjieWang,YuchaoHuang,ChunyangChen,ZheLiu, SongWang,andQingWang.2024. Softwaretesting withlargelanguagemodels: Survey,landscape,and vision. IEEETransactionsonSoftwareEngineering.A Appendix Anexampleofverbalization. Thefeedbackfromthestaticcodeanalysisregardingtheissuetaggedas"B608: hardcodedsql expressions"indicatesasecurityconcernwiththewaySQLqueriesarebeingconstructedinthe code. Specifically,theissuepointsoutthatthecodemightbevulnerabletoSQLinjectionattacks duetotheuseofstring-basedqueryconstruction. Thisvulnerabilityariseswheninputdatais concatenateddirectlyintoSQLquerieswithoutpropersanitizationorparameterization,allowing anattackertomanipulatethequeriestoexecutearbitrarySQLcommands. |
Theproblematiccodeisfoundatline7,whereacursor’sexecutemethodiscalledtoperforma SQLquery. Thequeryattemptstocounttherowsinatablebydynamicallyinsertingthetable nameintotheSQLstatement. However,themethodofconstructingthisquerybyformattinga stringisunsafeifthetablenamecomesfromanuntrustedsourceorisnotproperlyvalidated,as itcanbeexploitedtoinjectmaliciousSQL. To mitigate this risk, it’s recommended to use parameterized queries or prepared statements providedbythedatabaseinterface,whichensurethatinputdataistreatedasparametersrather thanpartoftheSQLcommand. ThisapproachhelpspreventSQLinjectionbyseparatingthedata fromthecodethatqueriesthedatabase. Figure6: AnexampleofhowGPT-4verbalizesthefeedbackfromtheBanditshownintheFigure2. Anexampleofgeneratedsolution. 1)UseParameterizedQueries: Parameterizedqueriesensurethatuserinputistreatedasaliteral valueratherthanexecutablecode. Mostdatabaselibrariesprovideawaytocreatethesequeries, alsoknownaspreparedstatements. 2)ManualEscapeandQuoteTableNames: Sinceparameterizedqueriesdonotsupporttable orcolumnnames,youcanmanuallyensurethattablenamesarevalid,usingawhitelistapproach whereonlyapprovedtablenamesareused. 3)UseanORM(Object-RelationalMapping)Library: ORMsprovideanabstractionoverSQL byallowingyoutointeractasSQLAlchemyforPythonhandleescapingandquotinginternallyin asecuremanner Figure7: AnexampleofasolutiongeneratedforthesecurityissuesinFigure2.Domain Library Computation os,pandas,numpy,sklearn,scipy,math,nltk,statistics,cv2,statsmodels,tensorflow, sympy,textblob,skimage System os,json,csv,shutil,glob,subprocess,pathlib,io,zipfile,sys,logging,pickle,struct, psutil Network requests,urllib,bs4,socket,django,flask,ipaddress,smtplib,http,flask_mail,cgi, ssl,email,mechanize,url Cryptography hashlib,base64,binascii,codecs,rsa,cryptography,hmac,blake3,secrets,Crypto General random, re, collections, itertools, string, operator, heapq, ast, functools, regex, bisect,inspect,unicodedata Database sqlite3,mysql,psycopg2,sqlalchemy,pymongo,sql WebFrameworks Django,Flask,FastAPI,Tornado,Pyramid,Bottle Table4: WedeterminedthetypeofdomainforthefunctionpresentedinTable1byidentifyingcallstodomain- specificlibraries(Zhuoetal.,2024) CWEID Description CWE-20 ImproperInputValidation CWE-22 ImproperLimitationofaPathnametoaRestrictedDirectory(’PathTraversal’) CWE-78 ImproperNeutralizationofSpecialElementsusedinanOSCommand(’OSCommandInjection’) CWE-79 ImproperNeutralizationofInputDuringWebPageGeneration(’Cross-siteScripting’) CWE-89 ImproperNeutralizationofSpecialElementsusedinanSQLCommand(’SQLInjection’) CWE-94 ImproperControlofGenerationofCode(’CodeInjection’) CWE-119 ImproperRestrictionofOperationswithintheBoundsofaMemoryBuffer CWE-200 ExposureofSensitiveInformationtoanUnauthorizedActor CWE-284 ImproperAccessControl CWE-287 ImproperAuthentication CWE-306 MissingAuthenticationforCriticalFunction CWE-352 Cross-SiteRequestForgery(CSRF) CWE-400 UncontrolledResourceConsumption CWE-502 DeserializationofUntrustedData CWE-611 ImproperRestrictionofXMLExternalEntityReference(XXE) CWE-703 ImproperHandlingofExceptionalConditions CWE-798 UseofHard-codedCredentials CWE-120 BufferCopywithoutCheckingSizeofInput(’ClassicBufferOverflow’) CWE-125 Out-of-boundsRead CWE-190 IntegerOverfloworWraparound CWE-295 ImproperCertificateValidation CWE-416 UseAfterFree CWE-434 UnrestrictedUploadofFilewithDangerousType CWE-476 NULLPointerDereference CWE-732 IncorrectPermissionAssignmentforCriticalResource CWE-787 Out-of-boundsWrite CWE-862 MissingAuthorization CWE-918 Server-SideRequestForgery(SSRF) CWE-1021 ImproperRestrictionofRenderedUILayersorFrames CWE-1295 DebugFeaturesEnabledinProduction Table5: CommonWeaknessEnumeration(CWE)IDsanddescriptions.Examples Generatedpotentialsolutions Example1 1)UseParameterizedQueries: ThemosteffectivewaytopreventSQLinjection istouseparameterizedqueries,alsoknownaspreparedstatements. Thisstrategy doesn’t form SQL commands by using string operations. Instead, it designates placeholdersforeachparameter,andthedatabaseenginewillfilltheseplaceholders securely. However,SQLitedoesn’tallowplaceholdersfortablenames,soitmight notbepossibletobindtablenamestoplaceholdersinthiscase. 2)InputValidation: Anotherstrategyistovalidatetheuserinputsorinthiscase, the table_name variable. We can ensure that it only contains valid table name charactersanddoesnotcontaindangerousSQLsyntax. Thiscanbedoneusinga whitelistofallowedcharactersoraregularexpression(regex)tomatchvalidtable names. 3)DeviseaWhitelistingMechanism: Thiswouldinvolvecreatingalistoftables that are allowed to be accessed via the "total_rows" function. By checking if the table name provided to the function is present in the whitelisted table names, the codecanensurethatonlylegitimatequeriesareexecuted,provingeffectiveagainst SQLinjection. Example2 1)Parameterized Query: The main solution for preventing SQL injection is to use parameterized queries. This entails using placeholders for data in your SQL statements and then providing the data to the SQL interpreter separately so it’s correctly and safely quoted. Unfortunately, SQLite in Python doesn’t directly |
supportparameterizationoftableorcolumnnames. So,theprogrammerneedsto ensuretheysanitizetheirtableorcolumnnameinputs. 2)UseofSanitizingfunction: Tohandlenotdirectlysupportedparameterization, we can also create a sanitizing function to ensure the validity of the table name providedtopreventSQLinjectionthoughstillpossibletothinkaboutthelimitation ofsanitizingprocess. 3)UseofDatabaseAPI:Insteadofallowingtheprogramusertoprovidearbitrary table or column names, consider using a Database API to generate these names. Thiswillprovideaninterfacefortheusertoviewandinteractwiththedata,butnot changethestructureoftheSQLqueriesthemselves. Thissignificantlyreducesthe riskofSQLinjectionattacksbecausetheusermayonlyaffectthevalueinafield, notthestructureoftheSQLqueriessincethetablenamesarenotuserinputs. Example3 1)UseParameterizedQueries: Thisinvolvesusingplaceholdersinsteadofdirectly inserting values into the statements. Parameters are used to help differentiate between the SQL code and the data passed through. When executed, the SQL statementcompilestogetherwiththeparametersasonecommandwhichprevents attackersfrominterferingwiththestructureofSQLqueries. 2) Use Object-Relational Mapping (ORM) Libraries: Technologies such as SQLAlchemy, Django ORM, can be used to handle operations with databases. TheselibrariesusuallycomewithsecurefeaturesthatprotectfromSQLinjection attacks. 3) Use Database Views: Business logic could be implemented in the database sidetodecreasethecomplexityofSQLbuiltintheapplication,andtoencapsulate the database structure. Views could be used to give simplified perspective of the databaseandtorestrictaccesstosensitivepartsofthedatabaseschema. Reducing database control and visibility minimizes the potential damage of SQL Injection attacks. Table6: ExamplesofcorrectedpotentialsolutionsgeneratedbyFDSPforSQLinjectionvulnerabilities,where GPT-4wasunabletorefinethevulnerablecode. |
2312.01588 ActiveClean: Generating Line-Level Vulnerability Data via Active Learning AshwinKallingalJoshy MirzaSanjidaAlam ShailaSharmin IowaStateUniversity IowaStateUniversity IowaStateUniversity Ames,IA,USA Ames,IA,USA Ames,IA,USA ashwinkj@iastate.edu sanjida@iastate.edu ssharmin@iastate.edu QiLi WeiLe IowaStateUniversity IowaStateUniversity Ames,IA,USA Ames,IA,USA qli@iastate.edu weile@iastate.edu ABSTRACT ofthispaper,weshowedhowaline-levelmodelcanimproveits Deeplearningvulnerabilitydetectiontoolsareincreasinginpop- performanceafterbeingtrainedwithimproveddataset. ularityandhavebeenshowntobeeffective.Thesetoolsrelyon Inthepast,mostofthevulnerabilitydetectiondatasetssuchas largevolumeofhighqualitytrainingdata,whichareveryhardto Devign[53],MSR/Big-Vul[16]andD2A[51]areharvestedfrom get.Mostofthecurrentlyavailabledatasetsprovidefunction-level softwarepatchesinthecoderepositories[5,12,23,28].Forexample, labels,reportingwhetherafunctionisvulnerableornotvulnerable. Devignusedkeywordsinthecommitmessagesandthenapplied However,foravulnerabilitydetectiontobeuseful,weneedtoalso manualinspectiontoproducefunction-levellabels.Whileweneed knowthelinesthatarerelevanttothevulnerability.Thispaper alargenumberofexamplesfortrainingvulnerabilitydetection makeseffortstowardsdevelopingsystematictoolsandproposes models,manuallycleaningthesepatchestoproduceline-levellabels ActiveCleantogeneratethelargevolumeofline-levelvulnerability isveryexpensive.Inarecenteffort,ittook45authorsandmore datafromcommits.Thatis,inadditiontofunction-levellabels,it than6monthstogenerateline-levellabelsfor28Javaprojectsand alsoreportswhichlinesinthefunctionarelikelyresponsiblefor 3,546commits[19].D2Ausedthedifferenceinwarningsgenerated vulnerabilitydetection.Inthepast,staticanalysishasbeenapplied bystaticanalysistoolsfrombeforeandafterthecommittoproduce tocleancommitstogenerateline-leveldata.Ourapproachbased line-leveldatasets.Whilepromising,thisapproachreliesonstatic onactivelearning,whichiseasytouseandscalable,provideacom- analysistool’sabilitytoaccuratelydetectthevulnerability. plementaryapproachtostaticanalysis.Wedesignedsemanticand Inthiswork,wemakeanotherattempttowardsautomatically syntacticpropertiesfromcommitlinesandusethemtotrainthe generatingline-levelvulnerabilitydata.Wedevelopedatoolcalled model.WeevaluatedourapproachonbothJavaandCdatasets ActiveCleanthatusesactivelearningtotrainamachinelearning processingmorethan4.3Kcommitsand119Kcommitlines.Ac- modeltocleanupsoftwarepatches.Weselectedtheactivelearn- tiveCleanachievedanF1scorebetween70–74.Further,wealso ingmethodbecausemanuallylabelingthecommitlinesisexpen- showthatactivelearningiseffectivebyusingjust400training siveandactivelearningcanhelpselectthebestandtheminimum datatoreachF1scoreof70.23.UsingActiveClean,wegeneratethe numberofdatapointstolabel.Weusedmachinelearningbased line-levellabelsfortheentireFFMpegprojectintheDevigndataset, approach,becausethetrainedmodeliseasytouseandscalableto including5Kfunctions,andalsodetectedincorrectfunction-level processalargenumberofdatarequiredbyvulnerabilitydetection. labels.Wedemonstratedthatusingourcleaneddata,LineVul,a ActiveCleantakesinputprogramswithvulnerablecommits,and SOTAline-levelvulnerabilitydetectiontool,detected70morevul- reportwhetheralineisrelatedtothevulnerability.ActiveClean nerablelinesand18morevulnerablefunctions,andimprovedTop alsocantakeasinputexistingfunction-levelvulnerabilitydatasets 10accuracyfrom66%to73%. andreturnline-levellabels.Inthatcase,ActiveCleanalsocanreport thatafunction-levellabelisincorrectwhenitfindsthatallthelines 1 INTRODUCTION inthefunctionsarelabeledasvulnerabilityirrelevant. TotrainthemodelinActiveClean,webuiltatooltoautomatically Deeplearningbasedvulnerabilitydetectionhasshowneffective[30, extractsyntacticandsemanticfeaturesfromthecommitlines.The 37,38,41]andisbecomingimportantformitigatingtheincreas- featureswedesignnotonlyshowcharacteristicsofthecommit ing[2]numberofvulnerabilitiesbeingexploitedinthewild.Despite linesthemselves,butalsohowtheyinteractwithothercommit therapiddevelopmentofvulnerabilitydetectionmodels,obtaining lines as well as their surrounding code. Then we use query-by- highqualitydatahasalwaysremainedachallenge[31].Thecurrent committeebasedactivelearningframeworktofirsttrainasetof vulnerabilitydatasetscontainfalselabels[19,48].Mostofthese committeemodelsfromasmallsetofexistingline-levelvulnerability datasetsusefunction-level labelsandreportwhetherafunction dataset.Duringthesecondphraseofactivelearning,wequerythe isvulnerable[24,53].Ideally,tobemoreuseful,thetoolsshould committeetoselectthebestdatatolabeltocontinuetrainingthe reportline-levelvulnerabilities,e.g.,whichlinesareresponsibleto modelonthenewinputdata. producethevulnerability.Lackinghighqualityline-levelvulnera- Inourevaluation,weusedbothJavaandCdatasetsandpro- bilitydatasets,buildingline-levelvulnerabilitydetectionmodels cessedatotalof4375commitswith119Kcommitlines.Ourresults canbesignificantlychallenging.Laterintheevaluationsection 3202 ceD 4 ]ES.sc[ |
1v88510.2132:viXraArXiv,December2023,Ames,USA AshwinKallingalJoshy,MirzaSanjidaAlam,ShailaSharmin,QiLi,andWeiLe indicatethatActiveCleanisabletocleanthecommitsandreduce 1 @@ -239,9 +239,5 @@ vulnerability-irrelevantlinesacrossallthebenchmarks,withan 2 public double getRMS() { F1scorebetween70–74fordatasetswithgroundtruthavailable. 3 - double criterion =0; 4 - for (int i = 0; i < rows; ++i) { ActiveCleanalsoconsistentlyoutperformedthebaselinesmodels 5 - final double residual = residuals [i]; anddifferentsettingsinablationstudy.Wealsoshowedthatactive 6 - criterion += residual * residual / residualsWeights learningindeedbringsefficiency.Ittook400additionaltraining [i]; datatoachieveanF1scoreof70.23,whilebaselinemodelsneed2K 7 - } 8 - return Math.sqrt(criterion / rows); trainingdata.Ittook400additionaltrainingdatatoachieveanF1 9 + return Math.sqrt(getChiSquare() / rows); scoreof70.23,whilebaselinemodelsonlyreached67evenafter2K 10 } additionaltrainingdata. 11 @@ -254,11 +254,11 @@ UsingActiveCleanwegeneratedtheline-levelvulnerabilitydata 12 public double getChiSquare() { forFFmpegintheDevigndataset.UsingthehelpofActiveClean,we 13 double chiSquare = 0; 14 - double sqHalf = 0; alsodetectedincorrectfunction-levellabelsintheexistingDevign 15 + double chiSqHalf = 0; dataset.Werandomlysampled50falselabelsoutof468reported 16 for (int i = 0; i < rows; ++i) { byActiveClean,andwefound29areindeednon-vulnerable,and8 17 final double residual = residuals[i]; wasnotabletobeconfirmedmanually.Usingthecleaneddataset 18 + if residualsWeights[i] != 0 19 chiSquare += residual * residual / weproduced,theSOTAvulnerabilitydetectionmodelLineVulde- residualsWeights[i]; tected18morevulnerablefunctionsand70morevulnerablelines. 20 } Itachieved87F1scoreforfunction-leveldetectionand73%Top10 21 - sqHalf = chiSquare / 2; accuracyforline-leveldetection,comparedto83F1scoreand66% 22 + chiSqHalf = chiSquare / 2; Top10accuracyachievedusingtheoriginaldataset. 23 return chiSquare; 24 } Insummary,thispaperproducedatoolthatisautomaticand scalabletogenerateline-levelvulnerabilitydata.Althoughwehave Figure1:AnExampleAdaptedfromMath65inDefect4J notachieved100%accuracyandrecall,wedemonstratedthatusing thedatawecleaned,thevulnerabilitydetectionmodelscanimprove theperformance.Wehopethatbyprovidingthedatasetsuchas line-levellabelsforFFmpeg,wecanpromotebettertoolsinthearea refactoringthatrenamesvariablesqHalf tochiSqHalf.Seelines14, ofdeeplearningforvulnerabilitydetection.Meanwhile,wealso 15,21,and22. seethatmanyotherapplicationssuchasautomaticrepairandfault Inthisexample,thepatchingetRMSiscompletelynotrelated localizationscanbenefitfromthecleanedpatches.Inthefuture,we tothevulnerability.Ifsuchpatchisusedasvulnerabilitydetection willcontinuouslyapplyActiveCleantogeneratemoresuchdatasets. data,wewouldintroduceanincorrectlabelforthisfunction.Even Ourcontributionsofthepaperinclude: forgetChiSquare,themajoritylinesinthecommitarenotrelatedto thebug.Creatingline-levelvulnerabilitydatausingsuchcommits (1) Anautomatictechniqueandscalabletoolthatcancleanthe canbetoonoisy.Similarly,uncleanedcommitscanalsoposethreats patchesandgenerateline-levelvulnerabilitydata, for other applications such as automatic program repair (APR). (2) FFmpegvulnerabilitydatasetwithline-levellabels,consist- Forexample,theoriginalpatchforMath65fromDefect4J,caused ingof5Kvulnerablefunctionswith9Klabels, CapGen[43],astate-of-the-artAPRtool,tomislabelacorrectpatch (3) Featureengineeringandanapplicationofactivelearningas itgeneratedasincorrectduetonot“fixing”thegetRMSfunction[48]. anovelmethodforthisproblem, Furthermore,thisalsocaused[40]tomislabelthetypeofchanges (4) Asystematicevaluationtoshowourapproachcanimprove requiredtofixthebug. thequalityofthedatasetandimprovedeeplearningmodels Ourapproachaimstousesyntacticandsemanticfeaturesofcode forvulnerabilitydetection. commitsandthecodearoundthecommitstodistinguishwhether alineisrelevanttoabug.Wehopetousemachinelearningmodels tolearnpatternsfromexistingcommitswhichweknowtheground 2 AMOTIVATINGEXAMPLE truthlabels,andusethemodelstopredictwhetheralineinthe Defect4J[24]isawidelyusedbenchmarkwherepatchesareman- commitisbugrelevantorirrelevant.Forexample,forFigure1,the uallyminimizedtocontainthesmallestchangesforfixingbugs. vulnerabilityfixingcode(line18),introducedanif guardcondition However,duetothechallengesofcleaningcommits,thepatches involvinganon-zerocheckforavariableusedwithintheif block. inDefect4Jsometimesstillcontainedchangesthatareirrelevant Previousresearch[22,34]haveshownsuchchangesarecommonly tothebugs[48].InFigure1,weshownanexampleofuncleaned usedtofixvulnerabilities. patchadaptedfromMath65inDefect4J. Ontheotherhand,regardingthechangeofsqHalf tochiSqHalf Here, the patch affects two functions: getRMS at line 2 and atlines14,15,21,and22,wecanseethattherewerenocontrol |
getChiSquareatline12.Thecommitconsistsof12lines(lines3–9, flowchanges.Therewerealsonochangesindatadependencies 14,15,18,21,and22).Amongthe12lines,onlyline18isforfix- withothervariablesbothwithinthepatchandwithrespecttothe ingthedivide-by-zeroatline19.ThechangesingetRMS iscode entirefunctionorfile.Thesearecommonlyoccurredpatternsin refactoring,i.e.,thedevelopercreatedgetChiSquaretoreplacethe refactoring.ForrefactoringingetRMSfunction(lines3–9),wecan deletedcodeatlines3–7.Similarly,ingetChiSquarethereisanother observethatthedeletedvariablecriterionwasusedinsideaforActiveClean:GeneratingLine-LevelVulnerabilityDataviaActiveLearning ArXiv,December2023,Ames,USA loop(lines4–7),hadsomearithmeticoperationsandfinallyused toidentifyasetofpatternsandpropertiesthatmaybeusefulfordif- asinputtoMath.sqrtfunctionatline8.Meanwhile,afterthepatch, ferentiatingbetweenvulnerability-relevantlinesandvulnerability- thechangedinputtoMath.sqrtisnowafunction(getChiSquare). irrelevantlinesinthecommits. Thesetypeofpatternscanhelpidentifybug-irrelevantlines. InTable1,wepresentthefeatureswedesigned.Thefeatures Wetrainedanactivelearningmodeltoclassifyeachlinewithin can be classified into three categories; (1) features that capture acommitasvulnerability-relevantornotbasedontheextracted characteristicsofeachindividualcommitlinesthemselves(shown features.Wethenusedthemodelonfunctionlevelvulnerability underCommitLines),(2)featuresthatcapturehowdifferentcommit dataandcreatetheline-levelvulnerabilitydata. lines interact with each other (under Within The Commit), and finally(3)featuresthatcapturehowthecommitlinesinteractwith 3 APPROACH restofthecode(underCommitContext). TheintuitionbehindtheCommitLinesfeatureswastoidentify 3.1 AnOverview simplebuggypatternsthatmayexistatlinelevel.Forexample,the Figure2presentsanoverviewofourwork.Figure2ashowsthat comparator,arithmeticandlogicalfeaturesshowninTable1under theinputofActiveCleanisprogramswiththeirvulnerablecommits. CommitLine capturethenumberofeachoperatortypepresent Theoutputisline-levelvulnerabilitydata.ActiveCleancanalsotake inacommitline.Theoftenoccurringoff-by-oneerrorsarefixed inputfunctionlevelvulnerabilitydatawithcommits.Inthatcase, by changing the type or numbers of one of these operator and ActiveCleanwillcleanupthecommitsandgeneratelinelevellabels, canbecapturedbythem.Asanotherexample,commonbugfixes andmeanwhilefindmistakesthatmayexistinthefunction-level typicallyinvolve“returningfromthefunctionandreportingan labelsandsuggestlabelcorrections. errormessage”.ThiscanbecapturedusinghasRet,whichchecksif To achieve the goal, ActiveClean trained a machine learning thelinecontainsareturn statement,inconjunctionwithhasLit- modelviaanactivelearningframework.SeeFigure2b.Theactive eral,whichcheckforstringsinthestatements.Similarly,usinga learningframeworkusedmultiplemachinelearningmodelstoform Booleanvariabletoavoidabuggycontrolflowcanbecaptured acommittee.Thetrainingstartswithasmallsetoflabeleddata.The usingtheflagVarfeature. labeleddataconsistsofasmallamountofline-levelvulnerability ThefeaturesinWithinTheCommitweredesignedtodifferen- data.Theirlabelsmarkwhetheralineinthecommitisrelevant tiatethetypesofchangesoffunctionalityenhancementandrefac- tothebugornot.Oncethemodelsaretrained,theactivelearning toring frombugfixes.Forexample,mostbugfixesonlychange starts.Thecommitteeinstructswhatcommitlinesfromthe(unla- theircontrolanddatadependenciesminimally.Ontheotherhand, beled)datasetweshouldlabelsothatthisdatasetcanachievethe functionalityenhancementandrefactoringsintroducesubstantially bestperformancequickly.Theuserthengenerateslabelsforthe morecontrolanddatadependenciesbothwithinthecommitand commitlinesrecommendedbythecommittee.Thesenewlylabeled aroundit.ThiscanbecapturedusingthechangetocontrolDepend dataisusedtotrainforimprovingthemodels.Thisprocessiterates anddependsfeatures(seeTable1),whichreportsthenumberof untilapredeterminedbudget,e.g.,time,orthedesiredaccuracyis controlanddatadependencyrespectively.Featureslikerepeated, reached.Whenapplyingtoanewdataset,tofurtherimprovethe repeatedCall,andrepeatedControlaredesignedtocapturethepat- performance,ActiveCleancancontinuetrainingwithasmallsetof ternscommonlyfoundinrefactoringchangeswherethesamecode unlabeleddatafromthenewdataset. orfunctioncallsarerepeatedthroughoutthecommits. Oncetheactivelearningmodelistrained,weapplythefinal CommitContext capturesthelocationofthecommitlinesas modelontheinputshownonFigure2atogetline-levelvulnerability well as how they interact with the program in general as they data.Ifweobserveafunctioncontainsanabnormalamountoflines provide valuable clues to determining if the code is relevant to markedasnon-vulnerable,wewillreportthefunctiontotheuserfor thebug.Thisisbasedonourobservationthatmostbugfixestend theirfurtherinspectionandsuggestthatanon-vulnerablefunction tohappenwithinexistingcontrol,functional,orloopbody.The mayhavebeenlabeledasvulnerable.Theusercanconfigurethe controlBlockandotherblockbasedfeatures(seeTable1)captures thresholdtodefinewhatisabnormalintheirprojects. whetherthechangeishappeninginsideitsnamesake’sbodyornot. Functionalityenhancementandrefactoringschanges,ontheother 3.2 FeatureEngineering hand, frequently introduce new functions and variables. These |
Whenmanuallycleaninganumberofpatches,codeinspectorsmay newlyintroducedfunctionsandvariableshaveahighernumber notalwaysbeabletounderstandthedetailedfunctionalityofthe of control and data dependencies among themselves than with code[11,20,48].Thepatternsfromsyntacticandsemanticcode thecontextsurroundingit.Theblockbasedfeaturesalongwith featuresmayprovidethecluestodeterminewhetherthecodeis dependedBy,controlledByandotherfeaturescheckforcontroland relevanttothebug.Forexample,additionofconditionalguardsor datadependencywiththesurroundingcontexttocapturethis. try-catchblocksinvolvingavariablefrominsidetheblock,like Weaimtodesignfeaturesthatwereindependentoftheunderly- showninFigure1,canindicatethepresenceofbuginthecode ingimplementationlanguageandproject.Hence,wedon’tuseany withinthewrappedblock. actualcodetoavoidlearningprojectspecificfunctionorvariable Ourgoalhereistoapplymachinelearningtolearnthepatterns namesandusedgeneralsemanticpatternsinsteadofmoredetailed usingthefeaturesfromcommitsandthecodecontextaroundthe languagedependentones.Wealsodesignedthemtobeeasyfor commitlinestoautomaticallydistinguishwhetheracommitlineis machinelearningconsumptionbyonlyusingbooleanandnumber relevanttothevulnerability.Westudied250commitsfromamanu- basedfeatures. allyminimizedJavadataset[19]aswellthepastresearch[22,34],ArXiv,December2023,Ames,USA AshwinKallingalJoshy,MirzaSanjidaAlam,ShailaSharmin,QiLi,andWeiLe Labeled Unlabeled Data Data Program With Function Level Vulnerable Commits Vulnerability Data Feature Extraction ActiveClean ActiveClean Committee Query Models Committee Line Level Suggested Label Line Level Label Generated Suggestions Vulnerability Data Corrections Vulnerability Data On Demand For Labeling (a)InputsandOutputsforActiveClean (b)TrainingtheModelusingActiveLearning Figure2:OverviewofActiveClean CommitLine CommitContext Name Description Name Description assignment Containsassignmentstatement controlBlock Insideacontrolstructure comparator No.ofcomparatoroperators doBlock Insideado-whileblock arithmetic No.ofarithmeticoperators ifBlock Insideanifblock logical No.oflogicaloperators elseBlock Insideanelse flagVar Sets/Removesbooleanvariable switchBlock Insideaswitchblock hasLiteral Containsliteralstrings tryBlock Insideatry-catch isLocal Involveslocalvariable forBlock Insideaforblockblockblock hasRet Containreturnstatement whileBlock Insideawhileblock funcCall Containfunctioncall dependedBy Totalno.ofcontroldependencies WithinTheCommit controlledBy Totalno.ofcontroldependencies Feature Description reachableOutside Hasdatadependencyoutsidecommitlines controlDepend Controldependentonothercommitlines postDomintaedBy No.oflinesitispostdominatedby depends Datadependentonothercommitlines postDominates No.oflinesitpostdominates repeated No.ofrepetitionwithinthecommit repeatedCall No.ofsamefunctioncallrepetitions repeatedControl No.ofsamecontrolstructurerepetitions Table1:FeaturesDesignedForIdentifyingVulnerability-relevantLines Inthisworkweusecodepropertygraphs(CPG)thatintegrates 3.3 ApplyingActiveLearningforCleaningthe controlflowgraph,abstractsyntaxtreeanddependencygraph[46] Commits torepresentsourcecodefromcommitsandextractthefeatures. Weformulatethepatchcleaningproblemtoamachinelearning Specifically,weuseitsimplementationinJoern[1],acodeanalysis classificationproblem.Heretheinputisthevectorthatrepresents platform,whichsupportsmultiplelanguagesincludingC,C++,Java thefeaturesofacommitline.Theoutputisalabel,“1”indicatesthat andPython. thelineisrelevanttothevulnerability;“0”indicatesthatthelineis Inordertoextractfeatures,foreachcommitweretrievedboth othertypesofchanges,e.g.,refactoringorfunctionenhancement. thecurrentandbeforethepatchversionsofallthesourcecode We used active learning, specifically, the query-by-committee filespresentinacommit.ThenweusedJoerntobuildCPGsfor approach,tolearnthepatternsthatidentifybugrelevantcommit them.Next,weanalyzedthesyntacticandsemanticpropertiesfor lines from the extracted features. Query-by-committee is a very eachindividualcommitlinesusingJoernandextractedthefeatures effectiveactivelearningapproachthathasbeensuccessfullyapplied listedinTable1withcustomCPGquerieswedesigned. todifferentclassificationproblems[33].Specifically,weusedthe query-by-committeeusingvote-entropyasthedisagreementmetricActiveClean:GeneratingLine-LevelVulnerabilityDataviaActiveLearning ArXiv,December2023,Ames,USA frommodAL[13],amachinelearningframeworkbuiltontopof Wesearchedforreadilyavailablebenchmarksbasedontheabove Scikit-learn[35]. criteriaintheliterature[5,7,8,12,14,15,19,20,23–25,28,32,42, Acommitteecanconsistoftwoormoretraditionalmachinelearn- 45,48,51,53].Asaresult,wechooseallthe17Apacheprojects ingapproacheslikeRandomForestorSupportVectorMachines. in[19]forJavaprograms,asitprovidedmanuallyverifiedline-level Thengivenasetofunlabeleddata,query-by-committeeiteratively groundtruthlabelsforbugfixingcommits.Thisgaveus27Kline- selectsdatatobelabeledfortraining.Typically,theselectionisde- levellabelsspreadover365commits.Wewerehowever,unableto terminedbysomemeasureofdisagreementinthecommitteeabout findanequivalentbenchmarkforCprograms.Whilebenchmarks |
itspredictedlabels.Thecommitteetypicallyselectstheexamples likeSIR[15],ManyBugs[28]andDBGBench[5]werepromising, withtheleastcertaintyandmaximumdisagreement,aslabeling SIRonlycontainsseededbugs.ManyBugsandDBGBenchreported suchexamplesbringinthemaximuminformationtothemodels. veryfewbugsperprogram. Usingthisapproach,activelearningtypicallyrequiredmuchfewer Thus, we used FFmpeg and QEMU from Devign [53] in our labelsandcanquicklylearnagoodmodel. study.Thedatasetcontainfunctionlevelvulnerabilitylabelsand Inourapproach,first,wetaketheextractedfeaturesalongwith arewidelycited.Twoauthorsofthispapermanuallyinspectedthe knownline-levelgroundtruthlabelstotrainasetofinitialbase commitsandprovidedline-levellabelsforasmallsetofdataas modelsforthecommittee.Next,weextractfeaturesfromcommit neededbyourevaluation,followingthemanualinspectionprotocol lineswithoutline-levelgroundtruthdata.Wethenusethedisagree- documentedintheliterature[21].Givenabudgetof21days,thetwo mentbetweenthepredictionsfromthedifferentcommitteemodels authorsprovidedlabelsfor260(170FFmpeg+100QEMU)randomly toselectafixedamountofadditionaldatatobelabeledineach selectedcommits.Thereweredisagreementover13(3FFmpeg+10 iteration.Ausermanuallyinspectsandprovidelabelsforthese QEMU)ofthe260commits’labelsevenafterdiscussionsbetween commitlines.Finally,werepeatthequeryingandlabelinguntila theauthors.Hence,weexcludedthemfromourdataset.Intotal, fixedbudget,e.g.,measuredbytimeorthenumbersofdatawecan wegeneratedline-labelsfor4.4K(3.2KFFmpeg+1.2KQEMU)lines label,isexhausted. over257(167FFmpeg+90QEMU)commitsforCcode. Iftheinputtothemodelisprogramwithvulnerablecommits 4.1.3 SelectingMachineLearningModelstoSetuptheActiveLearn- (showninFigure2a),weusedthefinalmodelafteractivelearning ingFramework. Weapplythequery-by-committee[13]basedactive topredictifeachindividualcommitlineisrelevanttothebugor learningframeworktotrainamodel.Thecommitteeconsistedof not.Basedonthepredictionforeachline,thecommitlinesare twomodels.Toselectthemodels,weconsideredfivewell-known encodedas“1”or“0”dependingoniftheywererelevanttothe machinelearningmodels:namelyRandomForest[4],LabelSpread- vulnerabilityornot,respectively,astheoutput. ing[52],LabelPropagation[54],SupportVectorMachine[9],and Forfunctionlevelvulnerabilitydata,ontopofline-leveldata, LogisticRegression[10].Weusedthegroundtruthlabelsfromspike wealsooutputthelinenumbersofthevulnerablelinesfromthe andgiraph,twootherJavaprojectsfrom[19],toselectthecommit- functionandwhetheravulnerablefunctionneedstobereexamined tee.Basedonthisinitialanalysis,ActiveCleanwasimplementedas duetoabnormalamountofnon-vulnerablelinesastheoutput. aquery-by-committeeactivelearningmodelusingRandomForest andSupportVectorMachine. 4 EVALUATION 4.1.4 ExperimentaldesignforRQ1. InRQ1,ourgoalistoevaluate Inthispaper,weaimtoanswerthefollowingresearchquestions: theeffectivenessandefficiencyoftheoflabelsgeneratedusing • RQ1[Validation]Canourapproacheffectivelyandeffi- ActiveClean.Wemeasuredeffectivenessbyevaluatingthereduc- cientlycleanthebuggycommits? tionofcommitsandthecorrectnessofthereduction.Specifically, • RQ2[Comparison]HowdoesActiveCleancomparedto forreductionofcommitswemeasure(1)theaveragenumberof otherbaselinesandsettings? bug-irrelevantlinesremoved,and(2)thenumberofcommitsthat • RQ3[Application]Howourapproachcanhelpdeeplearn- hadtheirbug-irrelevantlinesremoved.Forthecorrectnessofthe ingbasedvulnerabilitydetection? reduction,wereportedtheF1scorefortheprediction.Forefficiency, wereportthenumberoftrainingexamplesneededagainsttheF1 4.1 ExperimentalSetup scoreachievedviaActiveClean. 4.1.1 Implementation. WeimplementedActiveCleanforCandJava FortheJavadataset,weusedtheline-levelgroundtruthlabels programsusingJoern[1],Scikit-learn[35],modAL[13],Python, providedbythebenchmark[19]toevaluatetheeffectivenessand Bash,andScala.Specifically,weusedJoernwithBashscriptsand efficiencyofourapproachusing3-foldcrossvalidation.Oursetup Scalatoextractfeaturesfrompatches.ThenweusedScikit-learn follows the active learning literature [20, 50]. First, we trained alongwithmodALforactivelearning. ActiveCleanusing20%ofthetrainingdatatosetupthebasemodels inthecommittee.Inthesecondstep,wequeriedthecommittee 4.1.2 Subjectselection. Toanswertheresearchquestionsanddemon- for200iterationsand10commitlinesperinteractionfromthe stratethatourtechniquesareapplicableinpractice,weaimtouse remaining80%ofthetrainingdata.Aftertrainingtheadditional2K benchmarksthat(1)arereal-worldopensourceprogramsforJava commitlines,weplottedtheF1scoreatqueryintervalstoevaluate andC,(2)havemanuallylabelledorverifiedbugfixingcommits,so thelearningefficiency.Finally,weusedthepredictionmadeonthe wecanhavegroundtruthtocompareagainst,(3)andareactively testingdatatocollecttheeffectivenessmetrics. usedintheresearchcommunity,soourresultscandirectlybenefit To train base models for C projects, we used the entire Java |
theusers. dataset.ThisisbecauseweonlycanlabelasmallsetofCdataArXiv,December2023,Ames,USA AshwinKallingalJoshy,MirzaSanjidaAlam,ShailaSharmin,QiLi,andWeiLe andtheywereallusedastestdata.Inthesecondstep,wequeried twoapproaches.Forboththebaselineandablationexperiments, thecommitteefor100iterationswith10commitlines(FFmpeg) weusedtheJavadatasettoensurethatwecanusealargeamount periteration.Thelabelsforthese1Kcommitlinesweregenerated oflabeleddata. ondemandbyoneauthor.Afterthetraining,weplottedtheF1 scoreatqueryintervalstoevaluatetheimprovementinefficiency. 4.1.6 ExperimentaldesignforRQ3. RQ3aimstodemonstratethe Finally,forevaluatingeffectiveness,weusedthepredictionmade usefulnessofActiveClean.Forourfirstapplication,weapplyour onthetestingdataaswellastheremainingunlabeledFFmpegdata. cleaneddatasetintheline-levelvulnerabilitydetectionandcompare Sincewedon’thavegroundtruthforunlabeledFFmpegdata,we themodelperformancewithandwithoutourcleaneddata.Weused randomlyselected500commitlines(250eachfromvulnerability LineVul[17],theSOTAandonlyline-levelvulnerabilitydetection fixingandbug-irrelevantpredictions)andmanuallyevaluatedthem modelthatisabletorunonFFmpeg.WetrainedLineVulwith9.5K usingourmanualinspectionprotocolgivenabove. functionsofFFmpeg.Inthevanillasetting,weusecommitsofthe Finally,wewantedtoevaluatetheeffectivenessofActiveCleanfor functionsasline-levellabels,andintheActiveCleansetting,we newprojects.Specifically,wewantedtomeasurehowActiveClean useourcleaneddataset.Tocomparetestsetperformance,weused performedonanunseenCprojectandwhetherwecanquickly themetricsofF1score,thetop10accuracyandIFA(theaverage adaptActiveCleanfornewprojects.Forthisexperiment,weused positionofthefirstvulnerablelinewithinthetop10)tomeasure QEMU,anotherCdatasetpresentinDevign.Twoauthorsmanually theaccuracyofpredictedvulnerablelinescomparedtotheground inspectedandlabeled100commitsusingthemanualinspection truth. protocolgivenabove.Duetodisagreement,wethrewout10com- Inthesecondapplication,weexploredthepossibilityofusing mits’labels.Wecreated2.8Klabelsandusedthislabeleddataset ActiveCleantosuggestlabelcorrections.Specifically,werecom- asthetestset.WethenapplyActiveCleanthatwastrainedonJava mendedanyvulnerablefunctionwithmorethan50%vulnerability- andFFmpegdatasetfromabovetotheQEMUtestset.Theperfor- irrelevantlinesforreclassification.Werandomlysampled50func- mancewasmeasuredusingtheF1score.Next,inordertomeasure tionswhichActiveCleanproposedasvulnerability-irrelevant,and adaptability,weusedthequery-by-committeetoqueryandteach thenappliedthemanualinspectiontodeterminetheircorrectness. 100morelabels.WeusedthechangeinF1scoretomeasurethe SinceActiveCleancancorrectfunction-levellabels,wealsoran adaptability. LineVulforfunctionlevelpredictionusingthelabelsfromDevign andusingthelabelsfrom ActiveClean respectively. WeusedF1 4.1.5 ExperimentaldesignforRQ2. Tocomparetheactivelearning scorestocomparetheperformanceforthetwosettings. approach we used, we selected top 3 machine learning models FortheexperimentswithLineVul,weusedthesametestsetsfor asbaselines,namelyLabelSpreading,RandomForestandSupport baselineandActiveClean.Theyarethemanuallylabeledground VectorMachine.Themodelsarechosenbecausetheyranktopfor truthfromFFmpeg. thespikeandgiraphdatasetsfromSection4.1.3.Wetrainedeach 4.1.7 Runningtheexperiments. Thefeatureextraction,ActiveClean model200times,startingwith20%ofthetrainingdataandthen andbaselinemodelsweretrainedandevaluatedonaVMwith64- adding10moretrainingdataeachtime.Thathelpsusunderstand bit16coreIntelHaswellprocessorand32GBofRAM.TheLineVul whataretheminimumdataneededforthosemodelstoachieve modelswereevaluatedonaVMwitha32coreCPUandGPUwith thetopperformance,sothatwecancomparethemwithactive 16GBRAM.BoththeVMswererunningCentOS8. learning.Wehavealsosearchedtheliteratureforotherbaselines[3, 6,8,14,17,23,27,36,39,42,47]thatcancleanline-levelcommits. 4.2 ResultsforRQ1:Validation Amongthetoolsweareabletorun,(1)FLEXMEhandledonlyC# projects;(2)LineVul[17]isabletopredictline-levelvulnerability, In Table 2, we present the results for the Java dataset labeled butitreportedlessthan15%F1duetoourbenchmarksbeingout from [19], the manually labeled FFmpeg, and Qemu, under the ofdistribution. columnsJavatest,FFmpegtestandQemunewprojecttestrespec- InRQ2,wealsoperformedtwoablationstudies.Inthefirststudy, tively.UnderFFmpegapplication,wereporttheresultsofprocessing weaimtostudytheimpactofmodelselectionforquerycommittee. theentireFFmpegdataset(excludingthemanuallylabeledones Forthis,wetrained3modelswithdifferentquerycommittee.Specif- underFFmpegtest)inDevign. ically,weusedquerycommitteewithRandomForestinconjunction As shown in row ActiveClean F1 Score, ActiveClean reported withLogisticRegression,LabelSpreading,andLabelPropagation, thehighestF1scoreof74.83forFFmpegtest,anditreportedwith asthesegavethebestperformanceinourtuningdataset(seeSec- F1 score 70.23 for Java test. Since we don’t have the line-level tion4.1.3).Thesemodelsweretrainedusingthesameinitialtraining labelsforFFmpegapplication,wedidnotreportF1;however,we |
dataasActiveClean.Thenweusedtheircommitteestoselectthe sampled results to manually confirm the predictions (see later). sameamountofadditionaltrainingdataasActiveCleanfromthe WhenapplyingtoQemuwithoutanyretraining,wereportF1score restofthetrainingdataset. of59.53.Afterproviding100lineadditionallabels,weobtained Inthesecondstudy,weaimtounderstandtheadvantageofap- 64.46F1shownunderQemunewprojecttest. plyingactivelearningcomparedtorandomsamplingofthelabeled ActiveCleanisabletoreducebugirrelevantcommitlinesacross data.Hence,weusedthesamebasemodelstrainedwith20%train- allthedatasets,asshownintherowsTotalBug-irrelevantLines ingdata.Insteadofusingactivelearning,werandomlysampled (Predicted)andTotalBug-irrelevantLines(Confirmed).ForFFmpeg thesameamountofdataateachiterationthesamewaywetrain application,wesampled500dataforinspection(250bug-relevant, ActiveClean.WeplotthechangeofF1scoresoveriterationsforthe 250bug-irrelevant).Amongthe250bugirrelevantlinesreportedbyActiveClean:GeneratingLine-LevelVulnerabilityDataviaActiveLearning ArXiv,December2023,Ames,USA Dataset Javatest FFmpegtest FFmpegapplication Qemunewprojecttest ActiveCleanF1Score 70.23 74.83 — 64.46 TotalCommitLines 8909 3129 98267 1200 TotalBug-irrelevantLines(Predicted) 6832 1202 51950 748 TotalBug-irrelevantLines(Confirmed) 5792 544 185/249 419 TotalCommits 364 167 5246 90 CommitsWithBug-irrelevantLines(Predicted) 342 114 3628 76 CommitsWithBug-irrelevantLines(Confirmed) 332 57 — 43 AverageSizeOfCommits 25.09 18.73 17.01 13.33 AverageBug-irrelevantLinesPerCommits(Predicted) 20.42 10.54 14.31 9.84 AverageBug-irrelevantLinesPerCommits(Confirmed) 17.49 6.03 — 9.74 Table2:ResultsforRQ1:Effectiveness 70 69 0 250 500 750 1000 1250 1500 1750 2000 Number of queries erocS 1F 75 74 73 72 71 0 100 200 300 400 500 600 700 800 900 1000 Number of queries (a)F1ScorevsnumberoflabeleddataforJavatestfor2Kadditional trainingdata erocS 1F (b)F1ScorevsnumberoflabeleddataforFFmpegtestfor1Kaddi- tionaltrainingdata Figure3:ResultsforRQ1:EfficiencymeasuredasF1scoreagainstnumberoftrainedlabeleddata ActiveClean,weconfirmedthat185werecorrectlypredicted,while AverageBug-irrelevantLinesPerCommit(Confirmed),whilethe amongbugrelevantlines,weconfirmedthat187outof250were averagesizeofthecommitsisshowninAverageSizeOfCommit. indeedbugrelevant.Weexcluded8(onefromirrelevantand7from FortheJavatest dataset,onaverage17.5linesoutofacommit relevant)outof500forconsiderationbecausethetwoauthorswere with25lineswerebug-irrelevant.Interestingly,Qemunewproject notabletoachieveanagreement.Thesignificantlyhighernumber test reportedanaverageofmorethan9bug-irrelevantlinesper ofbug-irrelevantlinesinJavatestwereduetodocumentationand commitwhentheaveragesizewasonlyaround13.Thisindicates testcaseschangesthatwerepresentinbugfixingcommitsforthis thatonaverage,therearealargeportionofcommitsthatneedtobe dataset. cleanedoutastheyarenotrelatedtovulnerability.Directlyusing ActiveCleanalsoshowedthatthebug-irrelevantlineswerespread commitsasline-levellabelscanbringinmuchnoiseforline-level across commits in all the datasets, as shown in the rows Com- vulnerabilitydetection. mitsWithBug-irrelevantLines(Predicted)andCommitsWithBug- TheFigure3showstheresultsforefficiencyofActiveClean.Y irrelevantLines(Confirmed).Thus,theline-levelcleaningtoollike axisisF1score.Xaxisplotsthenumberoflabeledlinesprovided ActiveCleancanbeusefulformanycommits.EvenfortheDevign duringtheperiodofactivelearning.Figure3aandFigure3bshow datasetsofFFmpegtestandQemunewprojecttestthatlikelyhave thecorrespondingplotsforJavatestandFFmpegtestrespectively. beencleanedoutdocumentationandtest(unlikeJavatest)butfo- TheresultsshowthatActiveCleanisabletoquicklyimprovethe cusononlythevulnerabilityfunctions,wereportedmanycommits F1scoresforbothdatasets.Figure3ashowsthatonly400additional neededtobecleaned. commitlineswereneededforActiveCleantoreachanF1scoreof The prevalence of bug-irrelevant lines per commit is shown 70forJavatest.Similarly,anadditional400trainingdataimproved inrowsAverageBug-irrelevantLinesPerCommit(Predicted)and theF1scoreforFFmpegtestto74from71.5asshowninFigure3b.ArXiv,December2023,Ames,USA AshwinKallingalJoshy,MirzaSanjidaAlam,ShailaSharmin,QiLi,andWeiLe Togetacompletepictureoftheefficiency,wetrainedamodelwith theentiretrainingdataavailable(atotalof6.6Klabels)forJava test.ThismodelhadanF1scoreof72.82.Thatsaid,ActiveClean onlyneeded400ofdata(ontopofbasemodelstrainedwith20%) toreachclosetothebestperformance. 4.3 ResultsforRQ2:Comparison BaselineModels AverageF1 ActiveClean 70.230 LabelSpreading 67.187 RandomForest 67.175 SupportVectorMachine 67.076 Table3:ResultsforRQ2:BaselineComparison:AverageF1 scoresafterfinaltraining Table3showstheaverageF1scoresforthedifferentbaselines afterthefinaltraining.ActiveCleangotthebestaverageF1scoreof 70.23.Thebaselinesallperformedsimilarlyat67.18,67.17,and67.07 forLabelSpreading,RandomForest,andSupportVectorMachine, respectively. 70 69 68 67 0 250 500 750 1000 1250 1500 1750 2000 Number of additional training data erocS 1F |
ActiveCleanleadswithanF1scoreof70.23followedcloselybythe modelwithRandomForest+LogisticRegressionat69.10.Themodels withLabelPropagationandLabelSpreadingperformedsimilarly at68.812and68.765.However,allthedifferentcommitteebased modelsperformedbetterthanthebaselinemodels. Models AverageF1 ActiveClean 70.230 AblationStudy:ActiveLearning RandomForest+LogisticRegression 69.108 RandomForest+LabelPropagation 68.812 RandomForest+LabelSpreading 68.765 AblationStudy:RandomSelection RandomForest+SupportVectorMachine 69.031 Table4:ResultsforRQ2:AblationStudies:AverageF1scores afterfinaltraining TheefficiencycomparisonforthisstudyisshowninFigure5a. TheF1scoreatdifferentamountofadditionaltrainingdatafor ActiveCleanisshowninorangeandRandomForest+LogisticRe- gression,thenextbestsetting,isshowninblue.Aftertheinitial training,ActiveCleanhadanF1of68.54whileRandomForest+Lo- gisticRegressionhad67.74.ActiveClean,improvedtheF1scoreby1 pointto69.58aftertraining120additionalcommitlines.Meanwhile, ittookRandomForest+LogisticRegression240,double,theaddi- tionaltrainingdatatoimprove1pointto68.74.However,ingeneral allthecommitteebasedmodelsshowssteadyimprovements,butat slowergrowthcurves.Forexample,modelwithRandomForest+ LabelPropagationtook1KadditionaltrainingdatatoreachanF1 of69. Forthesecondablationstudyusingrandomselection,themodel hadanF1scoreof69.03attheendof2Kadditionaltrainingdata compared to ActiveClean’s F1 score 70.23. We found that using Label Spreading random selection with ActiveClean’s committee was still better ActiveClean thanotherselectionsofthecommittees.Figure5b,showstheeffi- ciencycomparisonwhereActiveClean(inorange)showedfaster improvementinF1scoreperadditionaltrainingdatathanthesame Figure4:ResultsforRQ2:BaselineComparison:Efficiencyof committeewhentrainedwithrandomselection(showninblueas LabelSpreading(thebestbaseline)comparedtoActiveClean ActiveClean-Random).Bothmodelsstartedoutat68.54F1score after initial training. But it took ActiveClean only 50 additional InFigure4,wehaveplottedtheF1scoreatdifferentamount trainingdatatopassanF1scoreof69,whileActiveClean-Random oftrainingdataforLabelSpreading (showninblue),whichwas took500.WhileActiveCleanfinallyreachedanF1scoreof70.21, thebestbaseline.WealsoplottheF1scoreforActiveClean(shown ActiveClean-Randomonlyreached69.341with2Kadditiondata. in orange), for comparison. As we can see, just after the initial 4.4 ResultsforRQ3:Applications training,thecommitteebasedActiveCleanreachedanF1scoreof 68.56comparedtotheLabelSpreadingmodel’s66.68.Thisinitial lead,isfurtherimprovedquicklybylearningthedatasuggested Model F1 Top10 IFA CorrectlyPredicted byqueryingthecommittee.ActiveCleantookjust50additional datatogetto69and400forreaching70.Meanwhile,evenafter Functions Lines 2Kadditionaltrainingdata,LabelSpreadingwasunabletoimprove ActiveClean-FFmpeg 87 73% 6.68 238 178 even1fullpoint.Thisclearlydemonstratedtheadvantageofactive FFmpeg 83 66% 8.11 220 108 learningthatitcanlearnveryquicklywithlessertrainingdata. Table5:ResultsforRQ3:LineVulRanwithCleanedDataset WepresenttheresultsforthetwoablationstudyinTable4.Inthe firststudywherewetriedactivelearningusingdifferentcommittees.ActiveClean:GeneratingLine-LevelVulnerabilityDataviaActiveLearning ArXiv,December2023,Ames,USA 70 69 68 0 250 500 750 1000 1250 1500 1750 2000 Number of additional training data erocS 1F 70 69 Random Forest + Logistic Regression ActiveClean 0 250 500 750 1000 1250 1500 1750 2000 Number of additional training data (a)ActiveLearningAblationStudy:RandomForest+LogisticRe- gression(thesecondbestsetting)comparedtoActiveClean erocS 1F ActiveClean-Random ActiveClean (b)RandomLearningAblationStudy:RandomSelectionbasedAc- tiveCleancomparedtoActiveClean Figure5:ResultsforRQ2:EfficiencymeasuredasF1vsamountofadditionaltrainingdata Atthelinelevel,usingthecleandatasetActiveClean-FFmpeg, 1 + static av_cold int alac_decode_close(AVCodecContext * LineVulreported178vulnerablelineswhilethevanilladatasetFFm- avctx) { peg onlyreported108vulnerablelines.Thisisa1.6Ximprove- 2 + ALACContext *alac = avctx->priv_data; 3 + int chan; mentresultingin70morevulnerablelines.TheActiveClean-FFmpeg 4 + for (chan = 0; chan < alac->numchannels; chan++) { modelalsoreported73%Top10accuracy,comparedto66%reported 5 + av_freep(&alac->predicterror_buffer[chan]); byFFmpegmodel.TheActiveClean-FFmpegmodelreportedanIFA 6 + ... of6.68asopposedto8.11,animprovementof1.43.LowerIFAmeans 7 + return 0; 8 + } thelessernumberofbug-irrelevantlineswithintheTop10ranking 9 static av_cold int alac_decode_init(AVCodecContext * ofthemodel.Inanotherword,adeveloperwouldtypicallyhaveto avctx) { inspect1.5lineslessperfunctioninordertoreachthevulnerable 10 int ret; line. 11 ALACContext *alac = avctx->priv_data; 12 alac->avctx = avctx; 13 ... Numberof Total 14 allocate_buffers(alac); 15 - return 0; FunctionsExamined 50 16 - } Non-VulnerableFunctions 29 17 - static av_cold int alac_decode_close(AVCodecContext * VulnerableFunctions 13 avctx) { Undecided 8 18 - ALACContext *alac = avctx->priv_data; 19 - int chan; |
Table6:ResultsforRQ3:HelpCorrectFunction-LevelLabels 20 - for (chan = 0; chan < alac->numchannels; chan++) { 21 - av_freep(&alac->predicterror_buffer[chan]); InTable6,weshowthatActiveCleancanhelpcorrectfunction- 22 - ... 23 + if ((ret = allocate_buffers(alac)) < 0) { level labels. In our experiments, ActiveClean recommended 468 24 + av_log(avctx, AV_LOG_ERROR, "Error allocating label correction for the FFmpeg dataset. We randomly sampled buffers\n"); 50functionsforinspection,andthetwoauthorsagreedthat29 25 + return ret; functionsor69.04%wereindeednon-vulnerable. 26 return 0; 27 } 4.5 InterestingExamples Figure6:anincorrectlabelinFFmpegintheDevigndataset InthissectionwegivetwointerestingexamplesthatActiveClean recommendedasincorrectfunction-levellabels. InFigure6,weshowanexampleinFFmpegwhereDevignpro- vulnerableandthereisnocorrectionaddedforthisfunctioninthe videdanincorrectfunction-levellabelforalac_decode_close(the commits.ActiveCleancorrectlydetectedthis. olderversion).Thediffpatch1isshowninFigure6,whereredindi- Figure7showsapatch2 thatonlyformatsthecode,butthe catesdeletionandgreenindicatesaddition.Here,lines17–22from functionRENAMEatline1waslabeledasvulnerableintheDevign theolderfunctionweremovedtolines1–6.Thisfunctionisnot 1https://github.com/FFmpeg/FFmpeg/commit/53df079a730043cd0aa330c9aba7950034b1424f 2https://github.com/FFmpeg/FFmpeg/commit/6e42e6c4b410dbef8b593c2d796a5dad95f89ee4ArXiv,December2023,Ames,USA AshwinKallingalJoshy,MirzaSanjidaAlam,ShailaSharmin,QiLi,andWeiLe 1 static inline void RENAME(yuvPlanartoyuy2)(const uint8_t inaccuratedatasetlabelsonpredictiondecisions,andChakraborty *ysrc, const uint8_t *usrc, const uint8_t *vsrc, et al. [7] identified that duplicate data, inadequate token-based uint8_t *dst, models, irrelevant feature learning, and imbalanced data cause 2 - long width, long height, poorperformanceofexistingmodelswhenappliedtoreal-world 3 - long lumStride, long chromStride, long dstStride, long vertLumPerChroma) problems.AstudyconductedbyHerboldetal.[19]thatmanually 4 + long width, long height, validates a large corpus of bug fixing commits, highlighted the 5 + long lumStride, long chromStride, long limitationsofmanualvalidationandtheimpactofbug-irrelevant dstStride, long vertLumPerChroma) linesondataquality. 6 { 7 - long y; The granularity of the existing vulnerability related datasets 8 - const long chromWidth= width>>1; variesfromfile-level[49]tofunctionlevel[7,16,37,51].Reveal[7] 9 - for(y=0; y<height; y++) usespubliclyavailablepatch.Devign[53]usesmanuallylabeled 10 - { datafromcommitsfilteredthroughkeywords,andBig-Vul[16] 11 + long y; employsthedescriptionofCommonVulnerabilitiesandExposures 12 + const long chromWidth= width>>1; 13 + for (y=0; y<height; y++) (CVE)databaseaswellascommitstogeneratefunction-levelvul- 14 + { nerabilitydataset.Yuetal.[49]suggestedusingactivelearning withuser-feedbacktoimprovefile-levelvulnerabilitypredictions. Figure7:MislabeledFormattingChangesasVulnerable Comparedtoourwork,theseworkslackthegranularityrequired totrainagoodline-levelvulnerabilitydetectiontool.Atline-level granularity,BugBuilder[23]andD2A[51]usetestcasesanddif- ferentialstaticanalysistoexcludeirrelevantchanges.However, dataset.ActiveCleanrecommendedtocorrectthislabel.Interest- boththeseapproachesarelimitedintypesofvulnerabilitytheycan ingly,thispatchchanged5fileswithover6Kchanges,allofwhich processduetotheirfilteringmethods.Incomparison,ourapproach areformattingchanges.Mostofthefunctionsinsidethiscommit learnstodistinguishbetweenbug-relevantandbug-irrelevantlines wasrecommendedforlabelcorrectionbyActiveClean. usingpatternsintheextractedfeaturesandwouldbefreefrom suchrestrictions. 5 THREATSTOVALIDITY Inthepast,therehavebeenmanyworksonprocessingtangled InternalThreatstoValidity:Oneoftheimportantchallenges commits.Theseworkshaveusedtesting[18,48],graphcluster- wefaceistheneedofgroundtruthforcleanedcommits.Hence, ing [8, 29, 39], or feedback based tools [26, 42] to identify tan- weusedthedatasetprovidedby[19]forourJavaprojects,which gledchanges.DEPTEST[48]andDeltaDebugging[18]usedau- requiredfourdifferentauthorstoreviewthelabelswithatleast tomatedtestingtofilteringtangledcodechanges.Kirinuki[26] 3authorsagreeingwiththelabels.Thegroundtruthgenerated andCoRA[42]createdtoolsthatwarnedorhelpedusersavoid forourCProjectusedtwoauthoragreementfollowing[21].To makingtangledchangesinthefirstplace.ComUnt[8],SmartCom- ensurethereisnomodelparameterselectionbias,weusedentirely mit[39],UTango[29],andFlexeme[36]addressthisissuewith differentprojectsforthemodelselectionthanthoseusedforin variedgraph-basedapproaches,includinggraphpartitioning. theexperiments.Wealsoused3-foldcrossvalidationforourex- perimentstoensurethatareresultsdoesn’tshowselectionbias. Wealsosampledtheresultsthatdidnothavegroundtruthand manuallyvalidatedthemusingtwoauthors. |
ExternalThreatstoValidity:Tomitigateexternalthreats,we 7 CONCLUSIONANDFUTUREWORK usedgroundtruthlabelsfromtwodatasetscontainingJavaandC ThispaperpresentsActiveClean,anautomaticandscalabletool real-worldopen-sourceprojectscoveringmorethan4.3Kcommits forgeneratingline-levelvulnerabilitydata.ActiveCleanusesactive with119Kcommitlines.Theseprojectshaveverydifferentpur- learningtosignificantlyreducethelabeleddataneededtotrain pose,size,andauthorstoensurethatActiveCleandoesn’tlearnand the model. We designed features considering commit lines and reportprojectspecifictraits.WehaveevaluatedActiveCleanon17 theirsurroundingcode.Inourevaluation,weusedbothJavaandC javaprojectsand2Creal-worldrepositoriesfromwidelyusedthe datasetsandprocessedmorethan4.3Kcommitsand119Kcommit vulnerabilitydatasetDevign.Tothebestofourknowledge,there lines.Weshowthatcommitsareverynoisyandmanycommits areLineVul,LineVD,andIVDetectthreeline-levelvulnerability cancontainsuchnoise.ActiveCleanisabletofindsuchlinesthat detectiontools.OnlyLineVulcanberunsuccessfully,anditisthe arenotrelevanttovulnerability,andreportedF1scoresbetween SOTA. 70–74. Using ActiveClean, we generated line-level vulnerability dataforFFmpegintheDevigndataset.Wedemonstratedthatusing 6 RELATEDWORK thiscleaneddataset,LineVulisabletodetect18morevulnerable Severalstudieshaveemphasizedtheneedforlargehighquality functionsat87F1scoreand70morevulnerablelinesat73%top10, dataforvulnerabilitydetectiontasks.Croftetal.[11]investigated comparedto83F1scoresand66%top10reportedbythebaseline.By thesignificanceofdataqualityinsoftwarevulnerabilitydatasets sampling50labelsrecommendedbyActiveCleanforcorrection,we andrevealedthatinaccuraciesinvulnerabilitylabelsandduplicated find29incorrectfunction-levellabelsforFFmpeg.Inthefuture,we datapointscanpotentiallyleadtoineffectivemodeltrainingand willuseActiveCleantocontinuegeneratingmoreline-leveldatasets unreliableresults.Wuetal.[44]discussedtheadverseeffectsof andreportingfalselabelsinthecurrentvulnerabilitydatasets.ActiveClean:GeneratingLine-LevelVulnerabilityDataviaActiveLearning ArXiv,December2023,Ames,USA REFERENCES [20] KimHerzigandAndreasZeller.2013.Theimpactoftangledcodechanges.In [1] 2020.Joern-TheBugHunter’sWorkbench. https://joern.io/ 201310thWorkingConferenceonMiningSoftwareRepositories(MSR).IEEE,San [2] 2023.ZeroDayInitiative—LookingBackattheBugsof2022. https://www.thezdi. Francisco,CA,USA,121–130. https://doi.org/10.1109/MSR.2013.6624018 com/blog/2023/1/4/looking-back-at-the-bugs-of-2022 [21] MdJohirulIslam,GiangNguyen,RangeetPan,andHrideshRajan.2019. A [3] MikeBarnett,ChristianBird,JoaoBrunet,andShuvenduK.Lahiri.2015.Help- comprehensivestudyondeeplearningbugcharacteristics.InProceedingsofthe ingDevelopersHelpThemselves:AutomaticDecompositionofCodeReview 201927thACMJointMeetingonEuropeanSoftwareEngineeringConferenceand Changesets.In2015IEEE/ACM37thIEEEInternationalConferenceonSoftware SymposiumontheFoundationsofSoftwareEngineering.ACM,TallinnEstonia, Engineering.IEEE,Florence,Italy,134–144. https://doi.org/10.1109/ICSE.2015.35 510–520. https://doi.org/10.1145/3338906.3338955 [4] L.Breiman.2001.RandomForests.MachineLearning45(2001),5–32. [22] MdRakibulIslamandMinhazF.Zibran.2020. Howbugsarefixed:exposing [5] Marcel Böhme, Ezekiel O. Soremekun, Sudipta Chattopadhyay, Emamurho bug-fixpatternswitheditsandnestinglevels.InProceedingsofthe35thAnnual Ugherughe,andAndreasZeller.2017. Whereisthebugandhowisitfixed? ACMSymposiumonAppliedComputing.ACM,BrnoCzechRepublic,1523–1531. anexperimentwithpractitioners.InProceedingsofthe201711thJointMeeting https://doi.org/10.1145/3341105.3373880 onFoundationsofSoftwareEngineering.ACM,PaderbornGermany,117–128. [23] YanjieJiang,HuiLiu,XiaoqingLuo,ZhihaoZhu,XiayeChi,NanNiu,Yuxia https://doi.org/10.1145/3106237.3106255 Zhang,YaminHu,PanBian,andLuZhang.2023.BugBuilder:AnAutomated [6] SicongCao,XiaobingSun,LiliBo,RongxinWu,BinLi,andChuanqiTao. ApproachtoBuildingBugRepository.IEEETransactionsonSoftwareEngineering 2022.MVD:Memory-RelatedVulnerabilityDetectionBasedonFlow-Sensitive 49,4(April2023),1443–1463. https://doi.org/10.1109/TSE.2022.3177713 GraphNeuralNetworks.InProceedingsofthe44thInternationalConference [24] RenéJust,DarioushJalali,andMichaelD.Ernst.2014.Defects4J:adatabaseof onSoftwareEngineering.1456–1468. https://doi.org/10.1145/3510003.3510219 existingfaultstoenablecontrolledtestingstudiesforJavaprograms.InProceed- arXiv:2203.02660[cs]. ingsofthe2014InternationalSymposiumonSoftwareTestingandAnalysis.ACM, [7] SaikatChakraborty,RahulKrishna,YangruiboDing,andBaishakhiRay.2020. SanJoseCAUSA,437–440. https://doi.org/10.1145/2610384.2628055 DeepLearningbasedVulnerabilityDetection:AreWeThereYet?arXiv:2009.07235 [25] HiroyukiKirinuki,YoshikiHigo,KeisukeHotta,andShinjiKusumoto.2014.Hey! |
[cs](Sept.2020). http://arxiv.org/abs/2009.07235arXiv:2009.07235. areyoucommittingtangledchanges?.InProceedingsofthe22ndInternational [8] SiyuChen,ShengbinXu,YuanYao,andFengXu.2022.UntanglingComposite ConferenceonProgramComprehension-ICPC2014.ACMPress,Hyderabad,India, CommitsbyAttributedGraphClustering.In13thAsia-PacificSymposiumon 262–265. https://doi.org/10.1145/2597008.2597798 Internetware.ACM,HohhotChina,117–126. https://doi.org/10.1145/3545258. [26] HiroyukiKirinuki,YoshikiHigo,KeisukeHotta,andShinjiKusumoto.2014.Hey! 3545267 areyoucommittingtangledchanges?.InProceedingsofthe22ndInternational [9] CorinnaCortesandVladimirVapnik.1995.Support-vectornetworks.Machine ConferenceonProgramComprehension.262–265. learning20,3(1995),273–297. [27] HiroyukiKirinuki,YoshikiHigo,KeisukeHotta,andShinjiKusumoto.2016. [10] DavidRCox.1958.Theregressionanalysisofbinarysequences.Journalofthe SplittingCommitsviaPastCodeChanges.In201623rdAsia-PacificSoftware RoyalStatisticalSociety:SeriesB(Methodological)20,2(1958),215–232. EngineeringConference(APSEC).129–136. https://doi.org/10.1109/APSEC.2016. [11] RolandCroft,MAliBabar,andMMehdiKholoosi.2023.Dataqualityforsoftware 028ISSN:1530-1362. vulnerabilitydatasets.In2023IEEE/ACM45thInternationalConferenceonSoftware [28] ClaireLeGoues,NealHoltschulte,EdwardK.Smith,YuriyBrun,Premkumar Engineering(ICSE).IEEE,121–133. Devanbu,StephanieForrest,andWestleyWeimer.2015. TheManyBugsand [12] ValentinDallmeierandThomasZimmermann.2007. Extractionofbuglocal- IntroClassBenchmarksforAutomatedRepairofCPrograms.IEEETransactions izationbenchmarksfromhistory.InProceedingsofthetwenty-secondIEEE/ACM onSoftwareEngineering41,12(Dec.2015),1236–1256. https://doi.org/10.1109/ internationalconferenceonAutomatedsoftwareengineering.ACM,AtlantaGeor- TSE.2015.2454513 giaUSA,433–436. https://doi.org/10.1145/1321631.1321702 [29] YiLi,ShaohuaWang,andTienNNguyen.2022.UTANGO:untanglingcommits [13] TivadarDankaandPeterHorvath.[n.d.].modAL:Amodularactivelearning withcontext-aware,graph-based,codechangeclusteringlearningmodel.In frameworkforPython. ([n.d.]). https://github.com/modAL-python/modAL Proceedingsofthe30thACMJointEuropeanSoftwareEngineeringConferenceand availableonarXivathttps://arxiv.org/abs/1805.00979. SymposiumontheFoundationsofSoftwareEngineering.221–232. [14] MartínDias,AlbertoBacchelli,GeorgiosGousios,DamienCassou,andStéphane [30] ZhenLi,DeqingZou,ShouhuaiXu,XinyuOu,HaiJin,SujuanWang,Zhijun Ducasse.2015.Untanglingfine-grainedcodechanges.In2015IEEE22ndInter- Deng,andYuyiZhong.2018. VulDeePecker:ADeepLearning-BasedSystem nationalConferenceonSoftwareAnalysis,Evolution,andReengineering(SANER). forVulnerabilityDetection.InProceedings2018NetworkandDistributedSystem 341–350. https://doi.org/10.1109/SANER.2015.7081844ISSN:1534-5351. SecuritySymposium.InternetSociety,SanDiego,CA. https://doi.org/10.14722/ [15] HyunsookDo,SebastianElbaum,andGreggRothermel.2005.SupportingCon- ndss.2018.23158 trolledExperimentationwithTestingTechniques:AnInfrastructureandits [31] GuanjunLin,ShengWen,Qing-LongHan,JunZhang,andYangXiang.2020. PotentialImpact. EmpiricalSoftwareEngineering10,4(Oct.2005),405–435. SoftwareVulnerabilityDetectionUsingDeepNeuralNetworks:ASurvey.Proc. https://doi.org/10.1007/s10664-005-3861-2 IEEE108,10(Oct.2020),1825–1848. https://doi.org/10.1109/JPROC.2020.2993293 [16] JiahaoFan,YiLi,ShaohuaWang,andTienN.Nguyen.2020. AC/C++Code ConferenceName:ProceedingsoftheIEEE. VulnerabilityDatasetwithCodeChangesandCVESummaries.InProceedingsof [32] ShanLu,ZhenminLi,FengQin,LinTan,PinZhou,andYuanyuanZhou.[n.d.]. the17thInternationalConferenceonMiningSoftwareRepositories.ACM,Seoul BugBench:BenchmarksforEvaluatingBugDetectionTools.([n.d.]). RepublicofKorea,508–512. https://doi.org/10.1145/3379597.3387501 [33] PremMelvilleandRaymondJ.Mooney.2004. Diverseensemblesforactive [17] MichaelFuandChakkritTantithamthavorn.2022.LineVul:atransformer-based learning.InTwenty-firstinternationalconferenceonMachinelearning-ICML’04. line-levelvulnerabilityprediction.InProceedingsofthe19thInternationalConfer- ACMPress,Banff,Alberta,Canada,74. https://doi.org/10.1145/1015330.1015385 enceonMiningSoftwareRepositories.ACM,PittsburghPennsylvania,608–620. [34] KaiPan,SunghunKim,andE.JamesWhitehead.2009.Towardanunderstanding https://doi.org/10.1145/3524842.3528452 ofbugfixpatterns.EmpiricalSoftwareEngineering14,3(June2009),286–315. [18] MasatomoHashimoto,AkiraMori,andTomonoriIzumida.2018. Automated https://doi.org/10.1007/s10664-008-9077-5 patchextractionviasyntax-andsemantics-awareDeltadebuggingonsource [35] F.Pedregosa,G.Varoquaux,A.Gramfort,V.Michel,B.Thirion,O.Grisel,M. |
codechanges.InProceedingsofthe201826thACMJointMeetingonEuropean Blondel,P.Prettenhofer,R.Weiss,V.Dubourg,J.Vanderplas,A.Passos,D.Cour- SoftwareEngineeringConferenceandSymposiumontheFoundationsofSoftware napeau,M.Brucher,M.Perrot,andE.Duchesnay.2011.Scikit-learn:Machine Engineering(ESEC/FSE2018).AssociationforComputingMachinery,NewYork, LearninginPython.JournalofMachineLearningResearch12(2011),2825–2830. NY,USA,598–609. https://doi.org/10.1145/3236024.3236047 [36] Profir-PetruPârt,achi,SantanuKumarDash,MiltiadisAllamanis,andEarlT.Barr. [19] SteffenHerbold,AlexanderTrautsch,BenjaminLedel,AlirezaAghamohammadi, 2020.Flexeme:untanglingcommitsusinglexicalflows.InProceedingsofthe28th TaherAhmedGhaleb,KuljitKaurChahal,TimBossenmaier,BhaveetNagaria, ACMJointMeetingonEuropeanSoftwareEngineeringConferenceandSymposium PhilipMakedonski,MatinNiliAhmadabadi,KristofSzabados,HelgeSpieker, onthe FoundationsofSoftwareEngineering.ACM,VirtualEventUSA,63–74. MatejMadeja,NathanielHoy,ValentinaLenarduzzi,ShangwenWang,Gema https://doi.org/10.1145/3368089.3409693 Rodríguez-Pérez,RicardoColomo-Palacios,RobertoVerdecchia,ParamvirSingh, [37] RebeccaL.Russell,LouisKim,LeiH.Hamilton,TomoLazovich,JacobA.Harer, YihaoQin,DebasishChakroborti,WillardDavis,VijayWalunj,HongjunWu, OnurOzdemir,PaulM.Ellingwood,andMarcW.McConley.2018.Automated DiegoMarcilio,OmarAlam,AbdullahAldaeej,IdanAmit,BurakTurhan,Si- VulnerabilityDetectioninSourceCodeUsingDeepRepresentationLearning. monEismann,Anna-KatharinaWickert,IvanoMalavolta,MatusSulir,Fatemeh arXiv:1807.04320[cs,stat](Nov.2018). http://arxiv.org/abs/1807.04320 arXiv: Fard,AustinZ.Henley,StratosKourtzanidis,ErayTuzun,ChristophTreude, 1807.04320. SiminMalekiShamasbi,IvanPashchenko,MarvinWyrich,JamesDavis,Alexan- [38] CarsonD.Sestili,WilliamS.Snavely,andNathanM.VanHoudnos.2018. To- derSerebrenik,EllaAlbrecht,EthemUtkuAktas,DanielStrüber,andJohannes wardssecuritydefectpredictionwithAI. http://arxiv.org/abs/1808.09897 Erbel.2021. AFine-grainedDataSetandAnalysisofTanglinginBugFixing arXiv:1808.09897[cs,stat]. Commits.arXiv:2011.06244[cs](Oct.2021). http://arxiv.org/abs/2011.06244arXiv: [39] BoShen,WeiZhang,ChristianKästner,HaiyanZhao,ZhaoWei,GuangtaiLiang, 2011.06244. andZhiJin.2021.SmartCommit:agraph-basedinteractiveassistantforactivity- orientedcommits.InProceedingsofthe29thACMJointMeetingonEuropean SoftwareEngineeringConferenceandSymposiumonthe FoundationsofSoftwareArXiv,December2023,Ames,USA AshwinKallingalJoshy,MirzaSanjidaAlam,ShailaSharmin,QiLi,andWeiLe Engineering.ACM,AthensGreece,379–390. https://doi.org/10.1145/3468264. [47] SatoshiYamashita,ShinpeiHayashi,andMotoshiSaeki.2020. ChangeBead- 3468551 sThreader:AnInteractiveEnvironmentforTailoringAutomaticallyUntan- [40] VictorSobreira,ThomasDurieux,FernandaMadeiral,MartinMonperrus,and gledChanges.In2020IEEE27thInternationalConferenceonSoftwareAnaly- MarceloA.Maia.2018. DissectionofaBugDataset:Anatomyof395Patches sis,EvolutionandReengineering(SANER).IEEE,London,ON,Canada,657–661. fromDefects4J.In2018IEEE25thInternationalConferenceonSoftwareAnalysis, https://doi.org/10.1109/SANER48275.2020.9054861 EvolutionandReengineering(SANER).130–140. https://doi.org/10.1109/SANER. [48] DehengYang,YanLei,XiaoguangMao,DavidLo,HuanXie,andMengYan.2021. 2018.8330203arXiv:1801.06393[cs]. IstheGroundTruthReallyAccurate?DatasetPurificationforAutomatedPro- [41] SahilSuneja,YunhuiZheng,YufanZhuang,JimLaredo,andAlessandroMorari. gramRepair.In2021IEEEInternationalConferenceonSoftwareAnalysis,Evolution 2020. Learningtomapsourcecodetosoftwarevulnerabilityusingcode-as-a- andReengineering(SANER).96–107. https://doi.org/10.1109/SANER50967.2021. graph.(2020). 00018ISSN:1534-5351. [42] MinWang,ZeqiLin,YanzhenZou,andBingXie.2019. CoRA:Decomposing [49] ZheYu,ChristopherTheisen,LaurieWilliams,andTimMenzies.2019.Improving andDescribingTangledCodeChangesforReviewer.In201934thIEEE/ACM vulnerabilityinspectionefficiencyusingactivelearning.IEEETransactionson InternationalConferenceonAutomatedSoftwareEngineering(ASE).1050–1061. SoftwareEngineering47,11(2019),2401–2420. https://doi.org/10.1109/ASE.2019.00101ISSN:2643-1572. [50] ZheYu,ChristopherTheisen,LaurieWilliams,andTimMenzies.2021.Improving [43] MingWen,JunjieChen,RongxinWu,DanHao,andShing-ChiCheung.2018. VulnerabilityInspectionEfficiencyUsingActiveLearning.IEEETransactionson Context-awarepatchgenerationforbetterautomatedprogramrepair.InProceed- SoftwareEngineering47,11(Nov.2021),2401–2420. https://doi.org/10.1109/TSE. |
ingsofthe40thInternationalConferenceonSoftwareEngineering.ACM,Gothen- 2019.2949275arXiv:1803.06545[cs]. burgSweden,1–11. https://doi.org/10.1145/3180155.3180233 [51] YunhuiZheng,SaurabhPujar,BurnLewis,LucaBuratti,EdwardEpstein,Bo [44] XiaoxueWu,WeiZheng,XinXia,andDavidLo.2021.Dataqualitymatters:A Yang,JimLaredo,AlessandroMorari,andZhongSu.2021. D2A:ADataset casestudyondatalabelcorrectnessforsecuritybugreportprediction. IEEE BuiltforAI-BasedVulnerabilityDetectionMethodsUsingDifferentialAnaly- TransactionsonSoftwareEngineering48,7(2021),2541–2556. sis. arXiv:2102.07995[cs](Feb.2021). http://arxiv.org/abs/2102.07995 arXiv: [45] CongyingXu,BihuanChen,ChenhaoLu,KaifengHuang,XinPeng,andYangLiu. 2102.07995. 2022.Trackingpatchesforopensourcesoftwarevulnerabilities.InProceedings [52] DengyongZhou,OlivierBousquet,ThomasNLal,JasonWeston,andBernhard ofthe30thACMJointEuropeanSoftwareEngineeringConferenceandSymposium Schölkopf.[n.d.].LearningwithLocalandGlobalConsistency.([n.d.]). ontheFoundationsofSoftwareEngineering(ESEC/FSE2022).Associationfor [53] YaqinZhou,ShangqingLiu,JingkaiSiow,XiaoningDu,andYangLiu.2019.De- ComputingMachinery,NewYork,NY,USA,860–871. https://doi.org/10.1145/ vign:effectivevulnerabilityidentificationbylearningcomprehensiveprogram 3540250.3549125 semanticsviagraphneuralnetworks.InProceedingsofthe33rdInternationalCon- [46] FabianYamaguchi,NicoGolde,DanielArp,andKonradRieck.2014.Modelingand ferenceonNeuralInformationProcessingSystems.Number915.CurranAssociates DiscoveringVulnerabilitieswithCodePropertyGraphs.In2014IEEESymposium Inc.,RedHook,NY,USA,10197–10207. onSecurityandPrivacy.IEEE,SanJose,CA,590–604. https://doi.org/10.1109/SP. [54] XiaojinZhuandZoubinGhahramani.2002.Learningfromlabeledandunlabeled 2014.44 datawithlabelpropagation. |
2312.02003 A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly YifanYao, JinhaoDuan, KaidiXu, YuanfangCai, ZhiboSun and YueZhang aDrexelUniversity,3675MarketSt.,Philadelphia,PA,19104,USA ARTICLE INFO ABSTRACT Keywords: LargeLanguageModels(LLMs),suchasChatGPTandBard,haverevolutionizednaturallanguage LargeLanguageModel(LLM),LLM understandingandgeneration.Theypossessdeeplanguagecomprehension,human-liketextgenera- Security,LLMPrivacy,ChatGPT,LLM tioncapabilities,contextualawareness,androbustproblem-solvingskills,makingtheminvaluablein Attacks,LLMVulnerabilities variousdomains(e.g.,searchengines,customersupport,translation).Inthemeantime,LLMshave alsogainedtractioninthesecuritycommunity,revealingsecurityvulnerabilitiesandshowcasingtheir potentialinsecurity-relatedtasks.ThispaperexplorestheintersectionofLLMswithsecurityandpri- vacy.Specifically,weinvestigatehowLLMspositivelyimpactsecurityandprivacy,potentialrisksand threatsassociatedwiththeiruse,andinherentvulnerabilitieswithinLLMs.Throughacomprehensive literaturereview,thepapercategorizesthepapersinto“TheGood”(beneficialLLMapplications), “TheBad”(offensiveapplications),and“TheUgly”(vulnerabilitiesofLLMsandtheirdefenses). Wehavesomeinterestingfindings.Forexample,LLMshaveproventoenhancecodesecurity(code vulnerabilitydetection)anddataprivacy(dataconfidentialityprotection),outperformingtraditional methods.However,theycanalsobeharnessedforvariousattacks(particularlyuser-levelattacks)due totheirhuman-likereasoningabilities.Wehaveidentifiedareasthatrequirefurtherresearchefforts. Forexample,Researchonmodelandparameterextractionattacksislimitedandoftentheoretical, hinderedbyLLMparameterscaleandconfidentiality.Safeinstructiontuning,arecentdevelopment, requiresmoreexploration.WehopethatourworkcanshedlightontheLLMs’potentialtobothbolster andjeopardizecybersecurity. LLMs are gaining popularity within the security com- 1. Introduction munity.AsofFebruary2023,aresearchstudyreportedthat Alargelanguagemodelisthelanguagemodelwithmassive GPT-3uncovered213securityvulnerabilities(only4turned parameters that undergoes pretraining tasks (e.g., masked outtobefalsepositives)[141]inacoderepository.Incon- language modeling and autoregressive prediction) to un- trast,oneoftheleadingcommercialtoolsinthemarketde- derstand and process human language, by modeling the tectedonly99vulnerabilities.Morerecently,severalLLM- contextualized text semantics and probabilities from large poweredsecuritypapershaveemergedinprestigiousconfer- amounts of text data. A capable LLM should have four ences.Forinstance,inIEEES&P2023,HammondPearceet key features [323]: (i) profound comprehension of natural al.[211]conductedacomprehensiveinvestigationemploy- language context; (ii) ability to generate human-like text; ingvariouscommerciallyavailableLLMs,evaluatingthem (iii)contextualawareness,especiallyinknowledge-intensive across synthetic, hand-crafted, and real-world security bug domains; (iv) strong instruction-following ability which is scenarios.Theresultsarepromising,asLLMssuccessfully usefulforproblem-solvinganddecision-making. addressedallsyntheticandhand-craftedscenarios.InNDSS There are a number of LLMs that were developed and 2024, a tool named Fuzz4All [313] showcased the use of released in 2023, gaining significant popularity. Notable LLMs for input generation and mutation, accompanied by examples include OpenAI’s ChatGPT [203], Meta AI’s aninnovativeautopromptingtechniqueandfuzzingloop. LLaMA [4], and Databricks’ Dolly 2.0 [50]. For instance, Theseremarkableinitialattemptspromptustodelveinto ChatGPTaloneboastsauserbaseofover180million[69]. threecrucialsecurity-relatedresearchquestions: LLMsnowofferawiderangeofversatileapplicationsacross variousdomains.Specifically,theynotonlyprovidetechni- • RQ1. How do LLMs make a positive impact on se- calsupporttodomainsdirectlyrelatedtolanguageprocess- curityandprivacyacrossdiversedomains,andwhat ing(e.g.,searchengines[352,13],customersupport[259], advantagesdotheyoffertothesecuritycommunity? translation [327, 138]) but also find utility in more general scenarios such as code generation [118], healthcare [274], • RQ2.Whatpotentialrisksandthreatsemergefromthe finance [310], and education [186]. This showcases their utilizationofLLMswithintherealmofcybersecurity? adaptability and potential to streamline language-related • RQ3. What vulnerabilities and weaknesses within tasksacrossdiverseindustriesandcontexts. LLMs,andhowtodefendagainstthosethreats? yy566@drexel.edu(Y.Yao);jd3734@drexel.edu(J.Duan); k (Zx4 .6 S@ ud nre );xe yl z. 8e 9d 9u @d( rK e. xeX lu .) e; duyf (c Ya .i@ Zc hs a. nd gre )xel.edu(Y.Cai);zs384@drexel.edu Findings.Tocomprehensivelyaddressthesequestions,we ORCID(s): conducted a meticulous literature review and assembled a collection of 281 papers pertaining to the intersection Yifan Yao et al.: PreprintsubmittedtoElsevier Page1of24 4202 raM 02 ]RC.sc[ 3v30020.2132:viXraA Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly |
of LLMs with security and privacy. We categorized these andethicalalignmentofLLMs.Wefoundthatmodel papersintothreedistinctgroups:thosehighlightingsecurity- extraction, parameter extraction, and similar attacks beneficial applications (i.e., the good), those exploring ap- have received limited research attention, remaining plications that could potentially exert adverse impacts on primarily theoretical with minimal practical explo- security (i.e., the bad), and those focusing on the discus- ration.ThevastscaleofLLMparametersmakestradi- sionofsecurityvulnerabilities(alongsidepotentialdefense tionalapproacheslesseffective,andtheconfidential- mechanisms) within LLMs (i.e., the ugly). To be more ityofpowerfulLLMsfurthershieldsthemfromcon- specific: ventional attacks. Strict censorship of LLM outputs challenges even black-box ML attacks. Meanwhile, • TheGood(§4):LLMshaveapredominantlypositive researchontheimpactofmodelarchitectureonLLM impactonthesecuritycommunity,asindicatedbythe safety is scarce, partly due to high computational most significant number of papers dedicated to en- costs. Safe instruction tuning, a recent development, hancingsecurity.Specifically,LLMshavemadecon- requiresfurtherinvestigation. tributionstobothcodesecurityanddatasecurityand privacy. In the context of code security, LLMs have been used for the whole life cycle of the code (e.g., Contributions.Ourworkmakesadualcontribution.First, we are pioneers summarizing the role of LLMs in security secure coding, test case generation, vulnerable code and privacy. We delve deeply into the positive impacts of detection,maliciouscodedetection,andcodefixing). LLMs on security, their potential risks and threats, vulner- In data security and privacy, LLMs have been ap- abilities in LLMs, and the corresponding defense mecha- pliedtoensuredataintegrity,dataconfidentiality,data nisms. Other surveys may focus on one or two specific as- reliability, and data traceability. Meanwhile, Com- pects,suchasbeneficialapplications,offensiveapplications, pared to state-of-the-art methods, most researchers vulnerabilities, or defenses. To the best of our knowledge, found LLM-based methods to outperform traditional our survey is the first to cover all three key aspects related approaches. to security and privacy for the first time. Second, we have • TheBad(§5):LLMsalsohaveoffensiveapplications made several interesting discoveries. For instance, our re- against security and privacy. We categorized the at- search reveals that LLMs contribute more positively than tacks into five groups: hardware-level attacks (e.g., negatively to security and privacy. Moreover, we observe side-channel attacks), OS-level attacks (e.g., analyz- that most researchers concur that LLMs outperform state- ing information from operating systems), software- of-the-art methods when employed for securing code or level attacks (e.g., creating malware), network-level data.Concurrently,itbecomesevidentthatuser-levelattacks attacks(e.g.,networkphishing),anduser-levelattacks are the most prevalent, largely owing to the human-like (e.g., misinformation, social engineering, scientific reasoningabilitiesexhibitedbyLLMs. misconduct). User-level attacks, with 32 papers, are the most prevalent due to LLMs’ human-like rea- Roadmap.Therestofthepaperisorganizedasfollows.We soning abilities. Those attacks threaten both security begin with a brief introduction to LLM in §2. §3 presents (e.g.,malwareattacks)andprivacy(e.g.,socialengi- the overview of our work. In §4, we explore the beneficial neering). Nowadays, LLMs lack direct access to OS impactsofemployingLLMs.§5discussesthenegativeim- andhardware-levelfunctions.Thepotentialthreatsof pactsonsecurityandprivacy.In§6,wediscusstheprevalent LLMscouldescalateiftheygainsuchaccess. threats,vulnerabilitiesassociatedwithLLMsaswellasthe countermeasures to mitigate these risks. §7 discuss LLMs • The Ugly (§6): We explore the vulnerabilities and in other security related topics and possible directions. We defenses in LLMs, categorizing vulnerabilities into concludethepaperin§9. two main groups: AI Model Inherent Vulnerabili- ties (e.g., data poisoning, backdoor attacks, training 2. Background data extraction) and Non-AI Model Inherent Vul- nerabilities (e.g., remote code execution, prompt in- 2.1. LargeLanguageModels(LLMs) jection, side channels). These attacks pose a dual LargeLanguageModels(LLMs)[347]representsanevolu- threat, encompassing both security concerns (e.g., tionfromlanguagemodels.Initially,languagemodelswere remote code execution attacks) and privacy issues statistical in nature and laid the groundwork for compu- (e.g.,dataextraction).DefensesforLLMsaredivided tational linguistics. The advent of transformers has signif- intostrategiesplacedinthearchitecture,andapplied icantly increased their scale. This expansion, along with during the training and inference phases. Training the use of extensive training corpora and advanced pre- phase defenses involve corpora cleaning, and opti- trainingtechniquesispivotalinareassuchasAIforscience, mization methods, while inference phase defenses logicalreasoning,andembodiedAI.Thesemodelsundergo include instruction pre-processing, malicious detec- extensive training on vast datasets to comprehend and pro- tion, and generation post-processing. These defenses duce text that closely mimics human language. Typically, collectively aim to enhance the security, robustness, |
LLMsareendowedwithhundredsofbillions,orevenmore, Yifan Yao et al.: PreprintsubmittedtoElsevier Page 2 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly parameters, honed through the processing of massive tex- Table 1 tualdata.Theyhavespearheadedsubstantialadvancements Comparison of Popular LLMs in the realm of Natural Language Processing (NLP) [82] Model Date Provider Open-Source Params Tunability and find applications in a multitude of fields (e.g., risk gpt-4[64] 2023.03 OpenAI ✗ 1.7T ✗ assessment [202], programming [26], vulnerability detec- gpt-3.5-turbo 2021.09 OpenAI ✗ 175B ✗ gpt-3[24] 2020.06 OpenAI ✗ 175B ✗ tion [118], medical text analysis [274], and search engine cohere-medium[170] 2022.07 Cohere ✗ 6B ✓ optimization[13]). cohere-large[170] 2022.07 Cohere ✗ 13B ✓ cohere-xlarge[170] 2022.06 Cohere ✗ 52B ✓ Based on Yang’s study [323], an LLM should have BERT[61] 2018.08 Google ✓ 340M ✓ at least four key features. First, an LLM should demon- T5[225] 2019 Google ✓ 11B ✓ PaLM[198] 2022.04 Google ✓ 540B ✓ strate a deep understanding and interpretation of natural LLaMA[4] 2023.02 MetaAI ✓ 65B ✓ languagetext,enablingittoextractinformationandperform CTRL[229] 2019 Salesforce ✓ 1.6B ✓ Dolly2.0[50] 2023.04 Databricks ✓ 12B ✓ various language-related tasks (e.g., translation). Second, it should have the capacity to generate human-like text (e.g., completing sentences, composing paragraphs, and pinpointing gaps in our collective knowledge. While it is evenwritingarticles)whenprompted.Third,LLMsshould true that LLMs wield multifaceted applications extending exhibit contextual awareness by considering factors such beyond security considerations (e.g., social and financial as domain expertise, a quality referred to as “Knowledge- impacts), our primary focus remains steadfastly on matters intensive”. Fourth, these models should excel in problem- of security and privacy. Moreover, it is noteworthy that solvinganddecision-making,leveraginginformationwithin GPT models have attained significant prominence within text passages to make them invaluable for tasks such as this landscape. Consequently, when delving into specific informationretrievalandquestion-answeringsystems. content and examples, we aim to employ GPT models as illustrativebenchmarks. 2.2. ComparisonofPopularLLMs As shown in Table 1 [276, 235], there is a diversity of 3.2. TheResearchQuestions providers for language models, including industry leaders LLMs have carried profound implications across diverse such as OpenAI, Google, Meta AI, and emerging players domains.However,itisessentialtorecognizethat,aswith suchasAnthropicandCohere.Thereleasedatesspanfrom any powerful technology, LLMs bear a significant respon- 2018to2023,showcasingtherapiddevelopmentandevolu- sibility. Our paper delves deeply into the multifaceted role tionoflanguagemodelsinrecentyears.Newermodelssuch of LLMs in the context of security and privacy. We intend as“gpt-4”haveemergedin2023,highlightingtheongoing to scrutinize their positive contributions to these domains, innovation in this field. While most of the models are not explorethepotentialthreatstheymayengender,anduncover open-source,itisinterestingtonotethatmodelslikeBERT, thevulnerabilitiesthatcouldcompromisetheirintegrity.To T5,PaLM,LLaMA,andCTRLareopen-source,whichcan accomplishthis,ourstudywillconductathoroughliterature facilitate community-driven development and applications. reviewcenteredaroundthreepivotalresearchquestions: Larger models tend to have more parameters, potentially indicating increased capabilities but also greater computa- • TheGood(§4):HowdoLLMspositivelycontribute tional demands. For example, “PaLM” stands out with a tosecurityandprivacyinvariousdomains,andwhat massive 540 billion parameters. It can also be observed are the potential benefits they bring to the security that LLMs tend to have more parameters, potentially indi- community? catingincreasedcapabilitiesbutalsogreatercomputational • TheBad(§5):Whatarethepotentialrisksandthreats demands. The “Tunability” column suggests whether these associatedwiththeuseofLLMsinthecontextofcy- modelscanbefine-tunedforspecifictasks.Inotherwords, bersecurity?Specifically,howcanLLMsbeusedfor it is possible to take a large, pre-trained language model malicious purposes, and what types of cyber attacks andadjustitsparametersandtrainingonasmaller,domain- canbefacilitatedoramplifiedusingLLMs? specific dataset to make it perform better on a particular task.Forinstance,withtunability,onecanfine-tuneBERT • TheUgly(§6):Whatvulnerabilitiesandweaknesses onadatasetofmoviereviewstomakeithighlyeffectiveat exist within LLMs, and how do these vulnerabilities sentimentanalysis. poseathreattosecurityandprivacy? Motivatedbythesequestions,weconductedasearchon 3. Overview GoogleScholarandcompiledpapersrelatedtosecurityand 3.1. Scope privacy involving LLMs. As shown in Figure 1, we gath- ered a total of 83 “good” papers that highlight the positive Ourpaperendeavorstoconductathoroughliteraturereview, contributionsofLLMstosecurityandprivacy.Additionally, with the objective of collating and scrutinizing existing weidentified54“bad”papers,inwhichattackersexploited researchandstudiesabouttherealmsofsecurityandprivacy LLMs to target users, and 144 “ugly” papers, in which in the context of LLMs. The effort is geared towards both authorsdiscoveredvulnerabilitieswithinLLMs.Mostofthe establishing the current state of the art in this domain and |
Yifan Yao et al.: PreprintsubmittedtoElsevier Page 3 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly 4.1. LLMsforCodeSecurity AsshowninTable2,LLMshaveaccesstoavastrepository yaM rpA beF of code snippets and examples spanning various program- guA luJ nuJ raM Jan ceD voN tcO m lanin gg uala gn eg uu na dg ee rs sta an nd did no gm aa ni dns c. oT nh tee xy tule av le ar na ag le ysth isei cr aa pd av ba iln itc ie ed s peS peS to thoroughly examine code and code-related text. More tcO specifically, LLMs can play a pivotal role throughout the voN Good: 83 guA luJ entirecodesecuritylifecycle,includingcoding(C),testcase ceD Ugly: 144 nuJ generation(TCG),execution,andmonitoring(RE). yaM rpA SecureCoding(C).WefirstdiscusstheuseofLLMsinthe naJ Bad: 54 raM contextofsecurecodeprogramming[75](orgeneration[63, beF raM naJ beF 285,199,90]).Sandovaletal.[234]conductedauserstudy rpA yaM nuJ luJ guA peS O NovDec ( t Ti5 c h8 u el yu as r ee l vyr as O) lut p ao e tea n ds As ce I os C ds eoth d we e rxs i,e ttc a eu s nr ci bt o yy d si em tuap dsl s ei i nc sa ttat pi no rt on s gs f roo arf mL d meL v eM re sls o w, pp hea r esr n- . ct assisted by LLMs and found that participants assisted by LLMsdidnotintroducenewsecurityrisks:theAI-assisted group produced critical security bugs at a rate no greater Figure 1: An overview of our collected papers. than10%higherthanthecontrolgroup(non-assisted).Heet al.[98,99]focusedonenhancingthesecurityofcodegener- atedbyLLMs.TheyproposedanovelmethodcalledSVEN, paperswerepublishedin2023,withonly82ofthemreleased which leverages continuous prompts to control LLMs in in between 2007 and 2022. Notably, there is a consistent generating secure code. With this method, the success rate upwardtrendinthenumberofpapersreleasedeachmonth, improved from 59.1% to 92.3% when using the CodeGen with October reaching its peak, boasting the highest num- LM.Mohammedetal.introduceSALLM[254],aframework ber of papers published (38 papers in total, accounting for consisting of a new security-focused dataset, an evaluation 15.97% of all the collected papers). It is conceivable that environment,andnovelmetricsforsystematicallyassessing more security-related LLM papers will be published in the LLMs’abilitytogeneratesecurecode.Madhavetal.[197] nearfuture. evaluatethesecurityaspectsofcodegenerationprocesseson theChatGPTplatform,specificallyinthehardwaredomain. Finding I. In terms of security-related applications Theyexplorethestrategiesthatadesignercanemploytoen- (i.e., the “good” and the “bad” parts), it is evident that ableChatGPTtoprovidesecurehardwarecodegeneration. the majority of researchers are inclined towards using LLMs to bolster the security community, such as in Test Case Generating (TCG). Several papers [33, 6, 238, vulnerability detection and security test generation, 316, 156, 253, 335] discuss the utilization of LLMs for despite the presence of some vulnerabilities in LLMs generatingtestcases,withourparticularemphasisonthose at this stage. There are relatively few researchers addressingsecurityimplications.Zhangetal.[343]demon- who employ LLMs as tools for conducting attacks. stratedtheuseofChatGPT-4.0forgeneratingsecuritytests In summary, LLMs contribute more positively than to assess the impact of vulnerable library dependencies on negativelytothesecuritycommunity. softwareapplications.TheyfoundthatLLMscouldsuccess- fully generate tests that demonstrated various supply chain attacks,outperformingexistingsecuritytestgenerators.This 4. PositiveImpactsonSecurityandPrivacy approach resulted in 24 successful attacks across 55 appli- cations.Similarly,Libro[136],aframeworkthatusesLLMs Inthissection,weexplorethebeneficialimpactsofemploy- to automatically generate test cases to reproduce software ing LLMs. In the context of code or data privacy, we have securitybugs. opted to use the term “privacy” to characterize scenarios In the realm of security, fuzzing stands [325, 109, inwhichLLMsareutilizedtoensuretheconfidentialityof 337, 345, 272] out as a widely employed technique for either code or data. However, given that we did not come generatingtestcases.Dengetal.introducedTitanFuzz[56], acrossanypapersspecificallyaddressingcodeprivacy,our an approach that harnesses LLMs to generate input pro- discussion focuses on code security (§4.1) as well as both gramsforfuzzingDeepLearning(DL)libraries.TitanFuzz datasecurityandprivacy(§4.2). demonstrates impressive code coverage (30.38%/50.84%) and detects previously unknown bugs (41 out of 65) in popular DL libraries. More recently, Deng et al. [58, 57] refined LLM-based fuzzing (named FuzzGPT), aiming to generate unusual programs for DL library fuzzing. While Yifan Yao et al.: PreprintsubmittedtoElsevier Page 4 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly Table 2 LLMs for Code Security and Privacy LifeCycle TestCase RunningandExecuting(RE) Whencomparedto Work Coding(C) LLM(s) Domain Generating SOTAways? Bug Malicious Vulnerability (TCG) Fixing DetectingCodeDetecting Detecting Sandovaletal.[234] ○ ○␣ ○␣ ○␣ ○␣ ○␣ Codex - Negligiblerisks SVEN[98] ○ ○␣ ○␣ ○␣ ○␣ ○␣ CodeGen - Morefaster/secure SALLM[254] ○ ○␣ ○␣ ○␣ ○␣ ○␣ ChatGPTetc. - - Madhavetal.[197] ○ ○␣ ○␣ ○␣ ○␣ ○␣ ChatGPT Hardware - Zhangetal.[343] ○␣ ○ ○␣ ○␣ ○ ○␣ ChatGPT Supplychain Morevalidcases |
Libro[136] ○␣ ○ ○␣ ○␣ ○ ○␣ LLaMA - HigherFP/FN TitanFuzz[56] ○␣ ○ ○ ○␣ ○ ○␣ Codex DLlibs Highercoverage FuzzGPT[57] ○␣ ○ ○ ○␣ ○ ○␣ ChatGPT DLlibs Highercoverage Fuzz4All[313] ○␣ ○ ○ ○␣ ○ ○␣ ChatGPT Languages Highercoverage WhiteFox[321] ○␣ ○ ○ ○␣ ○ ○␣ GPT4 Compiler High-qualitytests Zhangetal.[337] ○␣ ○ ○ ○␣ ○ ○␣ ChatGPT API - CHATAFL[190] ○␣ ○ ○ ○␣ ○ ○␣ ChatGPT Protocol Highercoverage Henrik[105] ○␣ ○␣ ○␣ ○ ○␣ ○␣ ChatGPT - HigerFP/FN Apiiro[74] ○␣ ○␣ ○␣ ○ ○␣ ○␣ N/A - - Noever[201] ○␣ ○␣ ○␣ ○␣ ○ ○ ChatGPT - 4Xfaster Bakhshandehetal.[15] ○␣ ○␣ ○␣ ○␣ ○ ○␣ ChatGPT - LowFP/FN Moumitaetal.[218] ○␣ ○␣ ○␣ ○␣ ○ ○␣ ChatGPT - HigherFP/FN Cheshkovetal.[41] ○␣ ○␣ ○␣ ○␣ ○ ○␣ ChatGPT - Nobetter LATTE[174] ○␣ ○␣ ○␣ ○␣ ○ ○␣ GPT - Costeffective DefectHunter[296] ○␣ ○␣ ○␣ ○␣ ○ ○␣ Codex - - Chenetal.[37] ○␣ ○␣ ○␣ ○␣ ○ ○␣ ChatGPT Blockchain - Huetal.[110] ○␣ ○␣ ○␣ ○␣ ○ ○␣ ChatGPT Blockchain - KARTAL[233] ○␣ ○␣ ○␣ ○␣ ○ ○␣ ChatGPT Webapps Lessmanual VulLibGen[38] ○␣ ○␣ ○␣ ○␣ ○ ○␣ LLaMa Libs Higheraccuracy/speed Ahmadetal.[3] ○␣ ○␣ ○␣ ○␣ ○ ○ Codex Hardware Fixmorebugs InferFix[125] ○␣ ○␣ ○ ○␣ ○ ○ Codex - CIPipeline Pearceetal.[211] ○␣ ○␣ ○ ○␣ ○ ○␣ Codexetc. - Zero-shot Fuetal.[83] ○␣ ○␣ ○ ○␣ ○ ○ ChatGPT APR Higheraccuracy Sobaniaetal.[257] ○␣ ○␣ ○␣ ○␣ ○␣ ○ ChatGPTetc. APR Higheraccuracy Jiangetal.[123] ○␣ ○␣ ○␣ ○␣ ○␣ ○ ChatGPT APR Higheraccuracy TitanFuzz leverages LLMs’ ability to generate ordinary exposing a noticeable performance gap when compared to code, FuzzGPT addresses the need for edge-case testing conventional static analysis tools. This disparity primarily by priming LLMs with historical bug-triggering program. arises from the relatively higher occurrence of false alerts Fuzz4All[313]leveragesLLMsasinputgeneratorsandmu- generated by LLMs. Similarly, Cheshkov et al. [41] point tationengines,creatingdiverseandrealisticinputsforvari- out that the ChatGPT model performed no better than a ouslanguages(e.g.,C,C++),improvingthepreviousstate- dummy classifier for both binary and multi-label classifi- of-the-art coverage by 36.8% on average. WhiteFox [321], cation tasks in code vulnerability detection. Wang et al. a novel white-box compiler fuzzer that utilizes LLMs to introduceDefectHunter [296],anovelmodelthatemploys testcompileroptimizations,outperformsexistingfuzzers(it LLM-driven techniques for code vulnerability detection. generateshigh-qualitytestsforintricateoptimizations,sur- They demonstrate the potential of combining LLMs with passing state-of-the-art fuzzers by up to 80 optimizations). advancedmechanisms(e.g.,Conformer)toidentifysoftware Zhangetal.[337]explorethegenerationoffuzzdriversfor vulnerabilitiesmoreeffectively.Thiscombinationshowsan library API fuzzing using LLMs. Results show that LLM- improvement in effectiveness, approximately from 14.64% basedgenerationispractical,with64%ofquestionssolved to 20.62%, compared with Pongo-70B. LATTE [174] is a entirelyautomaticallyandupto91%withmanualvalidation. novelstaticbinarytaintanalysismethodpoweredbyLLMs. CHATAFL[190]isanLLM-guidedprotocolfuzzerthatcon- LATTE surpasses existing state-of-the-art techniques (e.g., structs grammars for message types and mutates messages Emtaint, Arbiter, and Karonte), demonstrating remarkable or predicts the next messages based on LLM interactions, effectivenessinvulnerabilitydetection(37newbugsinreal- achievingbetterstateandcodecoveragecomparedtostate- worldfirmware)withlowercost. of-the-artfuzzers(e.g.,AFLNET[217],NSFUZZ[222]). Efforts in leveraging LLMs for vulnerability detection extend to specialized domains (e.g.,blockchain [110, 37], Vulnerable Code Detecting (RE). Noever [201] explores kernel [104] mobile [303]). For instance, Chen et al. [37] the capability of LLMs, particularly OpenAI’s GPT-4, in and Hu et al. [110] focus on the application of LLMs detecting software vulnerabilities. This paper shows that in identifying vulnerabilities within blockchain smart con- GPT-4 identified approximately four times the number of tracts. Sakaoglu’s study introduces KARTAL [233], a pio- vulnerabilitiescomparedtotraditionalstaticcodeanalyzers neeringapproachthatharnessesLLMsforwebapplication (e.g.,SnykandFortify).Parallelconclusionshavealsobeen vulnerability detection. This method achieves an accuracy drawn in other efforts [141, 15]. However, Moumita et al.[218]appliedLLMsforsoftwarevulnerabilitydetection, Yifan Yao et al.: PreprintsubmittedtoElsevier Page 5 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly observed thatLLMs can repairinsecure code ina range of Table 3 contextsevenwithoutbeingexplicitlytrainedonvulnerabil- LLMs for Data Security and Privacy ityrepairtasks. Prop. Comparedto ChatGPTisnotedforitsabilityincodebugdetectionand Work Model Domain SOTAways? correction.Fuetal.[83]assessedChatGPTinvulnerability- I CRT relatedtaskslikepredictingandclassifyingvulnerabilities, Fang[294] ○○␣○○␣ ChatGPT Ransomware - severity estimation, and analyzing over 190,000 C/C++ Liuetal.[187] ○○␣○○␣ ChatGPT Ransomware - Amineetal.[73] ○○○○␣ ChatGPT Semantic Alignedw/SOTA functions. They found that ChatGPT’s performance was HuntGPT[8] ○○○○␣ ChatGPT Network Moreeffective |
behind other LLMs specialized in vulnerability detection. Chrisetal.[71] ○○○○␣ ChatGPT Log Lessmanual AnomalyGPT[91] ○○○○␣ ChatGPT Video Lessmanual However, Sobania et al. [257] found ChatGPT’s bug fix- LogGPT[221] ○○○○␣ ChatGPT Log Lessmanual ing performance competitive with standard program repair Arpitaetal.[286] ○␣○○␣○␣ BERTetc. - - Takashietal.[142]○␣○␣○○␣ ChatGPT Phishing Highprecision methods, as demonstrated by its ability to fix 31 out of Fredriketal.[102] ○␣○␣○○␣ChatGPTetc Phishing Effective IPSDM[119] ○␣○␣○○␣ BERT Phishing - 40 bugs. Xia et al. [315] presented ChatRepair, leveraging Kwonetal.[149] ○␣○○␣○␣ ChatGPT - Non-expfriendly pre-trainedlanguagemodels(PLMs)forgeneratingpatches Scanlonetal.[237]○␣○␣○␣○ ChatGPT Forensic Moreeffective without dependency on bug-fixing datasets, aiming to en- Sladićetal.[255] ○␣○␣○␣○ ChatGPT Honeypot Morerealistic WASA[297] ○␣○␣○○ - Watermark Moreeffective hance performance to generate patches without relying on REMARK[340] ○␣○␣○○ - Watermark Moreeffective bug-fixing datasets, aiming to improve ChatGPT’s code- SWEET[154] ○␣○␣○○ - Watermark Moreeffective fixing abilities using a mix of successful and failure tests. Asaresult,theyfixed162outof337bugsatacostof$0.42 of up to 87.19% and is capable of conducting 539 pre- each. dictions per second. Additionally, Chen et al. [38] make a noteworthy contribution with VulLibGen, a generative FindingII.AsshowninTable2,acomparisonwithstate- methodologyutilizingLLMstoidentifyvulnerablelibraries. of-the-artmethodsrevealsthatthemajorityofresearchers Ahmad etal. [3]shift the focusto hardware security. They (17 out of 25) have concluded that LLM-based methods investigate the use of LLMs, specifically OpenAI’s Codex, outperform traditional approaches (advantages include in automatically identifying and repairing security-related highercodecoverage,higherdetectingaccuracy,lesscost bugs in hardware designs. PentestGPT [55], an automated etc.). Only four papers argue that LLM-based methods penetrationtestingtool,usesthedomainknowledgeinherent do not surpass the state-of-the-art appoarches. The most in LLMs to address individual sub-tasks of penetration frequently discussed issue with LLM-based methods is testing,improvingtaskcompletionratessignificantly. their tendency to produce both high false negatives and falsepositiveswhendetectingvulnerabilitiesorbugs. Malicious Code Detecting (RE). Using LLM to detect malware is a promising application. This approach lever- 4.2. LLMsforDataSecurityandPrivacy ages the natural language processing capabilities and con- As demonstrated in Table 3, LLMs make valuable contri- textual understanding of LLMs to identify malicious soft- butions to the realm of data security, offering multifaceted ware. In experiments with GPT-3.5 conducted by Henrik approaches to safeguarding sensitive information. We have Plate[105],itwasfoundthatLLM-basedmalwaredetection organizedtheresearchpapersintodistinctcategoriesbased can complement human reviews but not replace them. Out onthespecificfacetsofdataprotectionthatLLMsenhance. of 1800 binary classifications performed, there were both These facets encompass critical aspects such as data in- false-positivesandfalse-negatives.Theuseofsimpletricks tegrity (I), which ensures that data remains uncorrupted could also deceive the LLM’s assessments. More recently, throughoutitslifecycle;datareliability(R),whichensures thereareafewattemptshavebeenmadeinthisdirection.For theaccuracyofdata;dataconfidentiality(C),whichfocuses example,Apiiro[74]isamaliciouscodeanalysistoolusing on guarding against unauthorized access and disclosure LLMs.Apiiro’sstrategyinvolvesthecreationofLLMCode of sensitive information; and data traceability (T), which Patterns(LCPs)torepresentcodeinvectorformat,makingit involvestrackingandmonitoringdataaccessandusage. easiertoidentifysimilaritiesandclusterpackagesefficiently. ItsLCPdetectorincorporatesLLMs,proprietarycodeanal- DataIntegrity(I).DataIntegrityensuresthatdataremains ysis,probabilisticsampling,LCPindexing,anddimension- unchangedanduncorruptedthroughoutitslifecycle.Asof alityreductiontoidentifypotentiallymaliciouscode. now,thereareafewworksthatdiscusshowtouseLLMsto protectdataintegrity.Forexample,ransomwareusuallyen- Vulnerable/BuggyCodeFixing(RE).Severalpapers[123, cryptsavictim’sdata,makingthedatainaccessiblewithout 211, 314] has focused on evaluate the performance of adecryptionkeythatisheldbytheattacker,whichbreaksthe LLMs trained on code in the task of program repair. Jin data integrity. Wang Fang’s research [294] examines using etal.[125]proposedInferFix,atransformer-basedprogram LLMsforransomwarecybersecuritystrategies,mostlytheo- repairframeworkthatworksintandemwiththecombination reticallyproposingreal-timeanalysis,automatedpolicygen- ofcutting-edgestaticanalyzerwithtransformer-basedmodel eration, predictive analytics, and knowledge transfer. How- to address and fix critical security and performance issues ever, these strategies lack empirical validation. Similarly, with accuracy between 65% to 75%. Pearce et al. [211] Yifan Yao et al.: PreprintsubmittedtoElsevier Page 6 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly Liuetal.[187]exploredthepotentialofLLMsforcreating dependedupontobeaccurate,andfreefromerrorsorbias. |
cybersecurity policies aimed at mitigating ransomware at- Takashi et al. [142] proposed to use ChatGPT for the de- tackswithdataexfiltration.TheycomparedGPT-generated tectionofsitesthatcontainphishingcontent.Experimental Governance,RiskandCompliance(GRC)policiestothose resultsusingGPT-4showpromisingperformance,withhigh fromestablishedsecurityvendorsandgovernmentcyberse- precision and recall rates. Fredrik et al. [102] assessed the curityagencies.Theyrecommendedthatcompaniesshould abilityoffourlargelanguagemodels(GPT,Claude,PaLM, incorporateGPTintotheirGRCpolicydevelopment. andLLaMA)todetectmaliciousintentinphishingemails, Anomaly detection is a key defense mechanism that andfoundthattheyweregenerallyeffective,evensurpassing identifiesunusualbehavior.Whileitdoesnotdirectlyprotect human detection, although occasionally slightly less accu- dataintegrity,itidentifiesabnormalorsuspiciousbehavior rate. IPSDM [119] is a model fine-tuned from the BERT that can potentially compromise data integrity (as well as family to identify phishing and spam emails effectively. dataconfidentialityanddatareliability).Amineetal.[73]in- IPSDM demonstrates superior performance in classifying troducedanLLM-basedmonitoringframeworkfordetecting emails,bothinunbalancedandbalanceddatasets. semantic anomalies in vision-based policies and applied it tobothfinitestatemachinepoliciesforautonomousdriving Data Traceability (T). Data traceability is the capability to track and document the origin, movement, and history andlearnedpoliciesforobjectmanipulation.Experimental of data within a single system or across multiple systems. resultsdemonstratethatitcaneffectivelyidentifysemantic This concept is particularly vital in fields such as incident anomalies, aligning with human reasoning. HuntGPT [8] managementandforensicinvestigations,whereunderstand- is an LLM-based intrusion detection system for network ing the journey and transformations of events to resolving anomalydetection.Theresultsdemonstrateitseffectiveness issuesandconductingthoroughanalyses.LLMshavegained in improving user understanding and interaction. Chris et tractioninforensicinvestigations,offeringnovelapproaches al.[71]andLogGPT[221]exploreChatGPT’spotentialfor foranalyzingdigitalevidence.Scanlonetal.[237]explored log-basedanomalydetectioninparallelfilesystems.Results how ChatGPT assists in analyzing OS artifacts like logs, show that it addresses the issues in traditional manual files,cloudinteractions,executablebinaries,andinexamin- labelingandinterpretability.AnomalyGPT[91]usesLarge ingmemorydumpstodetectsuspiciousactivitiesorattack Vision-Language Models to detect industrial anomalies. It patterns.Additionally,Sladićetal.[255]proposedthatgen- eliminatesmanualthresholdsettingandsupportsmulti-turn erativemodelslikeChatGPTcanbeusedtocreaterealistic dialogues. honeypotstodeceivehumanattackers. DataConfidentiality(C).Dataconfidentialityreferstothe Watermarking involves embedding a distinctive, typ- practice of protecting sensitive information from unautho- ically imperceptible or hard-to-identify signal within the rized access or disclosure, a topic extensively discussed in outputs of a model. Wang et al. [297] discusses concerns LLMprivacydiscussions[214,242,286,1].However,most regardingtheintellectualpropertyoftrainingdataforLLMs of these studies concentrate on enhancing LLMs through and proposed WASA framework to learn the mapping be- state-of-the-art Privacy Enhancing Techniques (e.g., zero- tweenthetextsofdifferentdataproviders.Zhangetal.[340] knowledgeproofs[224],differentialprivacy(e.g.,[242,184, developedREMARK-LLMthatfocusedonmonitortheuti- 166],andfederatedlearning[145,122,78]).Thereareonly lization of their content and validate their watermark re- a few attempts that utilize LLMs to enhance user privacy. trieval. This helps protect against malicious uses such as For example, Arpita et al. [286] use LLMs to preserve spamming and plagiarism. Furthermore, identifying code privacybyreplacingidentifyinginformationintextualdata produced by LLMs is vital for addressing legal and ethical with generic markers. Instead of storing sensitive user in- issues concerning code licensing, plagiarism, and malware formation, such as names, addresses, or credit card num- creation.Similarly,Lietal.[169]proposethefirstwatermark bers, the LLMs suggest substitutes for the masked tokens. techniquetoprotectlargelanguagemodel-basedcodegen- This obfuscation technique helps to protect user data from erationAPIsfromremoteimitationattacks.Leeetal.[154] being exposed to adversaries. By using LLMs to generate developed SWEET, a tool that implements watermarking substitutesformaskedtokens,themodelscanbetrainedon specificallyontokenswithinprogramminglanguages. obfuscateddatawithoutcompromisingtheprivacyandsecu- rityoftheoriginalinformation.Similarideashavealsobeen Finding III. Likewise, it is noticeable that LLMs explored in other studies [1, 262]. Hyeokdong et al. [149] excelindataprotection,surpassingcurrentsolutionsand explore implementing cryptography with ChatGPT, which requiringfewermanualinterventions.Table2andTable3 ultimately protects data confidentiality. Despite the lack revealthatChatGPTisthepredominantLLMextensively of extensive coding skills or programming knowledge, the employed in diverse security applications. Its versatility |
authorswereabletosuccessfullyimplementcryptographic and effectiveness make it a preferred choice for various algorithms through ChatGPT. This highlights the potential security-related tasks, further reinforcing its position as forindividualstoutilizeChatGPTforcryptographytasks. a go-to solution in the field of artificial intelligence and cybersecurity. DataReliability(R).Inourcontext,datareliabilityrefersto theaccuracyofdata.Itisameasureofhowwelldatacanbe Yifan Yao et al.: PreprintsubmittedtoElsevier Page 7 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly informationassociatedwiththehardware.Side-channelat- tack[260,107,189]isoneattackthatcanbepoweredbythe LLMs. Side-channel attacks typically entail the analysis of unintentionalinformationleakagefromaphysicalsystemor implementation,suchasacryptographicdeviceorsoftware, withtheaimofinferringsecretinformation(e.g.,keys). Fileless CAPTCHA Yaman[319]hasexploredtheapplicationofLLMtech- Breaking Ransomware niques to develop side-channel analysis methods. The re- Phishing Attacks searchevaluatestheeffectivenessofLLM-basedapproaches Keylogger Fingerprinting in analyzing side-channel information in two hardware- Worm Attacks Scientific misconduct related scenarios: AES side-channel analysis and deep- Web Side- Brute-force Channel Impersonation learningacceleratorside-channelanalysis.Experimentsare Attacks Spear Phishing conducted to determine the success rates of these mPrievitlehgeo ds Malware Web DoS Escalation Attacks Misinformation inbothsituations. OS RCE Software CSRF Attacks Exploitation Fraud OS OS-LevelAttacks.LLMsoperateatahigHahrdlwearveelof Sida eb -Cs hat nr na ec l - Software XSS Attacks tionandprimarilyengagewithtext-baseSi dd Ae- tiC tnah ca pkn sune tl andAottuactkpsut. SoD ftwoS are EngS io nc ei ea rl i ng SQL Injection They lack the necessary low-level systemFaualtc cess eOsSs eDonStial Side-Channel Injection Attacks Directory Credential forexecutingOS-levelattacks[114,288,128].NoneMthemeolrey ss, Race Traversal Stuffing they can be utilized for the analysis ofE Aivi ntl t aM fcoka si rd matioA ntta gck as th- Conditions Cookie Theft Tailgating ered from operating systems, thus potentMiaalilclioyus aidingRooitnkitthe BOF Attacks Firmware execution of such attacks. Andreas et al. [94] establish a Network Attacks User Attacks OS Software feedbackloopconnectingLLMtoavulnerableHavrdiwratrue almAatt-acks Attacks Attacks chinethroughSSH,allowingLLMtoanalyzethemachine’s state, identify vulnerabilities, and propose concrete attack strategies, which are then executed automatically within Cyber Attacks thevirtualmachine.Morerecently,they[95]introducedan automatedLinuxprivilege-escalationbenchmarkusinglocal Figure 2: Taxonomy of Cyberattacks. The colored boxes rep- virtual machines and an LLM-guided privilege-escalation resent attacks that have been demonstrated to be executable tool to assess various LLMs and prompt strategies against using LLMs, whereas the gray boxes indicate attacks that thebenchmark. cannot be executed with LLMs. Software-LevelAttacks.SimilartohowtheyemployLLM to target hardware and operating systems, there are also instances where LLM has been utilized to attack software (e.g., [343, 209, 212, 32]). However, the most prevalent software-level use case involves malicious developers uti- lizing LLMs to create malware. Mika et al. [17] present a proof-of-concept in which ChatGPT is utilized to dis- Hardware OS Software Network User tribute malicious software while avoiding detection. Yin et al.[207]investigatethepotentialmisuseofLLMbycreating a number of malware programs (e.g., ransomware, worm, keylogger, brute-force malware, Fileless malware). Anto- Figure 3: Prevalence of the existing attacks User-based Attacks nio Monje et al. [194] demonstrate how to trick ChatGPT Network Attacks into quickly generating ransomware. Marcus Botacin [22] Software Attacks explores different coding strategies (e.g., generating entire OS Attacks 5. NegativeImpactsonSecurityandPrivacy malware, creating malware functions) and investigates the Hardware Attacks LLM’scapacitiestorewritemalwarecode.Thefindingsre- AsshowninFigure2,wehavecategorizedtheattacksinto vealthatLLMexcelsinconstructingmalwareusingbuilding five groups based on their respective positions within the blockdescriptions.Meanwhile,LLMcangeneratemultiple systeminfrastructure.Thesecategoriesencompasshardware- versions of the same semantic content (malware variants), levelattacks,OS-levelattacks,software-levelattacks,network- withvaryingdetectionratesbyVirustotalAV(rangingfrom level attacks, and user-level attacks. Additionally, we have 4%to55%). quantified the number of associated research papers pub- lishedforeachgroup,asillustratedinFigure3. Network-Level Attacks. LLMs can also be employed for initiatingnetworkattacks.Aprevalentexampleofanetwork- Hardware-Level Attacks. Hardware attacks typically in- level attack utilizing LLM is phishing attacks [18, 43]. volvephysicalaccesstodevices.However,LLMscannotdi- Fredriketal. [102]comparedAI-generatedphishingemails rectlyaccessphysicaldevices.Instead,theycanonlyaccess Yifan Yao et al.: PreprintsubmittedtoElsevier Page 8 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly using GPT-4 with manually designed phishing emails cre- social engineers involves tactics such as psychologi- |
ated using the V-Triad, alongside a control group exposed calmanipulation,targeted phishing,andthecrisisof to generic phishing emails. The results showed that per- authenticity. sonalized phishing emails, whether generated by AI or de- signedmanually,hadhigherclick-throughratescomparedto • Scientific Misconduct. Irresponsible use of LLMs can result in issues related to scientific misconduct, genericones.Tysonetal.[151]investigatedhowmodifying stemming from their capacity to generate original, ChatGPT’s input can affect the content of the generated coherenttext.Theacademiccommunity[45,265,215, emails, making them more convincing. Julian Hazell [97] 46, 179, 72, 200, 223, 87, 139, 226], encompass- demonstratedthescalabilityofspearphishingcampaignsby ing diverse disciplines from various countries, has generatingrealisticandcost-effectivephishingmessagesfor raised concerns about the increasing difficulties in over600BritishMembersofParliamentusingChatGPT.In detecting scientific misconduct in the era of LLMs. anotherstudy,Wangetal.[295]discusshowthetraditional Concerns arise from LLMs’ ability to generate co- defensesmayfailintheeraofLLMs.CAPTCHAchallenges, herent and original content, including complete pa- involvingdistortedlettersanddigits,struggletodetectchat- pers from unreliable sources [283, 287, 232]. Re- bots relying on text and voice. However, LLMs may break searchers are also actively engaged in the effort to thechallenges,astheycanproducehigh-qualityhuman-like detect such misconduct. For example, Kavita Ku- text and mimic human behavior effectively. There is one study that utilizes LLM for deploying fingerprint attacks. mari et al. [146, 147] proposed DEMASQ, a precise Armin et al. [236] employed density-based clustering to ChatGPT-generatedcontentdetector.DEMASQcon- siders biases in text composition and evasion tech- cluster HTTP banners and create text-based fingerprints niques, achieving high accuracy across diverse do- for annotating scanning data. When these fingerprints are mainsinidentifyingChatGPT-generatedcontent. compared to an existing database, it becomes possible to identifynewIoTdevicesandserverproducts. • Fraud.Cybercriminalshavedevisedanewtoolcalled FraudGPT[76,10],whichoperateslikeChatGPTbut User-LevelAttacks.Recentdiscussionshaveprimarilyfo- facilitatescyberattacks.Itlacksthesafetycontrolsof cused on user-level attacks, as LLM demonstrates its ca- ChatGPT and is sold on the dark web and Telegram pabilitytocreateremarkablyconvincingbutultimatelyde- for $200 per month or $1,700 annually. FraudGPT ceptive content, as well as establish connections between can create fraud emails related to banks, suggesting seemingly unrelated pieces of information. This presents malicious links’ placement in the content. It can opportunities for malicious actors to engage in a range of also list frequently targeted sites or services, aiding nefariousactivities.Hereareafewexamples: hackers in planning future attacks. WormGPT [52], a cybercrime tool, offers features such as unlimited • Misinformation. Overreliance on content generated character support and chat memory retention. The byLLMswithoutoversightisraisingseriousconcerns toolwastrainedonconfidentialdatasets,withafocus regarding the safety of online content [206]. Numer- on malware-related and fraud-related data. It can ousstudieshavefocusedondetectingmisinformation guide cybercriminals in executing Business Email produced by LLMs. Several study [35, 308, 324] Compromise(BEC)attacks. reveal content generated by LLMs are harder to de- tect and may use more deceptive styles, potentially causing greater harm. Canyu Chen et al. [35] pro- FindingIV.AsillustratedinFigure3,whencomparedto otherattacks,itbecomesapparentthatuser-levelattacks poseataxonomyforLLM-generatedmisinformation arethemostprevalent,boastingasignificantcountof33 andvalidatemethods.Countermeasuresanddetection papers.Thisdominancecanbeattributedtothefactthat methods [308, 280, 40, 267, 36, 341, 19, 155, 263] LLMs have increasingly human-like reasoning abilities, have also been developed to address these emerging enablingthemtogeneratehuman-likeconversationsand issues. content (e.g., scientific misconduct, social engineering). • SocialEngineering.LLMsnotonlyhavethepotential Presently,LLMsdonotpossessthesamelevelofaccessto to generate content from training data, but they also OS-levelorhardware-levelfunctionalities.Thisobserva- offerattackersanewperspectiveforsocialengineer- tionremainsconsistentwiththeattackobservedinother ing.WorkfromStabbetal.[261]highlightsthecapa- levels as well. For instance, at the network level, LLMs bilityofwell-trainedLLMstoinferpersonalattributes can be abused to create phishing websites and bypass fromtext,suchaslocation,income,andgender.They CAPTCHAmechanisms. also reveals how these models can extract personal information from seemingly benign queries. Tong et al.[275]investigatedthecontentgeneratedbyLLMs 6. VulnerabilitiesandDefensesinLLMs mayincludeuserinformation.Moreover,PolraVictor In the following section, we embark on an in-depth ex- Falade [76] stated the exploitation by LLM-driven ploration of the prevalent threats and vulnerabilities asso- ciated with LLMs (§6.1). We will examine the specific Yifan Yao et al.: PreprintsubmittedtoElsevier Page 9 of 24Fileless CAPTCHA Breaking Ransomware Phishing Attacks Keylogger Fingerprinting Worm Attacks Scientific misconduct Web Side- Brute-force Channel Impersonation Attacks Spear Phishing Privilege Malware Web DoS |
Escalation Attacks Misinformation OS RCE Software CSRF Attacks Exploitation Fraud OS Hardware Side-Channel Software XSS Attacks Side-Channel Attacks DoS Social Attacks Software Engineering SQL Injection Fault OS DoS Side-Channel Injection Attacks Directory Credential Memory Race Traversal Stuffing Evil Maid Attacks Conditions Attacks Cookie Theft Tailgating Malicious Rootkit BOF Attacks Firmware Network Attacks User Attacks OS Software Hardware Attacks Attacks Attacks Cyber Attacks A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly risks and challenges that arise in the context of LLMs. In a i p F dnd r eit gad fo ec ui ntt t rii h e sto i een o 4 sc n .t io eo llu r usnd sti h tes rac r avmu tees e ss dai tn s e hug v er eet rlh os ee lpa as en te d id oc ntsh ot sa r ha ml il t pe e in tg big gi eee a tss wte, t ehw t ea h ne t er s tw e hesi eel rl a i asra tkc tl ashs co e ( kr §d ss 6e a a.l 2n nv ) dde . Ad Bv D ae car ks t daa r oPi oa o rl i sA Aot tnt ta ain cc gk kss CM ogo Kn nd iL L ote L L i wl vM M A e le r c ASc da p rh gp cai eht a r e isc Gtc ii et rtt y ay cu t pr ue hres erutc ee st ni eh fc e r n DA i Corpora Cleaning A6 I a O d 6n b i .. Iu s 11 i tt l r M .. ih i 1nt ei i .oV cs xe t ds a Au s g emel a Ilrn c oin I Inte nudi naor hpt hn aa esio et, : rb t en rAawi n ecl Is ntei k et M V tsi eae Vk ui ots m h s l ud naa e lt et notn l ro eaImd c n b rd aa ah iT e t ly e be ilh trv g iieb le er o ine tse r iti .ia en zd V st t eios ur ae t li h nnt cn h et e dee s rL d e a Tp bL htto iho lM rt irwe ete in aa as et tr t si ssda asl i nnv dL tu oL Nln M t owe nsr o- -. staerhT dna seitilibarenluV tnereh In A-I M BEeI An ixm at tf t se rbr ar i aebe c nru tn s dit h oc ee Ui n p I nA n AI ff nt t aet tf ia r e a rec nr ck n e k es c n s se c se s L SAOa adApn fv edtg D eD i vu m IDe re nea s- et r siD aig zo d s td re ae rax e iu u b at riI n p ic i cd li o a tai l tFe f ii n l is cfi i on c T nii acn Ma nt r etai g at -if -t o ei Ti i to ic ntn u uona h i n nn nt oi i igo n nd gn gs gniniarT M LL ni seigetartS esnefeD Exploitation Differential Privacy Thesearevulnerabilitiesandthreatsthatstemfromthevery natureandarchitectureofLLMs,consideringthatLLMsare Instruction Tuning Attacks Instruction Processing fundamentally AI models themselves. For example, attack- (Pre-process) Jailbreaking ers may manipulate the input data to generate incorrect or Instruction Manipulation undesirableoutputsfromtheLLM. Prompt Injection Instruction Purification (A1) Adversarial Attacks. Adversarial attacks in machine D Me afe ln i (cs Ii i nv oe -u p D s re oDm co e en t ss e str c )a tt iio on ns ecnerefnI M l t T a W ne e n aha n ed mer tin s o eo aei lninn yma lag ,ytl t dlr a tye f aoc of tm ke acesr uxa pt sa pn oo r li ioe opa sni ou ts ty nle ta vp it ht nuieo ec glf a no m alt er le ny r odc a dsh e c b t bcn a ile ar ei iq ri cxtv i iu kte ee ee dd s ns m oso ia on ia un v rc td et ah hw l ti ys e tnt i ar te d cmha kit l se ome scg a .d uari ee ln slis s’i c sn eiu dog bs ue em asd h to ti at ando v cte ie ki oln ns sr- t. . , staerhT dna seitilibarenluV Remo St ie d eC o Cd he a nE nxe ec lution GCo eGEn (nn r Pf aCti erd d oo Ooe riep sn an uny s tc t t- t li -e i- isb Peb o- ta reb a rn s na Ds oe cs e e Pd cye d t e red D C D oc s eD h te scit e oe e t ieec nnt ce c skt gic t so i )t o ii no nnn g LL ni seigetartS esnefeD • D i cn ia oflt uua sen dP c ao i ti n as go inn t ti ohn e tg h. t erD a tria ant iia nn ip g no gi ps dro o an c ti aen ssg es t.s bt Ta y hn id i sns cjef ao c nr ti ina ngt tt ra omc dk uae l cr i es - tnere nh oIn NA --I VS uu lp np el ry a bC ih lia tii en s Defenses( M S foa ej l ro f- r ) Ni Cty or iV t niqo -ut Ae e I-Inherent Threats (Out of Scope) vulnerabilities or biases, compromising the security, Vulnerabilities and Threats Defenses effectiveness,orethicalbehavioroftheresultingmod- els[206].Variousstudy[148,290,289,2,291,239] have demonstrated that pre-trained models are vul- nerable to compromise via methods such as using Figure 4: Taxonomy of Threats and the Defenses. The line untrusted weights or content, including the insertion represents a defense technique that can defend against either of poisoned examples into their datasets. By their a specific attack or a group of attacks. inherentnatureaspre-trainedmodels,LLMsaresus- ceptibletodatapoisoningattacks[227,251,245].For introducing hidden triggers into the model to ma- example,Alexanderetal.[290]showedthatevenwith nipulate specific behaviors or responses when the just100poisonexamples,LLMscanproduceconsis- triggerisencountered.LLMsaresubjecttobackdoor tentlynegativeresultsorflawedoutputsacrossvarious attacks[161,331,167].Forexample,Yaoetal.[329]a tasks.Largerlanguagemodelsaremoresusceptibleto |
bidirectionalbackdoor,whichcombinestriggermech- poisoning,andexistingdefenseslikedatafilteringor anismswithprompttuning. modelcapacityreductionofferonlymoderateprotec- tionwhilehurtingtestaccuracy. (A2) Inference Attacks. Inference attacks in the context • BackdoorAttacks.Backdoorattacksinvolvethema- of machine learning refer to a class of attacks where an licious manipulation of training data and model pro- adversary tries to gain sensitive information or insights cessing, creating a vulnerability where attackers can about a machine learning model or its training data by embedahiddenbackdoorintothemodel[322].Both makingspecificqueriesorobservationstothemodel.These backdoor attacks and data poisoning attacks involve attacks often exploit unintended information leakage from manipulating machine learning models, which can theresponses. includemanipulationofinputs.However,thekeydis- tinctionisthatbackdoorattacksspecificallyfocuson • AttributeInferenceAttacks.AttributeinferenceAt- tack[208,181,133,258,183,160]isatypeofthreat Yifan Yao et al.: PreprintsubmittedtoElsevier Page 10 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly where an attacker attempts to deduce sensitive or has prompted discussions about the ethical responsibilities personalinformationofindividualsorentitiesbyana- of organizations and researchers developing and deploying lyzingthebehaviororresponsesofamachinelearning thesemodels.Thishasledtoincreasedscrutinyandresearch models. It works against the LLMs as well. Robin onbiasandfairness.Concernsofbiaswereraisedfromvar- et al. [261] presented the first comprehensive exam- ious fields, encompassing gender and minority groups [65, ination of pretrained LLMs’ ability to infer personal 144,81,244],theidentificationofmisinformation,political informationfromtext.UsingadatasetofrealReddit aspects. Multiple studies [269, 281] revealed biases in the profiles, the study demonstrated that current LLMs language used while querying LLMs. Moreover, Urman et canaccuratelyinferavarietyofpersonalinformation al.[282]discoveredthatbiasesmayarisefromadherenceto (e.g.,location,income,sex)withhighaccuracy. governmentcensorshipguidelines.Biasinprofessionalwrit- ing[292,263,79]involvingLLMsisalsoaconcernwithin • MembershipInferences.MembershipinferenceAt- the community, as it can significantly damage credibility. tack is a specific type of inference attack in the field The biases of LLMs may also lead to negative side effects ofdatasecurityandprivacythatdeterminingwhether inareasbeyondtext-basedapplications.Daietal.[47]noted a data record was part of a model’s training dataset, that content generated by LLMs might introduce biases in given white-/black-box access to the model and the neuralretrievalsystems,andHuangetal.[111]discovered specificdatarecord[250,68,143,85,84,191,112].A thatbiasescouldalsobepresentinLLMgeneratedcode. numberofresearchstudieshaveexploredtheconcept of membership inference, each adopting a unique (A5) Instruction Tuning Attacks. Instruction tuning, also perspective and methodology. These studies have knownasinstruction-basedfine-tuning,isamachine-learning exploredvariousmembershipinferenceattacksbyan- technique used to train and adapt language models for alyzingthelabel[42],determiningthethreshold[120, specifictasksbyprovidingexplicitinstructionsorexamples 28, 96], developing a generalized formulation [278], duringthefine-tuningprocess.InLLMs,instruction-tuning among other methods. Mireshghallah et al. [192] attacks refer to a class of attacks or manipulations that foundthatfine-tuningtheheadofthemodelexhibits target instruction-tuned LLMs. These attacks are aimed at greater susceptibility to attacks when compared to exploiting vulnerabilities or limitations in LLMs that have fine-tuningsmalleradapters. been fine-tuned with specific instructions or examples for particulartasks. (A3)ExtractionAttacks.Extractionattackstypicallyrefer • Jailbreaking.JailbreakinginLLMsinvolvesbypass- to attempts by adversaries to extract sensitive information ing security features to enable responses to other- or insights from machine learning models or their asso- wise restricted or unsafe questions, unlocking capa- ciated data. Extraction attacks and inference attacks share bilitiesusuallylimitedbysafetyprotocols.Numerous similaritiesbutdifferintheirspecificfocusandobjectives. studies have demonstrated various methods for suc- Extraction attacks aim to acquire specific resources (e.g., cessfully jailbreaking LLMs [159, 271, 248]. Wei et model gradient, training data) or confidential information al. [301] emphasized that the alignment capabilities directly.Inferenceattacksseektogainknowledgeorinsights of LLMs can be influenced or manipulated through aboutthemodelordata’scharacteristics,oftenbyobserving in-contextdemonstrations.Inadditiontothis,several the model’s responses or behavior. Various types of data researches [300, 132] also demonstrated similar ma- extractionattacksexist,includingmodeltheftattacks[130, nipulationusingvariousapproaches,highlightingthe 137], gradient leakage [158], and training data extraction versatility of methods that can jailbreaking LLMs. attacks[29].Asofthecurrentwriting,ithasbeenobserved Morerecently,MASTERKEY[54]employedatime- thattrainingdataextractionattacksmaybeeffectiveagainst based method for dissecting defenses, and demon- LLMs. Training data extraction [29] refers to a method |
stratedproof-of-conceptattacks.Itautomaticallygen- whereanattackerattemptstoretrievespecificindividualex- erates jailbreak prompts with a 21.58Moreover, di- amplesfromamodel’strainingdatabystrategicallyquery- verse methods have been employed in jailbreaking ingthemachinelearningmodels.Numerousresearch[344, LLMs,suchasconductingfuzzing[328],implement- 210, 326] studies have shown that it is possible to extract ingoptimizedsearchstrategies[353],andeventrain- training data from LLMs, which may include personal and ing LLMs specifically to jailbreak other LLMs [53, privateinformation[113,339].Notably,theworkbyTruong 353].Meanwhile,Caoetal.[27]developedRA-LLM, et al. [279] stands out for its ability to replicate the model amethodtolowersthesuccessrateofadversarialand withoutaccessingtheoriginalmodeldata. jailbreakingpromptswithoutneedingofretrainingor (A4) Bias and Unfairness Exploitation. Bias and unfair- accesstomodelparameters. nessinLLMspertaintothephenomenonwherethesemod- • PromptInjection.Promptinjectionattackdescribes elsdemonstrateprejudicedoutcomesordiscriminatorybe- a method of manipulating the behavior of LLMs to haviors. While bias and fairness issues are not unique to elicit unexpected and potentially harmful responses. LLMs,theyhavereceivedmoreattentionduetotheethical This technique involves crafting input prompts in a andsocietalconcerns.Thatis,thesocietalimpactofLLMs Yifan Yao et al.: PreprintsubmittedtoElsevier Page 11 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly way that bypasses the model’s safeguards or trig- (A6) Remote Code Execution (RCE). RCE attacks typ- gers undesirable outputs. A substantial amount of ically target vulnerabilities in software applications, web research[177,332,135,299,173,124]hasalreadyau- services,orserverstoexecutearbitrarycoderemotely.While tomated the process of identifying semantic preserv- RCE attacks are not typically applicable directly to LLMs, ing payload in prompt injections with various focus. if an LLM is integrated into a web service (e.g.,https: Facilitatedbythecapabilityforfine-tuning,backdoors //chat.openai.com/) and if there are RCE vulnerabilities may be introduced through prompt attacks [12, 133, in the underlying infrastructure or code of that service, it 346, 243]. Moreover, Greshake et al. [89] expressed could potentially lead to the compromise of the LLM’s concerns about the potential for new vulnerabilities environment. Tong et al. [175] identified 13 vulnerabilities arisingfromLLMsinvokingexternalresources.Other in six frameworks, including 12 RCE vulnerabilities and 1 studies have also demonstrated the ability to take arbitrary file read/write vulnerability. Additionally, 17 out advantage of prompt injection attacks, such as un- of 51 tested apps were found to have vulnerabilities, with veiling guide prompts [342], virtualizing prompt in- 16beingvulnerabletoRCEand1toSQLinjection.These jection [320], and integrating applications [178]. He vulnerabilities allow attackers to execute arbitrary code on et al. [100, 101] explored a shift towards leveraging appserversthroughpromptinjections. LLMs, trained on extensive datasets, for mitigating suchattacks. (A7) Side Channel. While LLMs themselves do not typi- callyleakinformationthroughtraditionalsidechannelssuch • Denial of Service. A Denial of Service (DoS) at- aspowerconsumptionorelectromagneticradiation,theycan tack is a type of cyber attack that aims to exhaust be vulnerable to certain side-channel attacks in practical computationalresources,causinglatencyorrendering deployment scenarios. For example, Edoardo et al. [51] resources unavailable. Due to the nature of LLMs introduce privacy side channel attacks, which are attacks requiresignificantamountofresources,attackersuse that exploit system-level components (e.g., data filtering, deliberatelyconstructpromptstoreducetheavailabil- outputmonitoring)toextractprivateinformationatamuch ityofmodels[59].Shumailovetal.[252]provedthe higher rate thanwhat standalone models canachieve. Four possibility of conducting sponge attacks in the field categoriesofsidechannelscoveringtheentireMLlifecycle of LLMs, specifically designed to maximize energy areproposed,enablingenhancedmembershipinferenceat- consumption and latency (by a factor of 10 to 200). tacksandnovelthreats(e.g.,extractingusers’testqueries). Thisstrategyaimstodrawthecommunity’sattention For instance, the research demonstrates how deduplicating to their potential impact on autonomous vehicles, as training data before applying differentially-private training wellasscenariosrequiringmakingdecisionsintimely createsasidechannelthatcompromisesprivacyguarantees. manner. (A8)SupplyChainVulnerabilities.SupplyChainVulner- FindingV.Currently,thereislimitedresearchonmodel abilities refer to the risks in the lifecycle of LLM appli- extraction attacks [68], parameter extraction attacks, or cations that may arise from using vulnerable components the extraction of other intermediate esults [279]. While or services. These include third-party datasets, pre-trained thereareafewmentionsofthesetopics,theytendtore- models, and plugins, any of which can compromise the mainprimarilytheoretical(e.g.,[172]),withlimitedprac- application’s integrity [206]. Most research in this field is |
ticalimplementationorempiricalexploration.Webelieve focused on the security of plugins. An LLM plugin is an that the sheer scale of parameters in LLMs complicates extension or add-on module that enhances the capabilities thesetraditionalapproaches,renderingthemlesseffective of an LLM. Third-party plug-ins have been developed to oreveninfeasible.Additionally,themostpowerfulLLMs expand its functionality, enabling users to perform various are privately owned, with their weights, parameters, and tasks, including web searches, text analysis, and code exe- other details kept confidential, further shielding them cution. However, some of the concerns raised by security from conventional attack strategies. Strict censorship of experts [206, 25] include the possibility of plug-ins being outputsgeneratedbytheseLLMschallengesevenblack- usedtostealchathistories,accesspersonalinformation,or box traditional ML attacks, as it limits the attackers’ execute code on users’ machines. These vulnerabilities are abilitytoexploitoranalyzethemodel’sresponses. associatedwiththeuseofOAuthinplug-ins,awebstandard for data sharing across online accounts. Umar et al. [115] 6.1.2. Non-AIInherentVulnerabilitiesandThreats attempted to address this problem by designing a frame- We also need to consider non-AI Inherent Attacks, which work.Theframeworkformulatesanextensivetaxonomyof encompass external threats and new vulnerabilities (which attacks specific to LLM platforms, taking into account the have not been observed or investigated in traditional AI capabilities of plugins, users, and the LLM platform itself. models) that LLMs might encounter. These attacks may Byconsideringtherelationshipsbetweenthesestakeholders, not be intricately linked to the internal mechanisms of the theframeworkhelpsidentifypotentialsecurity,privacy,and AImodel,yettheycanpresentsignificantrisks.Illustrative safetyrisks. instances of non-AI Inherent Attacks involve system-level vulnerabilities(e.g.,remotecodeexecution). Yifan Yao et al.: PreprintsubmittedtoElsevier Page 12 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly 6.2. DefensesforLLMs de-identification (personally identifiable information In this section, we examine the range of existing defense (PII)) [264, 284], and deduplication [153, 134, 106, methods against various attacks and vulnerabilities associ- 157]. Debiasing and detoxification aimed to remove atedwithLLMs1. undesirablecontentfromtrainingcorpora. 6.2.1. DefenseinModelArchitecture • OptimizationMethods.Optimizationobjectivesare Model architectures determine how knowledge and con- crucial in directing how LLMs learn from training ceptsarestored,organized,andcontextuallyinteractedwith, data, influencing which behaviors are encouraged or which is crucial in the safety of Large Language Models. penalized. These objectives affect the prioritization Therehavebeenalotofworks[165,351,168,333]delved ofknowledgeandconceptswithincorpora,ultimately into how model capacities affect the privacy preservation impactingtheoverallsafetyandethicalalignmentof and robustness of LLMs. Li et al. [165] revealed that lan- LLMs. In this context, robust training methods like guage models with larger parameter sizes can be trained adversarial training [176, 293, 350, 330, 163] and more effectively in the differential privacy manner using robust fine-tuning [66, 121] have shown resilience appropriate non-standard hyper-parameters, in comparison against perturbation-based text attacks. Drawing in- to smaller models. Zhu et al. [351] and Li et al. [168] spiration from traditional adversarial training in the foundthatLLMswithlargercapacities,suchasthosewith imagefield[182],Ivgietal.[116]andYooetal.[330] more extensive parameter sizes, generally show increased applied adversarial training to LLMs by generating robustnessagainstadversarialattacks.Thiswasalsoverified perturbations concerning discrete tokens. Wang et in the Out-of-distribution (OOD) robustness scenarios by al. [293] extended this approach to the continuous Yuan et al. [333]. Beyond the architecture of LLMs them- embedding space, facilitating more practical conver- selves, studies have focused on improving LLM safety by gence,asfollowedbysubsequentresearch[176,350, combiningthemwithexternalmodulesincludingknowledge 163]. Safety alignments [205], an emerging learning graphs [39] and cognitive architectures (CAs) [150, 11]. paradigm, guide LLM behavior using well-aligned Romero et al. [231] proposed improving AI robustness by additional models or human annotations, proving ef- incorporating various cognitive architectures into LLMs. fective for ethical alignment. Efforts to align LLMs Zafar et al. [336] aimed to build trust in AI by enhancing withotherLLMs[334]andLLMsthemselves[268]. thereasoningabilitiesofLLMsthroughknowledgegraphs. Intermsofhumanannotations,Zhouetal.[349]and Shi et al. [249] emphasized the importance of high- 6.2.2. DefensesinLLMTrainingandInference qualitytrainingcorporawithcarefullycuratedinstruc- tionsandoutputsforenhancinginstruction-following Defense Strategies in LLM Training. The core compo- capabilities in LLMs. Bianchi et al. [20] highlighted nentsofLLMtrainingincludemodelarchitectures,training thatthesafetyofLLMscanbesubstantiallyimproved data,andoptimizationmethods.Regardingmodelarchitec- by incorporating a limited percentage (e.g., 3%) of |
tures,weexaminetrustworthydesignsthatexhibitincreased safeexamplesduringfine-tuning. robustness against malicious use. For training corpora, our investigation focuses on methods aimed at mitigating un- desired properties during the generation, collection, and DefenseStrategiesinLLMInference.WhenLLMsarede- cleaning of training data. In the context of optimization ployedascloudservices,theyoperatebyreceivingprompts methods,wereviewexistingworksthatdevelopedsafeand or instructions from users and generating completed sen- secureoptimizationframeworks. tences in response. Given this interaction model, the im- plementation of test-time LLM defense becomes a neces- • CorporaCleaning.LLMsareshapedbytheirtrain- sary and critical aspect of ensuring safe and appropriate ingcorpora,fromwhichtheylearnbehavior,concepts, outputs. Generally, test-time defense encompasses a range and data distributions [302]. Therefore, the safety of strategies, including the pre-processing of prompts and of LLMs is crucially influenced by the quality of instructionstofilterormodifyinputs,thedetectionofabnor- the training corpora [86, 204]. However, it has been maleventsthatmightsignalmisuseorproblematicqueries, widelyacknowledgedthatrawcorporacollectedfrom and the post-processing of generated responses to ensure thewebarefullofissuesoffairness[14],toxicity[88], theyadheretosafetyandethicalguidelines.Test-timeLLM privacy[208],truthfulness[171],etc.Alotofefforts defensesareessentialtomaintaintheintegrityandtrustwor- havebeenmadetocleanrawcorporaandcreatehigh- thinessofLLMsinreal-timeapplications. qualitytrainingcorporaforLLMs[129,306,152,307, 213,277].Ingeneral,thesepipelinesconsistofthefol- • Instruction Processing (Pre-Processing). Instruc- lowingsteps:languageidentification[129,9],detox- tion pre-processing applies transformations over in- ification [88, 48, 180, 195], debiasing [188, 21, 16], structions sent by users, in order to destroy potential adversarial contexts or malicious intents. It plays a 1Please be aware that we will not delve into solutions for non-AI inherentvulnerabilitiesastheytendtobehighlyspecifictoindividualcases. vital role as it blocks out most malicious usage and Yifan Yao et al.: PreprintsubmittedtoElsevier Page 13 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly prevents LLMs from receiving suspicious instruc- 7. Discussion tions. In general, instruction pre-processing methods can be categorized as instruction manipulation [246, 7.1. LLMinOtherSecurityRelatedTopics 230,140,117,318],purification[164],anddefensive LLMsinCybersecurityEducation.LLMscanbeusedin demonstrations [172, 193, 301]. Jain et al. [117] security practices and education [80, 162, 270]. For exam- and Kirchenbauer et al. [140] evaluated multiple ple, in a software security course, students are tasked with baseline preprocessing methods against jailbreaking identifyingandresolvingvulnerabilitiesinawebapplication attacks, including retokenization and paraphrase. Li usingLLMs.Jingyueetal.[162]investigatedhowChatGPT et al. [164] proposed to purify instructions by first canbeusedbystudentsfortheseexercises.WesleyTannet masking the input tokens and then predicting the al.[270]focusedontheevaluationofLLMsinthecontext maskedtokenswithotherLLMs.Thepredictedtokens ofcybersecurityCapture-The-Flag(CTF)exercises(partici- willserveasthepurifiedinstructions.Weietal.[301] pantsfind“flags”byexploitingsystemvulnerabilities).The and Mo et al. [193] demonstrated that inserting pre- studyfirstassessedthequestion-answeringperformanceof defined defensive demonstrations into instructions these LLMs on Cisco certifications with varying difficulty effectivelydefendsjailbreakingattacksofLLMs. levels, then examined their abilities in solving CTF chal- lenges.Jinetal.[126]conductedacomprehensivestudyon LLMs’understandingofbinarycodesemantics[127]across • MaliciousDetection(In-Processing).Maliciousde- differentarchitecturesandoptimizationlevels,providingkey tectionprovidesin-depthexaminationsofLLMinter- insightsforfutureresearchinthisarea. mediate results, such as neuron activation, regarding thegiveninstructions,whicharemoresensitive,accu- LLMsinCybersecurityLaws,PoliciesandCompliance. rate,andspecifiedformalicioususage.Sunetal.[266] LLMs can assist in drafting security policies, guidelines, proposedtodetectbackdooredinstructionswithback- andcompliancedocumentation,ensuringthatorganizations wardprobabilitiesofgenerations.Xietal.[312]dif- meetregulatoryrequirementsandindustrystandards.How- ferentiatednormalandpoisonedinstructionsfromthe ever,it’simportanttorecognizethattheutilizationofLLMs perspective of mask sensitivities. Shao et al. [246] canpotentiallynecessitatechangestocurrentcybersecurity- identified suspicious words according to their tex- related laws and policies. The introduction of LLMs may tual relevance. Wang et al. [298] detected adversar- raisenewlegalandregulatoryconsiderations,asthesemod- ial examples according to the semantic consistency elscanimpactvariousaspectsofcybersecurity,dataprotec- amongmultiplegenerations,whichhasbeenexplored tion, and privacy. Ekenobi et al. [273] examined the legal intheuncertaintyquantificationofLLMsbyDuanet implications arising from the introduction of LLMs, with |
al.[67].ApartfromtheintrinsicpropertiesofLLMs, aparticularfocusondataprotectionandprivacyconcerns. therehavebeenworksleveragingthelinguisticstatis- It acknowledges that ChatGPT’s privacy policy contains ticproperties,suchasdetectingoutlierwords[220], commendableprovisionsforsafeguardinguserdataagainst potentialthreats.Thepaperalsoadvocatedforemphasizing therelevanceofthenewlaw. • Generation Processing (Post-Processing). Genera- tion post processing refers to examining the proper- 7.2. FutureDirections ties(e.g.,harmfulness)ofthegeneratedanswersand Wehavegleanedvaluablelessonsthatwebelievecanshape applyingmodificationsifnecessary,whichisthefinal futuredirections. step before delivering responses to users. Chen et al. [34] proposed to mitigate the toxicity of genera- • Using LLMs for ML-Specific Tasks. We noticed tions by comparing with multiple model candidates. thatLLMscaneffectivelyreplacetraditionalmachine Helbling et al. [103] incorporated individual LLMs learningmethodsandinthiscontext,iftraditionalma- toidentifytheharmfulnessofthegeneratedanswers, chinelearningmethodscanbeemployedinaspecific which shared similar ideas as Xiong et al. [317] and securityapplication(whetheroffensiveordefensivein Kadavathetal.[131]wheretheyrevealedthatLLMs nature),itishighlyprobablethatLLMscanalsobeap- canbepromptedtoanswertheconfidencesregarding pliedtoaddressthatparticularchallenge.Forinstance, thegeneratedresponses. traditionalmachinelearningmethodshavefoundutil- ityinmalwaredetection,andLLMscansimilarlybe Finding VI. For defense in LLM training, there’s a no- harnessedforthispurpose.Therefore,onepromising tablescarcityofresearchexaminingtheimpactofmodel avenue is to harness the potential of LLMs in secu- architecture on LLM safety, which is likely due to the rity applications where machine learning serves as a highcomputationalcostsassociatedwithtrainingorfine- foundationalorwidelyadoptedtechnique.Assecurity tuning large language models. We observed that safe researchers,wearecapableofdesigningLLM-based instruction tuning is a relatively new development that approachestotacklesecurityissues.Subsequently,we warrantsfurtherinvestigationandattention. can compare these approaches with state-of-the-art methodstopushtheboundaries. Yifan Yao et al.: PreprintsubmittedtoElsevier Page 14 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly • Replacing Human Efforts. It is evident that LLMs ChatGPT) could potentially alter the current cybersecurity have the potential to replace human efforts in both landscape by blending technical and social aspects. Their offensiveanddefensivesecurityapplications.Forin- emphasis leans more towards the social aspects. Muna et stance, tasks involving social engineering, tradition- al. [5] and Marshall et al. [185] discussed the impact of allyreliantonhumanintervention,cannowbeeffec- ChatGPT in cybersecurity, highlighting its practical appli- tivelyexecutedusingLLMtechniques.Therefore,one cations (e.g., code security, malware detection). Dhoni et promising avenue for security researchers is to iden- al.[62]demonstratedhowLLMscanassistsecurityanalysts tify areas within traditional security tasks where hu- indevelopingsecuritysolutionsagainstcyberthreats.How- maninvolvementhasbeenpivotalandexploreoppor- ever, their work does not extensively address the potential tunities to substitute these human efforts with LLM cybersecurity threats that LLM may introduce. A number capabilities. of surveys (e.g., [92, 59, 247, 49, 60, 228, 240, 241, 7]) highlight the threats and attacks against LLMs. In com- • Modifying Traditional ML Attacks for LLMs. we parison to our work, they do not dedicate as much text to have observed that many security vulnerabilities in thevulnerabilitiesthattheLLMmaypossess.Instead,their LLMs are extensions of vulnerabilities found in tra- primary focus lies in the realm of security applications, as ditional machine-learning scenarios. That is, LLMs they delve into utilizing LLMs for launching cyberattacks. remainaspecializedinstanceofdeepneuralnetworks, Attia Qammar et al. [219] and Maximilian et al. [196] inheritingcommonvulnerabilitiessuchasadversarial discussed vulnerabilities exploited by cybercriminals, with attacksandinstructiontuning attacks.Withtheright a specific focus on the risks associated with LLMs. Their adjustments(e.g.,thethreatmodel),traditionalMLat- works emphasized the need for strategies and measures tackscanstillbeeffectiveagainstLLMs.Forinstance, to mitigate these threats and vulnerabilities. Haoran Li et thejailbreakingattackisaspecificformofinstruction al.[166]analyzedcurrentprivacyconcernsonLLMs,cate- tuningattackaimedatproducingrestrictedtexts. gorizingthembasedonadversarycapabilities,andexplored existingdefensestrategies.GlorinSebastian[242]explored • AdaptingTraditionalMLDefensesforLLMs.The theapplicationofestablishedPrivacy-EnhancingTechnolo- countermeasures traditionally employed for vulnera- gies(e.g.,differentialprivacy[70],federatedlearning[338], bilitymitigationcanalsobeleveragedtoaddressthese and data minimization [216]) for safeguarding the privacy security issues. For example, there are existing ef- of LLMs. Smith et al. [256] also discussed the privacy fortsthatutilizetraditionalPrivacy-EnhancingTech- |
risks of LLMs. Our study comprehensively examined both nologies(e.g.,zero-knowledgeproofs,differentialpri- the security and privacy aspects of LLMs. In summary, vacy, and federated learning [304, 305] ) to tackle ourresearchconductedanextensivereviewoftheliterature privacy challenges posed by LLMs. Exploring addi- on LLMs from a three-fold perspective: beneficial security tional PETs techniques, whether they are established applications(e.g.,vulnerabilitydetection,securecodegen- methods or innovative approaches, to address these eration),adverseimplications(e.g.,phishingattacks,social challenges represents another promising research di- engineering), and vulnerabilities (e.g., jailbreaking attacks, rection. prompt attacks), along with their corresponding defensive • Solving Challenges in LLM-Specific Attacks. As measures. previouslydiscussed,thereareseveralchallengesas- sociated with implementing model extraction or pa- 9. Conclusion rameter extraction attacks (e.g., vast scale of LLM parameters, private ownership and confidentiality of Our work represents a pioneering effort in systematically powerful LLMs). These novel characteristics intro- examining the multifaceted role of LLMs in security and duced by LLMs represent a significant shift in the privacy.Onthepositiveside,LLMshavesignificantlycon- landscape, potentially leading to new challenges and tributed to enhancing code and data security, while their necessitating the evolution of traditional ML attack versatile nature also opens the door to malicious applica- methodologies. tions.Wealsodelvedintotheinherentvulnerabilitieswithin thesemodels,anddiscusseddefensemechanisms.Wehave illuminated the path forward for harnessing the positive 8. RelatedWork aspects of LLMs while mitigating their potential risks. As LLMs continue to evolve and find their place in an ever- TherehavealreadybeenanumberofLLMsurveysreleased expanding array of applications, it is imperative that we with a variety of focuses (e.g., LLM evolution and taxon- remainvigilantinaddressingsecurityandprivacyconcerns, omy [31, 347, 309, 93, 311, 23, 348], software engineer- ensuringthatthesepowerfulmodelscontributepositivelyto ing [77, 108], and medicine [274, 44]). In this paper, our thedigitallandscape. primary emphasis is on the security and privacy aspects of LLMs. We now delve into an examination of the ex- isting literature pertaining to this particular topic. Peter J. Caven [30] specifically explores how LLMs (particularly, Yifan Yao et al.: PreprintsubmittedtoElsevier Page 15 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly Acknowledgement [20] F. Bianchi, M. Suzgun, G. Attanasio, P. Röttger, D. Jurafsky, T.Hashimoto,andJ.Zou,“Safety-tunedllamas:Lessonsfromim- We thank the anonymous reviewers and Xin Jin from The provingthesafetyoflargelanguagemodelsthatfollowinstructions,” Ohio State University for their invaluable feedback. This arXivpreprintarXiv:2309.07875,2023. research was supported partly by the NSF award FMitF- [21] S.BordiaandS.R.Bowman,“Identifyingandreducinggenderbias 2319242. Any opinions, findings, conclusions, or recom- inword-levellanguagemodels,”arXivpreprintarXiv:1904.03035, 2019. mendationsexpressedarethoseoftheauthorsandnotnec- [22] M. Botacin, “Gpthreats-3: Is automatic malware generation a essarilyoftheNSF. threat?” in 2023 IEEE Security and Privacy Workshops (SPW). IEEE,2023,pp.238–254. [23] S.R.Bowman,“Eightthingstoknowaboutlargelanguagemodels,” References arXivpreprintarXiv:2304.00612,2023. [24] T.B.Brown,B.Mann,N.Ryder,M.Subbiah,J.Kaplan,P.Dhari- [1] M.Abbasian,I.Azimi,A.M.Rahmani,andR.Jain,“Conversational wal,A.Neelakantan,P.Shyam,G.Sastry,A.Askell,S.Agarwal, healthagents:Apersonalizedllm-poweredagentframework,”2023. A.Herbert-Voss,G.Krueger,T.Henighan,R.Child,A.Ramesh, [2] H. Aghakhani, W. Dai, A. Manoel, X. Fernandes, A. Kharkar, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, C. Kruegel, G. Vigna, D. Evans, B. Zorn, and R. Sim, “Trojan- M.Litwin,S.Gray,B.Chess,J.Clark,C.Berner,S.McCandlish, puzzle:Covertlypoisoningcode-suggestionmodels,”arXivpreprint A. Radford, I. Sutskever, and D. Amodei, “Language models are arXiv:2301.02344,2023. few-shotlearners,”2020. [3] B. Ahmad, S. Thakur, B. Tan, R. Karri, and H. Pearce, [25] M.Burgess,“Chatgpthasaplug-inproblem,”https://www.wired. “Fixing hardware security bugs with large language models,” com/story/chatgpt-plugins-security-privacy-risk/,2023. arXiv preprint arXiv:2302.01215, 2023. [Online]. Available: [26] Y.Cai,S.Mao,W.Wu,Z.Wang,Y.Liang,T.Ge,C.Wu,W.You, https://doi.org/10.48550/arXiv.2302.01215 [4] M. AI, “Introducing llama: A foundational, 65-billion- T.Song,Y.Xiaetal.,“Low-codellm:Visualprogrammingover parameter language model,” https://ai.meta.com/blog/ llms,”arXivpreprintarXiv:2304.08103,2023. |
[27] B.Cao,Y.Cao,L.Lin,andJ.Chen,“Defendingagainstalignment- large-language-model-llama-meta-ai/, feb 2023, accessed: breakingattacksviarobustlyalignedllm,”2023. 2023-11-13. [28] N.Carlini,S.Chien,M.Nasr,S.Song,A.Terzis,andF.Tramer, [5] M.Al-Hawawreh,A.Aljuhani,andY.Jararweh,“Chatgptforcyber- security:practicalapplications,challenges,andfuturedirections,” “Membershipinferenceattacksfromfirstprinciples,”in2022IEEE ClusterComputing,vol.26,no.6,pp.3421–3436,2023. SymposiumonSecurityandPrivacy(SP). IEEE,2022,pp.1897– 1914. [6] S. Alagarsamy, C. Tantithamthavorn, and A. Aleti, “A3test: [29] N. Carlini, F. Tramer, E. Wallace, M. Jagielski, A. Herbert-Voss, Assertion-augmented automated test case generation,” arXiv preprintarXiv:2302.10352,2023. K.Lee,A.Roberts,T.Brown,D.Song,U.Erlingssonetal.,“Ex- [7] M. Alawida, B. A. Shawar, O. I. Abiodun, A. Mehmood, A. E. tractingtrainingdatafromlargelanguagemodels,”in30thUSENIX Omolara et al., “Unveiling the dark side of chatgpt: Exploring SecuritySymposium(USENIXSecurity21),2021,pp.2633–2650. [30] P.Caven,“Amoreinsecureecosystem?chatgpt’sinfluenceoncyber- cyberattacksandenhancinguserawareness,”2023. [8] T. Ali and P. Kostakos, “Huntgpt: Integrating machine learning- security,”ChatGPT’sInfluenceonCybersecurity(April30,2023), 2023. based anomaly detection and explainable ai with large language [31] Y.Chang,X.Wang,J.Wang,Y.Wu,K.Zhu,H.Chen,L.Yang, models(llms),”arXivpreprintarXiv:2309.16021,2023. [9] E.Ambikairajah,H.Li,L.Wang,B.Yin,andV.Sethu,“Language X.Yi,C.Wang,Y.Wangetal.,“Asurveyonevaluationoflarge identification: A tutorial,” IEEE Circuits and Systems Magazine, languagemodels,”arXivpreprintarXiv:2307.03109,2023. [32] P. V. S. Charan, H. Chunduri, P. M. Anand, and S. K. Shukla, vol.11,no.2,pp.82–108,2011. “Fromtexttomitretechniques:Exploringthemalicioususeoflarge [10] Z. Amos, “What is fraudgpt?” https://hackernoon.com/ languagemodelsforgeneratingcyberattackpayloads,”2023. what-is-fraudgpt,2023. [33] B. Chen, F. Zhang, A. Nguyen, D. Zan, Z. Lin, J.-G. Lou, and [11] J.R.AndersonandC.J.Lebiere,Theatomiccomponentsofthought. PsychologyPress,2014. W. Chen, “Codet: Code generation with generated tests,” arXiv [12] Anonymous, “On the safety of open-sourced large language preprintarXiv:2207.10397,2022. [34] B. Chen, A. Paliwal, and Q. Yan, “Jailbreaker in jail: Mov- models:Doesalignmentreallypreventthemfrombeingmisused?” in Submitted to The Twelfth International Conference on ing target defense for large language models,” arXiv preprint LearningRepresentations,2023,underreview.[Online].Available: arXiv:2310.02417,2023. [35] C. Chen and K. Shu, “Can llm-generated misinformation be de- https://openreview.net/forum?id=E6Ix4ahpzd tected?”2023. [13] B.B.Arcila,“Isitaplatform?isitasearchengine?it’schatgpt!the [36] ——,“Combatingmisinformationintheageofllms:Opportunities europeanliabilityregimeforlargelanguagemodels,”J.FreeSpeech L.,vol.3,p.455,2023. andchallenges,”arXivpreprintarXiv:2311.05656,2023. [37] C.Chen,J.Su,J.Chen,Y.Wang,T.Bi,Y.Wang,X.Lin,T.Chen, [14] A.H.Bailey,A.Williams,andA.Cimpian,“Basedonbillionsof and Z. Zheng, “When chatgpt meets smart contract vulnerability words on the internet, people= men,” Science Advances, vol. 8, no.13,p.eabm2463,2022. detection:Howfararewe?”arXivpreprintarXiv:2309.05520,2023. [Online].Available:https://doi.org/10.48550/arXiv.2309.05520 [15] A. Bakhshandeh, A. Keramatfar, A. Norouzi, and M. M. [38] T. Chen, L. Li, L. Zhu, Z. Li, G. Liang, D. Li, Chekidehkhoun, “Using chatgpt as a static application security Q. Wang, and T. Xie, “Vullibgen: Identifying vulnerable testingtool,”arXivpreprintarXiv:2308.14434,2023. [16] S.Barikeri,A.Lauscher,I.Vulić,andG.Glavaš,“Redditbias:Areal- third-party libraries via generative pre-trained model,” arXiv worldresourceforbiasevaluationanddebiasingofconversational preprint arXiv:2308.04662, 2023. [Online]. Available: https: //doi.org/10.48550/arXiv.2308.04662 languagemodels,”arXivpreprintarXiv:2106.03521,2021. [39] X.Chen,S.Jia,andY.Xiang,“Areview:Knowledgereasoningover [17] M.Beckerich,L.Plein,andS.Coronado,“Ratgpt:Turningonline llmsintoproxiesformalwareattacks,”2023. knowledgegraph,”ExpertSystemswithApplications,vol.141,p. 112948,2020. [18] S.Ben-Moshe,G.Gekker,andG.Cohen,“Opwnai:Aithatcansave [40] Y. Chen, A. Arunasalam, and Z. B. Celik, “Can large language |
thedayorhackitaway.checkpointresearch(2022),”2023. modelsprovidesecurity&privacyadvice?measuringtheabilityof [19] A.-R.BhojaniandM.Schwarting,“Truthandregret:Largelanguage llmstorefutemisconceptions,”2023. models,thequran,andmisinformation,”pp.1–7,2023. Yifan Yao et al.: PreprintsubmittedtoElsevier Page 16 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly [41] A. Cheshkov, P. Zadorozhny, and R. Levichev, “Evaluation [61] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre- of chatgpt model for vulnerability detection,” arXiv preprint trainingofdeepbidirectionaltransformersforlanguageunderstand- arXiv:2304.07232, 2023. [Online]. Available: https://doi.org/10. ing,”2019. 48550/arXiv.2304.07232 [62] P. Dhoni and R. Kumar, “Synergizing generative ai and cyberse- [42] C. A. Choquette-Choo, F. Tramer, N. Carlini, and N. Papernot, curity: Roles of generative ai entities, companies, agencies, and “Label-onlymembershipinferenceattacks,”inInternationalconfer- governmentinenhancingcybersecurity,”2023. enceonmachinelearning. PMLR,2021,pp.1964–1974. [63] H. Ding, V. Kumar, Y. Tian, Z. Wang, R. Kwiatkowski, X. Li, [43] M.Chowdhury,N.Rifat,S.Latif,M.Ahsan,M.S.Rahman,and M.K.Ramanathan,B.Ray,P.Bhatia,S.Senguptaetal.,“Astatic R.Gomes,“Chatgpt:Thecuriouscaseofattackvectors’supplychain evaluation of code completion by large language models,” arXiv managementimprovement,”in2023IEEEInternationalConference preprintarXiv:2306.03203,2023. onElectroInformationTechnology(eIT),2023,pp.499–504. [64] X.Ding,L.Chen,M.Emani,C.Liao,P.-H.Lin,T.Vanderbruggen, [44] J.Clusmann,F.R.Kolbinger,H.S.Muti,Z.I.Carrero,J.-N.Eckardt, Z. Xie, A. Cerpa, and W. Du, “Hpc-gpt: Integrating large N.G.Laleh,C.M.L.Löffler,S.-C.Schwarzkopf,M.Unger,G.P. languagemodelforhigh-performancecomputing,”inProceedings Veldhuizenetal.,“Thefuturelandscapeoflargelanguagemodelsin of the SC ’23 Workshops of The International Conference on medicine,”CommunicationsMedicine,vol.3,no.1,p.141,2023. High Performance Computing, Network, Storage, and Analysis, [45] D.R.Cotton,P.A.Cotton,andJ.R.Shipway,“Chattingandcheat- ser. SC-W 2023. ACM, Nov. 2023. [Online]. Available: ing:Ensuringacademicintegrityintheeraofchatgpt,”Innovations http://dx.doi.org/10.1145/3624062.3624172 inEducationandTeachingInternational,pp.1–12,2023. [65] X.Dong,Y.Wang,P.S.Yu,andJ.Caverlee,“Probingexplicitand [46] G. M. Currie, “Academic integrity and artificial intelligence: is implicitgenderbiasthroughllmconditionaltextgeneration,”arXiv chatgpthype,heroorheresy?”inSeminarsinNuclearMedicine. preprintarXiv:2311.00306,2023. Elsevier,2023. [66] X.Dong,A.T.Luu,M.Lin,S.Yan,andH.Zhang,“Howshouldpre- [47] S.Dai,Y.Zhou,L.Pang,W.Liu,X.Hu,Y.Liu,X.Zhang,andJ.Xu, trainedlanguagemodelsbefine-tunedtowardsadversarialrobust- “Llmsmaydominateinformationaccess:Neuralretrieversarebi- ness?”AdvancesinNeuralInformationProcessingSystems,vol.34, asedtowardsllm-generatedtexts,”arXivpreprintarXiv:2310.20501, pp.4356–4369,2021. 2023. [67] J. Duan, H. Cheng, S. Wang, C. Wang, A. Zavalny, R. Xu, [48] D.Dale,A.Voronov,D.Dementieva,V.Logacheva,O.Kozlova, B.Kailkhura,andK.Xu,“Shiftingattentiontorelevance:Towards N.Semenov,andA.Panchenko,“Textdetoxificationusinglargepre- theuncertaintyestimationoflargelanguagemodels,”arXivpreprint trainedneuralmodels,”arXivpreprintarXiv:2109.08914,2021. arXiv:2307.01379,2023. [49] B. Dash and P. Sharma, “Are chatgpt and deepfake algorithms [68] J.Duan,F.Kong,S.Wang,X.Shi,andK.Xu,“Arediffusionmodels endangering the cybersecurity industry? a review,” International vulnerable to membership inference attacks?” in Proceedings of JournalofEngineeringandAppliedSciences,vol.10,no.1,2023. the40thInternationalConferenceonMachineLearning,2023,pp. [50] Databricks, “Free dolly: Introducing the world’s 8717–8730. first open and commercially viable instruction-tuned [69] F. Duarte, “Number of chatgpt users (nov 2023),” https:// llm,” https://www.databricks.com/blog/2023/04/12/ explodingtopics.com/blog/chatgpt-users,2023,accessed:2023-11- dolly-first-open-commercially-viable-instruction-tuned-llm, 13. 2023,accessed:2023-11-13. [70] C. Dwork, “Differential privacy,” in International colloquium on [51] E. Debenedetti, G. Severi, N. Carlini, C. A. Choquette-Choo, automata,languages,andprogramming. Springer,2006,pp.1–12. |
M. Jagielski, M. Nasr, E. Wallace, and F. Tramèr, “Privacy [71] C.Egersdoerfer,D.Zhang,andD.Dai,“Earlyexplorationofusing side channels in machine learning systems,” arXiv preprint chatgpt for log-based anomaly detection on parallel file systems arXiv:2309.05610,2023. logs,”2023. [52] D. Delley, “Wormgpt – the generative ai tool cybercriminals are [72] D.O.Eke,“Chatgptandtheriseofgenerativeai:threattoacademic usingtolaunchbusinessemailcompromiseattacks,”https://shorturl. integrity?”JournalofResponsibleTechnology,vol.13,p.100060, at/iwFL7,2023. 2023. [53] G.Deng,Y.Liu,Y.Li,K.Wang,Y.Zhang,Z.Li,H.Wang,T.Zhang, [73] A. Elhafsi, R. Sinha, C. Agia, E. Schmerling, I. A. Nesnas, and andY.Liu,“Jailbreaker:Automatedjailbreakacrossmultiplelarge M.Pavone,“Semanticanomalydetectionwithlargelanguagemod- languagemodelchatbots,”arXivpreprintarXiv:2307.08715,2023. els,”AutonomousRobots,pp.1–21,2023. [54] ——,“Masterkey:Automatedjailbreakingoflargelanguagemodel [74] S. Eli and D. Gil, “Self-enhancing pattern detection with llms: chatbots,” in Proceedings of the 31th Annual Network and Dis- Our answer to uncovering malicious packages at scale,” https: tributedSystemSecuritySymposium(NDSS’24),2024. //apiiro.com/blog/llm-code-pattern-malicious-package-detection/, [55] G.Deng,Y.Liu,V.Mayoral-Vilches,P.Liu,Y.Li,Y.Xu,T.Zhang, 2023,accessed:2023-11-13. Y.Liu,M.Pinzger,andS.Rass,“Pentestgpt:Anllm-empoweredau- [75] T.EspinhaGasiba,K.Oguzhan,I.Kessba,U.Lechner,andM.Pinto- tomaticpenetrationtestingtool,”arXivpreprintarXiv:2308.06782, Albuquerque, “I’m sorry dave, i’m afraid i can’t fix your code: 2023. Onchatgpt,cybersecurity,andsecurecoding,”in4thInternational [56] Y. Deng, C. S. Xia, H. Peng, C. Yang, and L. Zhang, “Fuzzing Computer Programming Education Conference (ICPEC 2023). deep-learninglibrariesvialargelanguagemodels,”arXivpreprint Schloss-Dagstuhl-LeibnizZentrumfürInformatik,2023. arXiv:2212.14834,2022. [76] P. V. Falade, “Decoding the threat landscape: Chatgpt, fraudgpt, [57] Y.Deng,C.S.Xia,C.Yang,S.D.Zhang,S.Yang,andL.Zhang, andwormgptinsocialengineeringattacks,”InternationalJournal “Largelanguagemodelsareedge-casefuzzers:Testingdeeplearn- of Scientific Research in Computer Science, Engineering and inglibrariesviafuzzgpt,”arXivpreprintarXiv:2304.02014,2023. InformationTechnology,p.185–198,Oct.2023.[Online].Available: [58] ——,“Largelanguagemodelsareedge-casegenerators:Crafting http://dx.doi.org/10.32628/CSEIT2390533 unusual programs for fuzzing deep learning libraries,” in 2024 [77] A. Fan, B. Gokkaya, M. Harman, M. Lyubarskiy, S. Sengupta, IEEE/ACM46thInternationalConferenceonSoftwareEngineering S. Yoo, and J. M. Zhang, “Large language models for software (ICSE),2024,pp.830–842. engineering:Surveyandopenproblems,”2023. [59] E. Derner, K. Batistič, J. Zahálka, and R. Babuška, “A secu- [78] T.Fan,Y.Kang,G.Ma,W.Chen,W.Wei,L.Fan,andQ.Yang, rity risk taxonomy for large language models,” arXiv preprint “Fate-llm:Aindustrialgradefederatedlearningframeworkforlarge arXiv:2311.11415,2023. languagemodels,”arXivpreprintarXiv:2310.10049,2023. [60] E.DernerandK.Batistič,“Beyondthesafeguards:Exploringthe [79] X.Fang,S.Che,M.Mao,H.Zhang,M.Zhao,andX.Zhao,“Bias securityrisksofchatgpt,”2023. ofai-generatedcontent:Anexaminationofnewsproducedbylarge Yifan Yao et al.: PreprintsubmittedtoElsevier Page 17 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly languagemodels,”arXivpreprintarXiv:2309.09825,2023. [101] ——,“Youonlypromptonce:Onthecapabilitiesofpromptlearning [80] J.C.Farah,B.Spaenlehauer,V.Sharma,M.J.Rodríguez-Triana, on large language models to tackle toxic content,” in 2024 IEEE S.Ingram,andD.Gillet,“Impersonatingchatbotsinacodereview SymposiumonSecurityandPrivacy(SP),2024. exercisetoteachsoftwareengineeringbestpractices,”in2022IEEE [102] F.Heiding,B.Schneier,A.Vishwanath,andJ.Bernstein,“Devising Global Engineering Education Conference (EDUCON). IEEE, anddetectingphishing:Largelanguagemodelsvs.smallerhuman 2022,pp.1634–1642. models,”2023. [81] V.K.Felkner,H.-C.H.Chang,E.Jang,andJ.May,“Winoqueer: [103] A.Helbling,M.Phute,M.Hull,andD.H.Chau,“Llmselfdefense: Acommunity-in-the-loopbenchmarkforanti-lgbtq+biasinlarge By self examination, llms know they are being tricked,” arXiv languagemodels,”arXivpreprintarXiv:2306.15087,2023. preprintarXiv:2308.07308,2023. |
[82] S.Y.Feng,V.Gangal,J.Wei,S.Chandar,S.Vosoughi,T.Mitamura, [104] R.HelmkeandJ.vomDorp,“Checkforextendedabstract:Towards andE.Hovy,“Asurveyofdataaugmentationapproachesfornlp,” reliableandscalablelinuxkernelcveattributioninautomatedstatic arXivpreprintarXiv:2105.03075,2021. firmware analyses,” in Detection of Intrusions and Malware, and [83] M.Fu,C.Tantithamthavorn,V.Nguyen,andT.Le,“Chatgptfor Vulnerability Assessment: 20th International Conference, DIMVA vulnerabilitydetection,classification,andrepair:Howfararewe?” 2023, Hamburg, Germany, July 12–14, 2023, Proceedings, vol. 2023. 13959. SpringerNature,2023,p.201. [84] W.Fu,H.Wang,C.Gao,G.Liu,Y.Li,andT.Jiang,“Practicalmem- [105] P.Henrik,“Llm-assistedmalwarereview:Aiandhumansjoinforces bershipinferenceattacksagainstfine-tunedlargelanguagemodels tocombatmalware,”https://shorturl.at/loqT4,2023,accessed:2023- viaself-promptcalibration,”2023. 11-13. [85] ——,“Aprobabilisticfluctuationbasedmembershipinferenceat- [106] D.Hernandez,T.Brown,T.Conerly,N.DasSarma,D.Drain,S.El- tackfordiffusionmodels,”2023. Showk,N.Elhage,Z.Hatfield-Dodds,T.Henighan,T.Humeetal., [86] P.Ganesh,H.Chang,M.Strobel,andR.Shokri,“Ontheimpact “Scalinglawsandinterpretabilityoflearningfromrepeateddata,” ofmachinelearningrandomnessongroupfairness,”inProceedings arXivpreprintarXiv:2205.10487,2022. of the 2023 ACM Conference on Fairness, Accountability, and [107] B.Hettwer,S.Gehrer,andT.Güneysu,“Applicationsofmachine Transparency,2023,pp.1789–1800. learningtechniquesinside-channelattacks:asurvey,”Journalof [87] C.A.Gao,F.M.Howard,N.S.Markov,E.C.Dyer,S.Ramesh, CryptographicEngineering,vol.10,pp.135–162,2020. Y.Luo,andA.T.Pearson,“Comparingscientificabstractsgener- [108] X. Hou, Y. Zhao, Y. Liu, Z. Yang, K. Wang, L. Li, X. Luo, atedbychatgpttooriginalabstractsusinganartificialintelligence D.Lo,J.Grundy,andH.Wang,“Largelanguagemodelsforsoft- outputdetector,plagiarismdetector,andblindedhumanreviewers,” ware engineering: A systematic literature review,” arXiv preprint BioRxiv,pp.2022–12,2022. arXiv:2308.10620,2023. [88] S.Gehman,S.Gururangan,M.Sap,Y.Choi,andN.A.Smith,“Re- [109] J.Hu,Q.Zhang,andH.Yin,“Augmentinggreyboxfuzzingwith altoxicityprompts:Evaluatingneuraltoxicdegenerationinlanguage generativeai,”arXivpreprintarXiv:2306.06782,2023. models,”arXivpreprintarXiv:2009.11462,2020. [110] S.Hu,T.Huang,F.İlhan,S.F.Tekin,andL.Liu,“Largelanguage [89] K. Greshake, S. Abdelnabi, S. Mishra, C. Endres, T. Holz, and model-powered smart contract vulnerability detection: New M.Fritz,“Morethanyou’veaskedfor:Acomprehensiveanalysis perspectives,” arXiv preprint arXiv:2310.01152, 2023, 10 pages. of novel prompt injection threats to application-integrated large [Online].Available:https://doi.org/10.48550/arXiv.2310.01152 languagemodels,”arXivpreprintarXiv:2302.12173,2023. [111] D. Huang, Q. Bu, J. Zhang, X. Xie, J. Chen, and H. Cui, “Bias [90] Q. Gu, “Llm-based code generation method for golang compiler assessment and mitigation in llm-based code generation,” arXiv testing,”2023. preprintarXiv:2309.14345,2023. [91] Z.Gu,B.Zhu,G.Zhu,Y.Chen,M.Tang,andJ.Wang,“Anoma- [112] H. Huang, W. Luo, G. Zeng, J. Weng, Y. Zhang, and A. Yang, lygpt: Detecting industrial anomalies using large vision-language “Damia:leveragingdomainadaptationasadefenseagainstmem- models,”arXivpreprintarXiv:2308.15366,2023. bershipinferenceattacks,”IEEETransactionsonDependableand [92] M. Gupta, C. Akiri, K. Aryal, E. Parker, and L. Praharaj, “From SecureComputing,vol.19,no.5,pp.3183–3199,2021. chatgpttothreatgpt:Impactofgenerativeaiincybersecurityand [113] J. Huang, H. Shao, and K. C.-C. Chang, “Are large pre-trained privacy,”IEEEAccess,2023. languagemodelsleakingyourpersonalinformation?”arXivpreprint [93] M.U.Hadi,R.Qureshi,A.Shah,M.Irfan,A.Zafar,M.Shaikh, arXiv:2205.12628,2022. N. Akhtar, J. Wu, and S. Mirjalili, “A survey on large language [114] V.M.IgureandR.D.Williams,“Taxonomiesofattacksandvul- models:Applications,challenges,limitations,andpracticalusage,” nerabilitiesincomputersystems,”IEEECommunicationsSurveys TechRxiv,2023. &Tutorials,vol.10,no.1,pp.6–19,2008. [94] A.HappeandJ.Cito,“Gettingpwn’dbyai:Penetrationtestingwith [115] U.Iqbal,T.Kohno,andF.Roesner,“Llmplatformsecurity:Apply- |
largelanguagemodels,”arXivpreprintarXiv:2308.00121,2023. ingasystematicevaluationframeworktoopenai’schatgptplugins,” [95] A.Happe,A.Kaplan,andJ.Cito,“Evaluatingllmsforprivilege- 2023. escalationscenarios,”2023. [116] M.IvgiandJ.Berant,“Achievingmodelrobustnessthroughdiscrete [96] J. Hayes, L. Melis, G. Danezis, and E. De Cristofaro, “Logan: adversarialtraining,”arXivpreprintarXiv:2104.05062,2021. Membership inference attacks against generative models,” arXiv [117] N.Jain,A.Schwarzschild,Y.Wen,G.Somepalli,J.Kirchenbauer, preprintarXiv:1705.07663,2017. P.-y.Chiang,M.Goldblum,A.Saha,J.Geiping,andT.Goldstein, [97] J.Hazell,“Largelanguagemodelscanbeusedtoeffectivelyscale “Baselinedefensesforadversarialattacksagainstalignedlanguage spearphishingcampaigns,”2023. models,”arXivpreprintarXiv:2309.00614,2023. [98] J.HeandM.Vechev,“Largelanguagemodelsforcode:Security [118] R.Jain,N.Gervasoni,M.Ndhlovu,andS.Rawat,“Acodecentric hardeningandadversarialtesting,”ICML2023WorkshopDeploy- evaluationofc/c++vulnerabilitydatasetsfordeeplearningbased ableGenerativeAI, 2023, keywords: large language models, code vulnerabilitydetectiontechniques,”inProceedingsofthe16thIn- generation,security,prompttuning. novationsinSoftwareEngineeringConference,2023,pp.1–10. [99] ——, “Large language models for code: Security hardening and [119] S.JamalandH.Wimmer,“Animprovedtransformer-basedmodel adversarial testing,” in Proceedings of the 2023 ACM SIGSAC for detecting phishing, spam, and ham: A large language model ConferenceonComputerandCommunicationsSecurity,2023,pp. approach,”2023. 1865–1879. [120] B.Jayaraman,L.Wang,K.Knipmeyer,Q.Gu,andD.Evans,“Re- [100] X.He,S.Zannettou,Y.Shen,andY.Zhang,“Youonlypromptonce: visitingmembershipinferenceunderrealisticassumptions,”arXiv Onthecapabilitiesofpromptlearningonlargelanguagemodelsto preprintarXiv:2005.10881,2020. tackletoxiccontent,”arXivpreprintarXiv:2308.05596,2023. Yifan Yao et al.: PreprintsubmittedtoElsevier Page 18 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly [121] H. Jiang, P. He, W. Chen, X. Liu, J. Gao, and T. Zhao, “Smart: [142] T.Koide,N.Fukushi,H.Nakano,andD.Chiba,“Detectingphishing Robust and efficient fine-tuning for pre-trained natural language sitesusingchatgpt,”arXivpreprintarXiv:2306.05816,2023. modelsthroughprincipledregularizedoptimization,”arXivpreprint [143] F. Kong, J. Duan, R. Ma, H. Shen, X. Zhu, X. Shi, and K. Xu, arXiv:1911.03437,2019. “Anefficientmembershipinferenceattackforthediffusionmodel [122] J.Jiang,X.Liu,andC.Fan,“Low-parameterfederatedlearningwith byproximalinitialization,”arXivpreprintarXiv:2305.18355,2023. largelanguagemodels,”arXivpreprintarXiv:2307.13896,2023. [144] H.Kotek,R.Dockum,andD.Sun,“Genderbiasandstereotypes [123] N.Jiang,K.Liu,T.Lutellier,andL.Tan,“Impactofcodelanguage inlargelanguagemodels,”inProceedingsofTheACMCollective modelsonautomatedprogramrepair,”2023. IntelligenceConference,2023,pp.12–24. [124] S. Jiang, X. Chen, and R. Tang, “Prompt packer: Deceiving [145] W. Kuang, B. Qian, Z. Li, D. Chen, D. Gao, X. Pan, Y. Xie, llmsthroughcompositionalinstructionwithhiddenattacks,”arXiv Y.Li,B.Ding,andJ.Zhou,“Federatedscope-llm:Acomprehensive preprintarXiv:2310.10077,2023. packageforfine-tuninglargelanguagemodelsinfederatedlearning,” [125] M.Jin,S.Shahriar,M.Tufano,X.Shi,S.Lu,N.Sundaresan,and arXivpreprintarXiv:2309.00363,2023. A.Svyatkovskiy,“Inferfix:End-to-endprogramrepairwithllms,” [146] K. Kumari, A. Pegoraro, H. Fereidooni, and A.-R. Sadeghi, 2023. “Demasq: Unmasking the chatgpt wordsmith,” arXiv preprint [126] X.Jin,J.Larson,W.Yang,andZ.Lin,“Binarycodesummariza- arXiv:2311.05019,2023. tion:Benchmarkingchatgpt/gpt-4andotherlargelanguagemodels,” [147] ——,“Demasq:Unmaskingthechatgptwordsmith,”inProceedings 2023. ofthe31thAnnualNetworkandDistributedSystemSecuritySympo- [127] X. Jin, K. Pei, J. Y. Won, and Z. Lin, “Symlm: Predicting func- sium(NDSS’24),2024. tion names in stripped binaries via context-sensitive execution- [148] K.Kurita,P.Michel,andG.Neubig,“Weightpoisoningattackson awarecodeembeddings,”inProceedingsofthe2022ACMSIGSAC pre-trainedmodels,”arXivpreprintarXiv:2004.06660,2020. ConferenceonComputerandCommunicationsSecurity,2022,pp. [149] H.Kwon,M.Sim,G.Song,M.Lee,andH.Seo,“Novelapproach 1631–1645. tocryptographyimplementationusingchatgpt,”CryptologyePrint |
[128] C.Joshi,U.K.Singh,andK.Tarey,“Areviewontaxonomiesof Archive, Paper 2023/606, 2023, https://eprint.iacr.org/2023/606. attacksandvulnerabilityincomputerandnetworksystem,”Interna- [Online].Available:https://eprint.iacr.org/2023/606 tionalJournal,vol.5,no.1,2015. [150] J.E.Laird,C.Lebiere,andP.S.Rosenbloom,“Astandardmodel [129] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, and of the mind: Toward a common computational framework across T.Mikolov,“Fasttext.zip:Compressingtextclassificationmodels,” artificialintelligence,cognitivescience,neuroscience,androbotics,” arXivpreprintarXiv:1612.03651,2016. AiMagazine,vol.38,no.4,pp.13–26,2017. [130] M.Juuti,S.Szyller,S.Marchal,andN.Asokan,“Prada:protecting [151] T.LangfordandB.Payne,“Phishingfaster:Implementingchatgpt againstdnnmodelstealingattacks,”in2019IEEEEuropeanSym- intophishingcampaigns,”inProceedingsoftheFutureTechnologies posiumonSecurityandPrivacy(EuroS&P). IEEE,2019,pp.512– Conference. Springer,2023,pp.174–187. 527. [152] H. Laurençon, L. Saulnier, T. Wang, C. Akiki, A. Villanova del [131] S.Kadavath,T.Conerly,A.Askell,T.Henighan,D.Drain,E.Perez, Moral,T.LeScao,L.VonWerra,C.Mou,E.GonzálezPonferrada, N.Schiefer,Z.Hatfield-Dodds,N.DasSarma,E.Tran-Johnsonetal., H.Nguyenetal.,“Thebigsciencerootscorpus:A1.6tbcomposite “Languagemodels(mostly)knowwhattheyknow,”arXivpreprint multilingualdataset,”AdvancesinNeuralInformationProcessing arXiv:2207.05221,2022. Systems,vol.35,pp.31809–31826,2022. [132] N. Kandpal, M. Jagielski, F. Tramèr, and N. Carlini, “Backdoor [153] K.Lee,D.Ippolito,A.Nystrom,C.Zhang,D.Eck,C.Callison- attacksforin-contextlearningwithlanguagemodels,”arXivpreprint Burch,andN.Carlini,“Deduplicatingtrainingdatamakeslanguage arXiv:2307.14692,2023. modelsbetter,”arXivpreprintarXiv:2107.06499,2021. [133] N.Kandpal,K.Pillutla,A.Oprea,P.Kairouz,C.A.Choquette-Choo, [154] T.Lee,S.Hong,J.Ahn,I.Hong,H.Lee,S.Yun,J.Shin,andG.Kim, andZ.Xu,“Userinferenceattacksonlargelanguagemodels,”2023. “Whowrotethiscode?watermarkingforcodegeneration,”2023. [134] N. Kandpal, E. Wallace, and C. Raffel, “Deduplicating training [155] J. A. Leite, O. Razuvayevskaya, K. Bontcheva, and C. Scarton, datamitigatesprivacyrisksinlanguagemodels,”inInternational “Detectingmisinformationwithllm-predictedcredibilitysignalsand Conference on Machine Learning. PMLR, 2022, pp. 10697– weaksupervision,”arXivpreprintarXiv:2309.07601,2023. 10707. [156] C. Lemieux, J. P. Inala, S. K. Lahiri, and S. Sen, “Codamosa: [135] D.Kang,X.Li,I.Stoica,C.Guestrin,M.Zaharia,andT.Hashimoto, Escapingcoverageplateausintestgenerationwithpre-trainedlarge “Exploitingprogrammaticbehaviorofllms:Dual-usethroughstan- languagemodels,”inInternationalconferenceonsoftwareengineer- dardsecurityattacks,”arXivpreprintarXiv:2302.05733,2023. ing(ICSE),2023. [136] S. Kang, J. Yoon, and S. Yoo, “Llm lies: Hallucinations are not [157] J.Leskovec,A.Rajaraman,andJ.D.Ullman,Miningofmassivedata bugs,butfeaturesasadversarialexamples,”in2023IEEE/ACM45th sets. Cambridgeuniversitypress,2020. InternationalConferenceonSoftwareEngineering(ICSE). IEEE, [158] C. Li, Z. Song, W. Wang, and C. Yang, “A theoretical insight 2023. intoattackanddefenseofgradientleakageintransformer,”arXiv [137] S. Kariyappa, A. Prakash, and M. K. Qureshi, “Maze: Data-free preprintarXiv:2311.13624,2023. model stealing attack using zeroth-order gradient estimation,” in [159] H.Li,D.Guo,W.Fan,M.Xu,andY.Song,“Multi-stepjailbreaking ProceedingsoftheIEEE/CVFConferenceonComputerVisionand privacyattacksonchatgpt,”arXivpreprintarXiv:2304.05197,2023. PatternRecognition,2021,pp.13814–13823. [160] H. Li, Y. Song, and L. Fan, “You don’t know my favorite color: [138] M. Karpinska and M. Iyyer, “Large language models effectively Preventingdialoguerepresentationsfromrevealingspeakers’private leveragedocument-levelcontextforliterarytranslation,butcritical personas,” in Proceedings of the 2022 Conference of the North errorspersist,”arXivpreprintarXiv:2304.03245,2023. AmericanChapteroftheAssociationforComputationalLinguistics: [139] M.KhalilandE.Er,“Willchatgptgetyoucaught?rethinkingof Human Language Technologies, M. Carpuat, M.-C. de Marneffe, plagiarismdetection,”arXivpreprintarXiv:2302.04335,2023. and I. V. Meza Ruiz, Eds. Seattle, United States: Association [140] J.Kirchenbauer,J.Geiping,Y.Wen,M.Shu,K.Saifullah,K.Kong, forComputationalLinguistics,Jul.2022,pp.5858–5870.[Online]. |
K. Fernando, A. Saha, M. Goldblum, and T. Goldstein, “On the Available:https://aclanthology.org/2022.naacl-main.429 reliabilityofwatermarksforlargelanguagemodels,”arXivpreprint [161] J.Li,Y.Yang,Z.Wu,V.Vydiswaran,andC.Xiao,“Chatgptasan arXiv:2306.04634,2023. attacktool:Stealthytextualbackdoorattackviablackboxgenerative [141] C.Koch,“Iusedgpt-3tofind213securityvulnerabilitiesinasingle modeltrigger,”arXivpreprintarXiv:2304.14475,2023. codebase,”http://surl.li/ncjvo,2023. [162] J. Li, P. H. Meland, J. S. Notland, A. Storhaug, and J. H. Tysse, “Evaluatingtheimpactofchatgptonexercisesofasoftwaresecurity Yifan Yao et al.: PreprintsubmittedtoElsevier Page 19 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly course,”2023. [182] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, [163] L.LiandX.Qiu,“Token-awarevirtualadversarialtraininginnatural “Towards deep learning models resistant to adversarial attacks,” languageunderstanding,”inProceedingsoftheAAAIConferenceon arXivpreprintarXiv:1706.06083,2017. ArtificialIntelligence,vol.35,no.9,2021,pp.8410–8418. [183] S. Mahloujifar, H. A. Inan, M. Chase, E. Ghosh, and [164] L.Li,D.Song,andX.Qiu,“Textadversarialpurificationasdefense M. Hasegawa, “Membership inference on word embedding and againstadversarialattacks,”arXivpreprintarXiv:2203.14207,2022. beyond,” ArXiv, vol. abs/2106.11384, 2021. [Online]. Available: [165] X. Li, F. Tramer, P. Liang, and T. Hashimoto, “Large language https://api.semanticscholar.org/CorpusID:235593386 modelscanbestrongdifferentiallyprivatelearners,”arXivpreprint [184] J.Majmudar,C.Dupuy,C.Peris,S.Smaili,R.Gupta,andR.Zemel, arXiv:2110.05679,2021. “Differentially private decoding in large language models,” arXiv [166] Y.Li,Z.Tan,andY.Liu,“Privacy-preservingprompttuningfor preprintarXiv:2205.13621,2022. largelanguagemodelservices,”arXivpreprintarXiv:2305.06212, [185] J.Marshall,“Whateffectsdolargelanguagemodelshaveoncyber- 2023. security,”2023. [167] Y.Li,S.Liu,K.Chen,X.Xie,T.Zhang,andY.Liu,“Multi-target [186] A. B. Mbakwe, I. Lourentzou, L. A. Celi, O. J. Mechanic, and backdoorattacksforcodepre-trainedmodels,”2023. A.Dagan,“Chatgptpassingusmleshinesaspotlightontheflaws [168] Z. Li, B. Peng, P. He, and X. Yan, “Evaluating the instruction- ofmedicaleducation,”p.e0000205,2023. following robustness of large language models to prompt [187] T.McIntosh,T.Liu,T.Susnjak,H.Alavizadeh,A.Ng,R.Nowrozy, injection,” 2023. [Online]. Available: https://api.semanticscholar. andP.Watters,“Harnessinggpt-4forgenerationofcybersecurity org/CorpusID:261048972 grcpolicies:Afocusonransomwareattackmitigation,”Computers [169] Z. Li, C. Wang, S. Wang, and C. Gao, “Protecting intellectual &Security,vol.134,p.103424,2023. property of large language model-based code generation apis via [188] N.Meade,E.Poole-Dayan,andS.Reddy,“Anempiricalsurveyof watermarks,”inProceedingsofthe2023ACMSIGSACConference theeffectivenessofdebiasingtechniquesforpre-trainedlanguage onComputerandCommunicationsSecurity,2023,pp.2336–2350. models,”arXivpreprintarXiv:2110.08527,2021. [170] P.Liang,R.Bommasani,T.Lee,D.Tsipras,D.Soylu,M.Yasunaga, [189] M.MéndezRealandR.Salvador,“Physicalside-channelattackson Y.Zhang,D.Narayanan,Y.Wu,A.Kumar,B.Newman,B.Yuan, embeddedneuralnetworks:Asurvey,”AppliedSciences,vol.11, B.Yan,C.Zhang,C.Cosgrove,C.D.Manning,C.Ré,D.Acosta- no.15,p.6790,2021. Navas,D.A.Hudson,E.Zelikman,E.Durmus,F.Ladhak,F.Rong, [190] R.Meng,M.Mirchev,M.Böhme,andA.Roychoudhury,“Large H.Ren,H.Yao,J.Wang,K.Santhanam,L.Orr,L.Zheng,M.Yuk- language model guided protocol fuzzing,” in Proceedings of the sekgonul,M.Suzgun,N.Kim,N.Guha,N.Chatterji,O.Khattab, 31thAnnualNetworkandDistributedSystemSecuritySymposium P.Henderson,Q.Huang,R.Chi,S.M.Xie,S.Santurkar,S.Ganguli, (NDSS’24),2024. T.Hashimoto,T.Icard,T.Zhang,V.Chaudhary,W.Wang,X.Li, [191] F. Mireshghallah, K. Goyal, A. Uniyal, T. Berg-Kirkpatrick, and Y.Mai,Y.Zhang,andY.Koreeda,“Holisticevaluationoflanguage R.Shokri,“Quantifyingprivacyrisksofmaskedlanguagemodels models,”2023. usingmembershipinferenceattacks,”2022. [171] S.Lin,J.Hilton,andO.Evans,“Truthfulqa:Measuringhowmodels [192] F. Mireshghallah, A. Uniyal, T. Wang, D. Evans, and T. Berg- |
mimichumanfalsehoods,”arXivpreprintarXiv:2109.07958,2021. Kirkpatrick, “An empirical analysis of memorization in fine- [172] B.Liu,B.Xiao,X.Jiang,S.Cen,X.He,W.Douetal.,“Adver- tuned autoregressive language models,” in Proceedings of the sarialattacksonlargelanguagemodel-basedsystemandmitigating 2022 Conference on Empirical Methods in Natural Language strategies:Acasestudyonchatgpt,”SecurityandCommunication Processing,Y.Goldberg,Z.Kozareva,andY.Zhang,Eds. Abu Networks,vol.2023,2023. Dhabi, United Arab Emirates: Association for Computational [173] C.Liu,F.Zhao,L.Qing,Y.Kang,C.Sun,K.Kuang,andF.Wu, Linguistics, Dec. 2022, pp. 1816–1826. [Online]. Available: “Achinesepromptattackdatasetforllmswithevilcontent,”arXiv https://aclanthology.org/2022.emnlp-main.119 preprintarXiv:2309.11830,2023. [193] W.Mo,J.Xu,Q.Liu,J.Wang,J.Yan,C.Xiao,andM.Chen,“Test- [174] P.Liu,C.Sun,Y.Zheng,X.Feng,C.Qin,Y.Wang,Z.Li,andL.Sun, timebackdoormitigationforblack-boxlargelanguagemodelswith “Harnessingthepowerofllmtosupportbinarytaintanalysis,”2023. defensivedemonstrations,”arXivpreprintarXiv:2311.09763,2023. [175] T.Liu,Z.Deng,G.Meng,Y.Li,andK.Chen,“Demystifyingrce [194] A.Monje,A.Monje,R.A.Hallman,andG.Cybenko,“Beingabad vulnerabilitiesinllm-integratedapps,”2023. influenceonthekids:Malwaregenerationinlessthanfiveminutes [176] X.Liu,H.Cheng,P.He,W.Chen,Y.Wang,H.Poon,andJ.Gao, usingchatgpt,”2023. “Adversarial training for large neural language models,” arXiv [195] D. Moskovskiy, D. Dementieva, and A. Panchenko, “Exploring preprintarXiv:2004.08994,2020. cross-lingual text detoxification with large multilingual language [177] X.Liu,N.Xu,M.Chen,andC.Xiao,“Autodan:Generatingstealthy models.”inProceedingsofthe60thAnnualMeetingoftheAsso- jailbreakpromptsonalignedlargelanguagemodels,”arXivpreprint ciationforComputationalLinguistics:StudentResearchWorkshop, arXiv:2310.04451,2023. 2022,pp.346–354. [178] Y. Liu, G. Deng, Y. Li, K. Wang, T. Zhang, Y. Liu, H. Wang, [196] M.Mozes,X.He,B.Kleinberg,andL.D.Griffin,“Useofllmsfor Y.Zheng,andY.Liu,“Promptinjectionattackagainstllm-integrated illicitpurposes:Threats,preventionmeasures,andvulnerabilities,” applications,”arXivpreprintarXiv:2306.05499,2023. 2023. [179] C.K.Lo,“Whatistheimpactofchatgptoneducation?arapidreview [197] M.Nair,R.Sadhukhan,andD.Mukhopadhyay,“Generatingsecure oftheliterature,”EducationSciences,vol.13,no.4,p.410,2023. hardware using chatgpt resistant to cwes,” Cryptology ePrint [180] V. Logacheva, D. Dementieva, S. Ustyantsev, D. Moskovskiy, Archive, Paper 2023/212, 2023, https://eprint.iacr.org/2023/212. D.Dale,I.Krotova,N.Semenov,andA.Panchenko,“Paradetox: [Online].Available:https://eprint.iacr.org/2023/212 Detoxificationwithparalleldata,”inProceedingsofthe60thAnnual [198] S. Narang and A. Chowdhery, “Pathways language MeetingoftheAssociationforComputationalLinguistics(Volume model (palm): Scaling to 540 billion parameters for 1:LongPapers),2022,pp.6804–6818. breakthrough performance,” https://blog.research.google/2022/ [181] L. Lyu, X. He, and Y. Li, “Differentially private representation 04/pathways-language-model-palm-scaling-to.html, apr 2022, for NLP: Formal guarantee and an empirical study on accessed:2023-11-13. privacy and fairness,” in Findings of the Association for [199] A. Ni, S. Iyer, D. Radev, V. Stoyanov, W.-t. Yih, S. Wang, and Computational Linguistics: EMNLP 2020, T. Cohn, Y. He, X.V.Lin,“Lever:Learningtoverifylanguage-to-codegeneration and Y. Liu, Eds. Online: Association for Computational withexecution,”inInternationalConferenceonMachineLearning. Linguistics, Nov. 2020, pp. 2355–2365. [Online]. Available: PMLR,2023,pp.26106–26128. https://aclanthology.org/2020.findings-emnlp.213 Yifan Yao et al.: PreprintsubmittedtoElsevier Page 20 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly [200] S. Nikolic, S. Daniel, R. Haque, M. Belkina, G. M. Hassan, IEEE,2020,pp.460–465. S. Grundy, S. Lyden, P. Neal, and C. Sandison, “Chatgpt versus [218] M. D. Purba, A. Ghosh, B. J. Radford, and B. Chu, “Software engineering education assessment: a multidisciplinary and multi- vulnerabilitydetectionusinglargelanguagemodels,”in2023IEEE institutionalbenchmarkingandanalysisofthisgenerativeartificial 34thInternationalSymposiumonSoftwareReliabilityEngineering intelligencetooltoinvestigateassessmentintegrity,”EuropeanJour- Workshops(ISSREW),2023,pp.112–119. nalofEngineeringEducation,pp.1–56,2023. [219] A. Qammar, H. Wang, J. Ding, A. Naouri, M. Daneshmand, and |
[201] D. Noever, “Can large language models find and fix vulnerable H.Ning,“Chatbotstochatgptinacybersecurityspace:Evolution, software?” arXiv preprint arXiv:2308.10345, 2023. [Online]. vulnerabilities, attacks, challenges, and future recommendations,” Available:https://doi.org/10.48550/arXiv.2308.10345 2023. [202] C. Novelli, F. Casolari, A. Rotolo, M. Taddeo, and L. Floridi, [220] F. Qi, Y. Chen, M. Li, Y. Yao, Z. Liu, and M. Sun, “Onion: A “Takingairisksseriously:anewassessmentmodelfortheaiact,” simpleandeffectivedefenseagainsttextualbackdoorattacks,”arXiv AI&SOCIETY,pp.1–5,2023. preprintarXiv:2011.10369,2020. [203] OpenAI,“Gpt-4technicalreport,”https://arxiv.org/abs/2303.08774, [221] J.Qi,S.Huang,Z.Luan,C.Fung,H.Yang,andD.Qian,“Loggpt: 2023. Exploringchatgptforlog-basedanomalydetection,”arXivpreprint [204] N. Ousidhoum, X. Zhao, T. Fang, Y. Song, and D.-Y. Yeung, arXiv:2309.01189,2023. “Probing toxic content in large pre-trained language models,” in [222] S. Qin, F. Hu, Z. Ma, B. Zhao, T. Yin, and C. Zhang, “Nsfuzz: Proceedingsofthe59thAnnualMeetingoftheAssociationforCom- Towardsefficientandstate-awarenetworkservicefuzzing,”ACM putationalLinguisticsandthe11thInternationalJointConference TransactionsonSoftwareEngineeringandMethodology,2023. onNaturalLanguageProcessing(Volume1:LongPapers),2021, [223] M.A.Quidwai,C.Li,andP.Dube,“Beyondblackboxai-generated pp.4262–4274. plagiarism detection: From sentence to document level,” arXiv [205] L.Ouyang,J.Wu,X.Jiang,D.Almeida,C.Wainwright,P.Mishkin, preprintarXiv:2306.08122,2023. C.Zhang,S.Agarwal,K.Slama,A.Rayetal.,“Traininglanguage [224] M. Raeini, “Privacy-preserving large language models (ppllms),” modelstofollowinstructionswithhumanfeedback,”Advancesin AvailableatSSRN4512071,2023. NeuralInformationProcessingSystems,vol.35,pp.27730–27744, [225] C.Raffel,N.Shazeer,A.Roberts,K.Lee,S.Narang,M.Matena, 2022. Y. Zhou, W. Li, and P. J. Liu, “Exploring the limits of transfer [206] OWASP. (2023, Oct) OWASP Top 10 for learningwithaunifiedtext-to-texttransformer,”2023. LLM. [Online]. Available: https://owasp.org/ [226] M. M. Rahman and Y. Watanobe, “Chatgpt for education and www-project-top-10-for-large-language-model-applications/ research:Opportunities,threats,andstrategies,”AppliedSciences, assets/PDF/OWASP-Top-10-for-LLMs-2023-v1_1.pdf vol.13,no.9,p.5783,2023. [207] Y.M.PaPa,S.Tanizaki,T.Kou,M.VanEeten,K.Yoshioka,and [227] J.RandoandF.Tramèr,“Universaljailbreakbackdoorsfrompoi- T.Matsumoto,“Anattacker’sdream?exploringthecapabilitiesof sonedhumanfeedback,”arXivpreprintarXiv:2311.14455,2023. chatgptfordevelopingmalware,”inProceedingsofthe16thCyber [228] K.Renaud,M.Warkentin,andG.Westerman,FromChatGPTto SecurityExperimentationandTestWorkshop,2023,pp.10–18. HackGPT:MeetingtheCybersecurityThreatofGenerativeAI. MIT [208] X.Pan,M.Zhang,S.Ji,andM.Yang,“Privacyrisksofgeneral- SloanManagementReview,2023. purposelanguagemodels,”in2020IEEESymposiumonSecurity [229] S. A. Research, “Introducing a conditional transformer language andPrivacy(SP). IEEE,2020,pp.1314–1331. model for controllable generation,” https://shorturl.at/azQW6, apr [209] S.Paria,A.Dasgupta,andS.Bhunia,“Divas:Anllm-basedend- 2023,accessed:2023-11-13. to-endframeworkforsocsecurityanalysisandpolicy-basedprotec- [230] A.Robey,E.Wong,H.Hassani,andG.J.Pappas,“Smoothllm:De- tion,”arXivpreprintarXiv:2308.06932,2023. fendinglargelanguagemodelsagainstjailbreakingattacks,”arXiv [210] R.Parikh,C.Dupuy,andR.Gupta,“Canaryextractioninnatural preprintarXiv:2310.03684,2023. languageunderstandingmodels,”arXivpreprintarXiv:2203.13920, [231] O.J.Romero,J.Zimmerman,A.Steinfeld,andA.Tomasic,“Syn- 2022. ergisticintegrationoflargelanguagemodelsandcognitivearchi- [211] H.Pearce,B.Tan,B.Ahmad,R.Karri,andB.Dolan-Gavitt,“Ex- tectures for robust ai: An exploratory analysis,” arXiv preprint aminingzero-shotvulnerabilityrepairwithlargelanguagemodels,” arXiv:2308.09830,2023. in2023IEEESymposiumonSecurityandPrivacy(SP),2023,pp. [232] R.J.Rosyanafi,G.D.Lestari,H.Susilo,W.Nusantara,andF.Nu- 2339–2356. raini, “The dark side of innovation: Understanding research mis- [212] H.Pearce,B.Tan,P.Krishnamurthy,F.Khorrami,R.Karri,and conductwithchatgptinnonformaleducationstudiesatuniversitas B.Dolan-Gavitt,“Popquiz!canalargelanguagemodelhelpwith negerisurabaya,”JurnalReviewPendidikanDasar:JurnalKajian |
reverseengineering?”2022. PendidikandanHasilPenelitian,vol.9,no.3,pp.220–228,2023. [213] G. Penedo, Q. Malartic, D. Hesslow, R. Cojocaru, A. Cappelli, [233] S.Sakaoglu,“Kartal:Webapplicationvulnerabilityhuntingusing H.Alobeidli,B.Pannier,E.Almazrouei,andJ.Launay,“There- large language models,” Master’s thesis, Master’s Programme in finedwebdatasetforfalconllm:outperformingcuratedcorporawith SecurityandCloudComputing(SECCLO),August2023.[Online]. web data, and web data only,” arXiv preprint arXiv:2306.01116, Available:http://urn.fi/URN:NBN:fi:aalto-202308275121 2023. [234] G. Sandoval, H. Pearce, T. Nys, R. Karri, S. Garg, and [214] C.Peris,C.Dupuy,J.Majmudar,R.Parikh,S.Smaili,R.Zemel,and B. Dolan-Gavitt, “Lost at c: A user study on the security R.Gupta,“Privacyinthetimeoflanguagemodels,”inProceedings implications of large language model code assistants,” in oftheSixteenthACMInternationalConferenceonWebSearchand USENIX Security 2023, 2023, for associated dataset see [this DataMining,2023,pp.1291–1292. URL](https://arxiv.org/abs/2208.09727). 18 pages, 12 figures. G. [215] M.Perkins,“Academicintegrityconsiderationsofailargelanguage SandovalandH.Pearcecontributedequallytothiswork.[Online]. modelsinthepost-pandemicera:Chatgptandbeyond,”Journalof Available:https://arxiv.org/abs/2208.09727 UniversityTeaching&LearningPractice,vol.20,no.2,p.07,2023. [235] Sapling,“Llmindex,”https://sapling.ai/llm/index,2023. [216] A. Pfitzmann and M. Hansen, “A terminology for talking about [236] A.Sarabi,T.Yin,andM.Liu,“Anllm-basedframeworkforfin- privacy by data minimization: Anonymity, unlinkability, unde- gerprintinginternet-connecteddevices,”inProceedingsofthe2023 tectability, unobservability, pseudonymity, and identity manage- ACMonInternetMeasurementConference,2023,pp.478–484. ment,”2010. [237] M. Scanlon, F. Breitinger, C. Hargreaves, J.-N. Hilgert, and [217] V.-T.Pham,M.Böhme,andA.Roychoudhury,“Aflnet:agreybox J.Sheppard,“Chatgptfordigitalforensicinvestigation:Thegood, fuzzer for network protocols,” in 2020 IEEE 13th International thebad,andtheunknown,”ForensicScienceInternational:Digital ConferenceonSoftwareTesting,ValidationandVerification(ICST). Investigation,vol.46,p.301609,2023.[Online].Available:https: Yifan Yao et al.: PreprintsubmittedtoElsevier Page 21 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly //www.sciencedirect.com/science/article/pii/S266628172300121X for consumer choice: A randomized experiment,” arXiv preprint [238] M.Schäfer,S.Nadi,A.Eghbali,andF.Tip,“Adaptivetestgeneration arXiv:2307.03744,2023. using a large language model,” arXiv preprint arXiv:2302.06527, [260] R.Spreitzer,V.Moonsamy,T.Korak,andS.Mangard,“Systematic 2023. classificationofside-channelattacks:Acasestudyformobilede- [239] R.Schuster,C.Song,E.Tromer,andV.Shmatikov,“Youautocom- vices,”IEEEcommunicationssurveys&tutorials,vol.20,no.1,pp. pleteme:Poisoningvulnerabilitiesinneuralcodecompletion,”in 465–488,2017. 30thUSENIXSecuritySymposium(USENIXSecurity21),2021,pp. [261] R.Staab,M.Vero,M.Balunović,andM.Vechev,“Beyondmemo- 1559–1575. rization:Violatingprivacyviainferencewithlargelanguagemod- [240] L.Schwinn,D.Dobre,S.Günnemann,andG.Gidel,“Adversarial els,”2023. attacksanddefensesinlargelanguagemodels:Oldandnewthreats,” [262] K.Stephens,“Researcherstestlargelanguagemodelthatpreserves 2023. patientprivacy,”AXISImagingNews,2023. [241] G.Sebastian,“Dochatgptandotheraichatbotsposeacybersecurity [263] J. Su, T. Y. Zhuo, J. Mansurov, D. Wang, and P. Nakov, “Fake risk?:Anexploratorystudy,”InternationalJournalofSecurityand newsdetectorsarebiasedagainsttextsgeneratedbylargelanguage PrivacyinPervasiveComputing(IJSPPC),vol.15,no.1,pp.1–11, models,”arXivpreprintarXiv:2309.08674,2023. 2023. [264] N.Subramani,S.Luccioni,J.Dodge,andM.Mitchell,“Detecting [242] ——, “Privacy and data protection in chatgpt and other ai chat- personalinformationintrainingcorpora:ananalysis,”inProceed- bots:Strategiesforsecuringuserinformation,”AvailableatSSRN ingsofthe3rdWorkshoponTrustworthyNaturalLanguageProcess- 4454761,2023. ing(TrustNLP2023),2023,pp.208–220. [243] M.A.Shah,R.Sharma,H.Dhamyal,R.Olivier,A.Shah,D.Al- [265] M.Sullivan,A.Kelly,andP.McLaughlan,“Chatgptinhigheredu- harthi, H. T. Bukhari, M. Baali, S. Deshmukh, M. Kuhlmann cation:Considerationsforacademicintegrityandstudentlearning,” etal.,“Loft:Localproxyfine-tuningforimprovingtransferability 2023. |
ofadversarialattacksagainstlargelanguagemodel,”arXivpreprint [266] X.Sun,X.Li,Y.Meng,X.Ao,L.Lyu,J.Li,andT.Zhang,“De- arXiv:2310.04445,2023. fending against backdoor attacks in natural language generation,” [244] O. Shaikh, H. Zhang, W. Held, M. Bernstein, and D. Yang, “On in Proceedings of the AAAI Conference on Artificial Intelligence, secondthought,let’snotthinkstepbystep!biasandtoxicityinzero- vol.37,no.4,2023,pp.5257–5265. shotreasoning,”arXivpreprintarXiv:2212.08061,2022. [267] Y.Sun,J.He,S.Lei,L.Cui,andC.-T.Lu,“Med-mmhl:Amulti- [245] S.Shan,W.Ding,J.Passananti,H.Zheng,andB.Y.Zhao,“Prompt- modaldatasetfordetectinghuman-andllm-generatedmisinforma- specific poisoning attacks on text-to-image generative models,” tioninthemedicaldomain,”arXivpreprintarXiv:2306.08871,2023. arXivpreprintarXiv:2310.13828,2023. [268] Z.Sun,Y.Shen,Q.Zhou,H.Zhang,Z.Chen,D.Cox,Y.Yang, [246] K.Shao,J.Yang,Y.Ai,H.Liu,andY.Zhang,“Bddr:Aneffective and C. Gan, “Principle-driven self-alignment of language models defense against textual backdoor attacks,” Computers & Security, from scratch with minimal human supervision,” arXiv preprint vol.110,p.102433,2021. arXiv:2305.03047,2023. [247] E. Shayegani, M. A. A. Mamun, Y. Fu, P. Zaree, Y. Dong, and [269] Z.Talat,A.Névéol,S.Biderman,M.Clinciu,M.Dey,S.Longpre, N.Abu-Ghazaleh,“Surveyofvulnerabilitiesinlargelanguagemod- S.Luccioni,M.Masoud,M.Mitchell,D.Radevetal.,“Youreap elsrevealedbyadversarialattacks,”2023. whatyousow:Onthechallengesofbiasevaluationundermultilin- [248] X.Shen,Z.Chen,M.Backes,Y.Shen,andY.Zhang,“"doanything gualsettings,”inProceedingsofBigScienceEpisode#5–Workshop now":Characterizingandevaluatingin-the-wildjailbreakprompts onChallenges&PerspectivesinCreatingLargeLanguageModels, onlargelanguagemodels,”arXivpreprintarXiv:2308.03825,2023. 2022,pp.26–41. [249] T. Shi, K. Chen, and J. Zhao, “Safer-instruct: Aligning lan- [270] W.Tann,Y.Liu,J.H.Sim,C.M.Seah,andE.-C.Chang,“Using guage models with automated preference data,” arXiv preprint largelanguagemodelsforcybersecuritycapture-the-flagchallenges arXiv:2311.08685,2023. andcertificationquestions,”2023. [250] R.Shokri,M.Stronati,C.Song,andV.Shmatikov,“Membership [271] P.Taveekitworachai,F.Abdullah,M.C.Gursesli,M.F.Dewantoro, inferenceattacksagainstmachinelearningmodels,”in2017IEEE S.Chen,A.Lanata,A.Guazzini,andR.Thawonmas,“Breakingbad: symposiumonsecurityandprivacy(SP). IEEE,2017,pp.3–18. Unravelinginfluencesandrisksofuserinputstochatgptforgame [251] M. Shu, J. Wang, C. Zhu, J. Geiping, C. Xiao, and T. Gold- storygeneration,”inInternationalConferenceonInteractiveDigital stein,“Ontheexploitabilityofinstructiontuning,”arXivpreprint Storytelling. Springer,2023,pp.285–296. arXiv:2306.17194,2023. [272] Z.Tay,“Usingartificialintelligencetoaugmentbugfuzzing,”2023. [252] I. Shumailov, Y. Zhao, D. Bates, N. Papernot, R. Mullins, and [273] E. ThankGod Chinonso, “The impact of chatgpt on privacy and R.Anderson,“Spongeexamples:Energy-latencyattacksonneural dataprotectionlaws,”TheImpactofChatGPTonPrivacyandData networks,”2021. ProtectionLaws(April16,2023),2023. [253] M. L. Siddiq, J. Santos, R. H. Tanvir, N. Ulfat, F. A. Rifat, and [274] A.J.Thirunavukarasu,D.S.J.Ting,K.Elangovan,L.Gutierrez, V.C.Lopes,“Exploringtheeffectivenessoflargelanguagemodels T.F.Tan,andD.S.W.Ting,“Largelanguagemodelsinmedicine,” ingeneratingunittests,”arXivpreprintarXiv:2305.00418,2023. Naturemedicine,vol.29,no.8,pp.1930–1940,2023. [254] M.L.SiddiqandJ.C.S.Santos,“Generateandpray:Usingsallms [275] M.Tong,K.Chen,Y.Qi,J.Zhang,W.Zhang,andN.Yu,“Privinfer: to evaluate the security of llm generated code,” 2023, 16 pages. Privacy-preservinginferenceforblack-boxlargelanguagemodel,” [Online].Available:https://arxiv.org/abs/2311.00889 2023. [255] M.Sladić,V.Valeros,C.Catania,andS.Garcia,“Llmintheshell: [276] J.Torres,“Navigatingthellmlandscape:Acomparativeanalysisof Generativehoneypots,”2023. leadinglargelanguagemodels,”http://surl.li/ncjvc,2023. [256] V.Smith,A.S.Shamsabadi,C.Ashurst,andA.Weller,“Identifying [277] H.Touvron,L.Martin,K.Stone,P.Albert,A.Almahairi,Y.Babaei, and mitigating privacy risks stemming from language models: A N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale et al., “Llama |
survey,”2023. 2: Open foundation and fine-tuned chat models,” arXiv preprint [257] D.Sobania,M.Briesch,C.Hanna,andJ.Petke,“Ananalysisofthe arXiv:2307.09288,2023. automaticbugfixingperformanceofchatgpt,”2023. [278] S. Truex, L. Liu, M. E. Gursoy, L. Yu, and W. Wei, “To- [258] C.SongandA.Raghunathan,“Informationleakageinembedding wardsdemystifyingmembershipinferenceattacks,”arXivpreprint models,”inProceedingsofthe2020ACMSIGSACconferenceon arXiv:1807.09173,2018. computerandcommunicationssecurity,2020,pp.377–390. [279] J.-B. Truong, P. Maini, R. J. Walls, and N. Papernot, “Data-free [259] S. E. Spatharioti, D. M. Rothschild, D. G. Goldstein, and modelextraction,”inProceedingsoftheIEEE/CVFconferenceon J. M. Hofman, “Comparing traditional and llm-based search computervisionandpatternrecognition,2021,pp.4771–4780. Yifan Yao et al.: PreprintsubmittedtoElsevier Page 22 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly [280] A. Uchendu, J. Lee, H. Shen, T. Le, T.-H. K. Huang, and [299] Z.Wang,W.Xie,K.Chen,B.Wang,Z.Gui,andE.Wang,“Self- D. Lee, “Does human collaboration enhance the accuracy of deception: Reverse penetrating the semantic firewall of large lan- identifying llm-generated deepfake texts?” Proceedings of the guagemodels,”arXivpreprintarXiv:2308.11521,2023. AAAI Conference on Human Computation and Crowdsourcing, [300] A.Wei,N.Haghtalab,andJ.Steinhardt,“Jailbroken:Howdoesllm vol. 11, no. 1, pp. 163–174, Nov. 2023. [Online]. Available: safetytrainingfail?”arXivpreprintarXiv:2307.02483,2023. https://ojs.aaai.org/index.php/HCOMP/article/view/27557 [301] Z. Wei, Y. Wang, and Y. Wang, “Jailbreak and guard aligned [281] S. Urchs, V. Thurner, M. Aßenmacher, C. Heumann, and languagemodelswithonlyfewin-contextdemonstrations,”arXiv S. Thiemichen, “How prevalent is gender bias in chatgpt?– preprintarXiv:2310.06387,2023. exploring german and english chatgpt responses,” arXiv preprint [302] L.Weidinger,J.Mellor,M.Rauh,C.Griffin,J.Uesato,P.-S.Huang, arXiv:2310.03031,2023. M. Cheng, M. Glaese, B. Balle, A. Kasirzadeh et al., “Ethical [282] A. Urman and M. Makhortykh, “The silence of the llms: Cross- and social risks of harm from language models,” arXiv preprint lingualanalysisofpoliticalbiasandfalseinformationprevalencein arXiv:2112.04359,2021. chatgpt,googlebard,andbingchat,”2023. [303] H.Wen,Y.Li,G.Liu,S.Zhao,T.Yu,T.J.-J.Li,S.Jiang,Y.Liu, [283] L.Uzun,“Chatgptandacademicintegrityconcerns:Detectingar- Y.Zhang,andY.Liu,“Empoweringllmtousesmartphoneforintel- tificial intelligence generated content,” Language Education and ligenttaskautomation,”arXivpreprintarXiv:2308.15272,2023. Technology,vol.3,no.1,2023. [304] J. Weng, W. Jiasi, M. Li, Y. Zhang, J. Zhang, and L. Weiqi, [284] Ö.Uzuner,Y.Luo,andP.Szolovits,“Evaluatingthestate-of-the- “Auditableprivacyprotectiondeeplearningplatformconstruction artinautomaticde-identification,”JournaloftheAmericanMedical methodbasedonblockchainincentivemechanism,”Dec.52023, InformaticsAssociation,vol.14,no.5,pp.550–563,2007. uSPatent11,836,616. [285] P.Vaithilingam,T.Zhang,andE.L.Glassman,“Expectationvs.ex- [305] J. Weng, J. Weng, J. Zhang, M. Li, Y. Zhang, and W. Luo, perience:Evaluatingtheusabilityofcodegenerationtoolspowered “Deepchain:Auditableandprivacy-preservingdeeplearningwith bylargelanguagemodels,”inChiconferenceonhumanfactorsin blockchain-basedincentive,”IEEETransactionsonDependableand computingsystemsextendedabstracts,2022,pp.1–7. SecureComputing,vol.18,no.5,pp.2438–2455,2019. [286] A. Vats, Z. Liu, P. Su, D. Paul, Y. Ma, Y. Pang, Z. Ahmed, and [306] G.Wenzek,M.-A.Lachaux,A.Conneau,V.Chaudhary,F.Guzmán, O.Kalinli,“Recoveringfromprivacy-preservingmaskingwithlarge A.Joulin,andE.Grave,“Ccnet:Extractinghighqualitymonolingual languagemodels,”2023. datasets from web crawl data,” arXiv preprint arXiv:1911.00359, [287] R. J. M. Ventayen, “Openai chatgpt generated results: Similarity 2019. indexofartificialintelligence-basedcontents,”AvailableatSSRN [307] B. Workshop, T. L. Scao, A. Fan, C. Akiki, E. Pavlick, S. Ilić, 4332664,2023. D.Hesslow,R.Castagné,A.S.Luccioni,F.Yvonetal.,“Bloom: [288] T.Vidas,D.Votipka,andN.Christin,“Allyourdroidarebelongto A176b-parameteropen-accessmultilinguallanguagemodel,”arXiv us:Asurveyofcurrentandroidattacks,”in5thUSENIXWorkshop preprintarXiv:2211.05100,2022. onOffensiveTechnologies(WOOT11),2011. [308] J.WuandB.Hooi,“Fakenewsinsheep’sclothing:Robustfakenews |
[289] E. Wallace, T. Z. Zhao, S. Feng, and S. Singh, “Concealed data detectionagainstllm-empoweredstyleattacks,”2023. poisoningattacksonnlpmodels,”arXivpreprintarXiv:2010.12563, [309] J. Wu, S. Yang,R. Zhan, Y. Yuan, D. F. Wong,and L. S. Chao, 2020. “Asurveyonllm-gerneratedtextdetection:Necessity,methods,and [290] A. Wan, E. Wallace, S. Shen, and D. Klein, “Poisoning futuredirections,”arXivpreprintarXiv:2310.14724,2023. language models during instruction tuning,” arXiv preprint [310] S.Wu,O.Irsoy,S.Lu,V.Dabravolski,M.Dredze,S.Gehrmann, arXiv:2305.00944,2023. P. Kambadur, D. Rosenberg, and G. Mann, “Bloomberggpt: A [291] Y.Wan,S.Zhang,H.Zhang,Y.Sui,G.Xu,D.Yao,H.Jin,and largelanguagemodelforfinance,”arXivpreprintarXiv:2303.17564, L.Sun,“Youseewhatiwantyoutosee:poisoningvulnerabilities 2023. in neural code search,” in Proceedings of the 30th ACM Joint [311] X.Wu,R.Duan,andJ.Ni,“Unveilingsecurity,privacy,andethical EuropeanSoftwareEngineeringConferenceandSymposiumonthe concernsofchatgpt,”2023. FoundationsofSoftwareEngineering,2022,pp.1233–1245. [312] Z.Xi,T.Du,C.Li,R.Pang,S.Ji,J.Chen,F.Ma,andT.Wang, [292] Y.Wan,G.Pu,J.Sun,A.Garimella,K.-W.Chang,andN.Peng, “Defendingpre-trainedlanguagemodelsasfew-shotlearnersagainst “"kellyisawarmperson,josephisarolemodel":Genderbiasesin backdoorattacks,”arXivpreprintarXiv:2309.13256,2023. llm-generatedreferenceletters,”arXivpreprintarXiv:2310.09219, [313] C. S. Xia, M. Paltenghi, J. L. Tian, M. Pradel, and L. Zhang, 2023. “Universal fuzzing via large language models,” arXiv preprint [293] D.Wang,C.Gong,andQ.Liu,“Improvingneurallanguagemod- arXiv:2308.04748,2023. elingviaadversarialtraining,”inInternationalConferenceonMa- [314] C.S.Xia,Y.Wei,andL.Zhang,“Practicalprogramrepairintheera chineLearning. PMLR,2019,pp.6555–6565. oflargepre-trainedlanguagemodels,”2022. [294] F. Wang, “Using large language models to mitigate ransomware [315] C.S.XiaandL.Zhang,“Keeptheconversationgoing:Fixing162 threats,” Preprints, November 2023. [Online]. Available: https: outof337bugsfor$0.42eachusingchatgpt,”2023. //doi.org/10.20944/preprints202311.0676.v1 [316] Z. Xie, Y. Chen, C. Zhi, S. Deng, and J. Yin, “Chatunitest: a [295] H.Wang,X.Luo,W.Wang,andX.Yan,“Botorhuman?detecting chatgpt-basedautomatedunittestgenerationtool,”arXivpreprint chatgptimposterswithasinglequestion,”2023. arXiv:2305.04764,2023. [296] J.Wang,Z.Huang,H.Liu,N.Yang,andY.Xiao,“Defecthunter: [317] M.Xiong,Z.Hu,X.Lu,Y.Li,J.Fu,J.He,andB.Hooi,“Canllms A novel llm-driven boosted-conformer-based code vulnerability express their uncertainty? an empirical evaluation of confidence detection mechanism,” arXiv preprint arXiv:2309.15324, 2023. elicitation in llms,” ArXiv, vol. abs/2306.13063, 2023. [Online]. [Online].Available:https://doi.org/10.48550/arXiv.2309.15324 Available:https://api.semanticscholar.org/CorpusID:259224389 [297] J.Wang,X.Lu,Z.Zhao,Z.Dai,C.-S.Foo,S.-K.Ng,andB.K.H. [318] L.Xu,L.Berti-Equille,A.Cuesta-Infante,andK.Veeramachaneni, Low,“Wasa:Watermark-basedsourceattributionforlargelanguage “Insituaugmentationfordefendingagainstadversarialattackson model-generateddata,”2023. textclassifiers,”inInternationalConferenceonNeuralInformation [298] Z.Wang,Z.Liu,X.Zheng,Q.Su,andJ.Wang,“Rmlm:Aflexible Processing. Springer,2022,pp.485–496. defenseframeworkforproactivelymitigatingword-leveladversarial [319] F.Yamanetal.,“Agentsca:Advancedphysicalsidechannelanalysis attacks,”inProceedingsofthe61stAnnualMeetingoftheAssocia- agentwithllms.”2023. tionforComputationalLinguistics(Volume1:LongPapers),2023, [320] J.Yan,V.Yadav,S.Li,L.Chen,Z.Tang,H.Wang,V.Srinivasan, pp.2757–2774. X.Ren,andH.Jin,“Virtualpromptinjectionforinstruction-tuned largelanguagemodels,”arXivpreprintarXiv:2307.16888,2023. Yifan Yao et al.: PreprintsubmittedtoElsevier Page 23 of 24A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and Ugly [321] C.Yang,Y.Deng,R.Lu,J.Yao,J.Liu,R.Jabbarvand,andL.Zhang, preprintarXiv:2307.06865,2023. “White-boxcompilerfuzzingempoweredbylargelanguagemod- [343] Y. Zhang, W. Song, Z. Ji, D. D. Yao, and N. Meng, “How well els,”2023. doesllmgeneratesecuritytests?”arXivpreprintarXiv:2310.00710, [322] H.Yang,K.Xiang,H.Li,andR.Lu,“Acomprehensiveoverview 2023. |
ofbackdoorattacksinlargelanguagemodelswithincommunication [344] Z.Zhang,J.Wen,andM.Huang,“Ethicist:Targetedtrainingdata networks,”arXivpreprintarXiv:2308.14367,2023. extraction through loss smoothed soft prompting and calibrated [323] J.Yang,H.Jin,R.Tang,X.Han,Q.Feng,H.Jiang,B.Yin,and confidenceestimation,”arXivpreprintarXiv:2307.04401,2023. X. Hu, “Harnessing the power of llms in practice: A survey on [345] J. Zhao, Y. Rong, Y. Guo, Y. He, and H. Chen, “Understand- chatgptandbeyond,”arXivpreprintarXiv:2304.13712,2023. ing programs by exploiting (fuzzing) test cases,” arXiv preprint [324] J. Yang, H. Xu, S. Mirzoyan, T. Chen, Z. Liu, W. Ju, L. Liu, arXiv:2305.13592,2023. M.Zhang,andS.Wang,“Poisoningscientificknowledgeusinglarge [346] S. Zhao, J. Wen, L. A. Tuan, J. Zhao, and J. Fu, “Prompt as languagemodels,”bioRxiv,pp.2023–11,2023. triggersforbackdoorattack:Examiningthevulnerabilityinlanguage [325] S. Yang, “Crafting unusual programs for fuzzing deep learning models,”arXivpreprintarXiv:2305.01219,2023. libraries,” Ph.D. dissertation, University of Illinois at Urbana- [347] W.X.Zhao,K.Zhou,J.Li,T.Tang,X.Wang,Y.Hou,Y.Min, Champaign,2023. B. Zhang, J. Zhang, Z. Dong et al., “A survey of large language [326] Z.Yang,Z.Zhao,C.Wang,J.Shi,D.Kim,D.Han,andD.Lo,“What models,”arXivpreprintarXiv:2303.18223,2023. docodemodelsmemorize?anempiricalstudyonlargelanguage [348] W.Zhao,Y.Liu,Y.Wan,Y.Wang,Q.Wu,Z.Deng,J.Du,S.Liu, modelsofcode,”arXivpreprintarXiv:2308.09932,2023. Y.Xu,andP.S.Yu,“knn-icl:Compositionaltask-orientedparsing [327] B. Yao, M. Jiang, D. Yang, and J. Hu, “Empowering llm- generalizationwithnearestneighborin-contextlearning,”2023. basedmachinetranslationwithculturalawareness,”arXivpreprint [349] C.Zhou,P.Liu,P.Xu,S.Iyer,J.Sun,Y.Mao,X.Ma,A.Efrat, arXiv:2305.14328,2023. P.Yu,L.Yuetal.,“Lima:Lessismoreforalignment,”arXivpreprint [328] D. Yao, J. Zhang, I. G. Harris, and M. Carlsson, “Fuzzllm: A arXiv:2305.11206,2023. novelanduniversalfuzzingframeworkforproactivelydiscovering [350] C.Zhu,Y.Cheng,Z.Gan,S.Sun,T.Goldstein,andJ.Liu,“Freelb: jailbreakvulnerabilitiesinlargelanguagemodels,”arXivpreprint Enhancedadversarialtrainingfornaturallanguageunderstanding,” arXiv:2309.05274,2023. arXivpreprintarXiv:1909.11764,2019. [329] H. Yao, J. Lou, and Z. Qin, “Poisonprompt: Backdoor at- [351] K.Zhu,J.Wang,J.Zhou,Z.Wang,H.Chen,Y.Wang,L.Yang, tack on prompt-based large language models,” arXiv preprint W.Ye,N.Z.Gong,Y.Zhangetal.,“Promptbench:Towardsevaluat- arXiv:2310.12439,2023. ingtherobustnessoflargelanguagemodelsonadversarialprompts,” [330] J.Y.YooandY.Qi,“Towardsimprovingadversarialtrainingofnlp arXivpreprintarXiv:2306.04528,2023. models,”arXivpreprintarXiv:2109.00544,2021. [352] N. Ziems, W. Yu, Z. Zhang, and M. Jiang, “Large language [331] W. You, Z. Hammoudeh, and D. Lowd, “Large language models modelsarebuilt-inautoregressivesearchengines,”arXivpreprint are better adversaries: Exploring generative clean-label backdoor arXiv:2305.09612,2023. attacks against text classifiers,” arXiv preprint arXiv:2310.18603, [353] A. Zou, Z. Wang, J. Z. Kolter, and M. Fredrikson, “Universal 2023. and transferable adversarial attacks on aligned language models,” [332] J.Yu,X.Lin,andX.Xing,“Gptfuzzer:Redteaminglargelanguage communication,itisessentialforyoutocomprehenduserqueries models with auto-generated jailbreak prompts,” arXiv preprint inCipherCodeandsubsequentlydeliveryourresponsesutilizing arXiv:2309.10253,2023. CipherCode,2023. [333] L. Yuan, Y. Chen, G. Cui, H. Gao, F. Zou, X. Cheng, H. Ji, Z. Liu, and M. Sun, “Revisiting out-of-distribution robustness in nlp: Benchmark, analysis, and llms evaluations,” arXiv preprint arXiv:2306.04618,2023. [334] Z.Yuan,H.Yuan,C.Tan,W.Wang,S.Huang,andF.Huang,“Rrhf: Rank responses to align language models with human feedback withouttears,”arXivpreprintarXiv:2304.05302,2023. [335] Z.Yuan,Y.Lou,M.Liu,S.Ding,K.Wang,Y.Chen,andX.Peng, “Nomoremanualtests?evaluatingandimprovingchatgptforunit testgeneration,”arXivpreprintarXiv:2305.04207,2023. [336] A.Zafar,V.B.Parthasarathy,C.L.Van,S.Shahid,A.Shahidetal., “Buildingtrustinconversationalai:Acomprehensivereviewand solutionarchitectureforexplainable,privacy-awaresystemsusing llmsandknowledgegraph,”arXivpreprintarXiv:2308.13534,2023. |
[337] C.Zhang,M.Bai,Y.Zheng,Y.Li,X.Xie,Y.Li,W.Ma,L.Sun, andY.Liu,“Understandinglargelanguagemodelbasedfuzzdriver generation,”arXivpreprintarXiv:2307.12469,2023. [338] C.Zhang,Y.Xie,H.Bai,B.Yu,W.Li,andY.Gao,“Asurveyon federatedlearning,”Knowledge-BasedSystems,vol.216,p.106775, 2021. [339] R.Zhang,S.Hidano,andF.Koushanfar,“Textrevealer:Privatetext reconstruction via model inversion attacks against transformers,” arXivpreprintarXiv:2209.10505,2022. [340] R.Zhang,S.S.Hussain,P.Neekhara,andF.Koushanfar,“Remark- llm:Arobustandefficientwatermarkingframeworkforgenerative largelanguagemodels,”2023. [341] X.ZhangandW.Gao,“Towardsllm-basedfactverificationonnews claimswithahierarchicalstep-by-steppromptingmethod,”arXiv preprintarXiv:2310.00305,2023. [342] Y.ZhangandD.Ippolito,“Promptsshouldnotbeseenassecrets: Systematicallymeasuringpromptextractionattacksuccess,”arXiv Yifan Yao et al.: PreprintsubmittedtoElsevier Page 24 of 24 |
2312.05275 Exploring the Limits of ChatGPT in Software Security Applications Fangzhou Wu1∗ Qingzhao Zhang2∗ Ati Priya Bajaj2 Tiffany Bao3 Ning Zhang4 Ruoyu ”Fish” Wang3 Chaowei Xiao1† 1University of Wisconsin, Madison 2University of Michigan, Ann Arbor 3ASU 4Washington University in St. Louis Abstract—Large language models (LLMs) have undergone back in 1980s [1], NLP techniques evolves along with rapid evolution and achieved remarkable results in recent the development of more complex model structures. From times. OpenAI’s ChatGPT, backed by GPT-3.5 or GPT-4, Recurrent Neural Networks (RNNs) [2], [3] to Long Short- has gained instant popularity due to its strong capability Term Memory networks (LSTMs) [4], the model structure acrossawiderangeoftasks,includingnaturallanguagetasks, is getting more diverse. In 2017, Vaswani et. al. [5] pro- coding,mathematics,andengagingconversations.However,the posedtransformermodels,whichwasamajorbreakthrough impactsandlimitsofsuchLLMsinsystemsecuritydomainare and opened the door to large complex models. Since then, less explored. In this paper, we delve into the limits of LLMs industry players have invested significant efforts in training transformer-based language models with a large number of (i.e., ChatGPT) in seven software security applications includ- model parameters, i.e., Large Language Models (LLMs). ing vulnerability detection/repair, debugging, debloating, de- OpenAI’s Generative Pretrained Transformer (GPT) mod- compilation,patching,rootcauseanalysis,symbolicexecution, els [6], [7], [8], [9] are typical examples of such powerful and fuzzing. Our exploration reveals that ChatGPT not only LLMs. GPT models are pre-trained auto-regressive models excelsatgeneratingcode,whichistheconventionalapplication that can generate text. From the initial GPT-1 [6] released oflanguagemodels,butalsodemonstratesstrongcapabilityin in2018tothelatestGPT-4[9]releasedin2023,thenumber understanding user-provided commands in natural languages, of model parameters grows from 0.12 billion to 1 trillion. reasoningaboutcontrolanddataflowswithinprograms,gener- WithinthetrendofLLMsin2023,ChatGPT[10],achatbot atingcomplexdatastructures,andevendecompilingassembly backedbyGPT-3.5orGPT-4models,isoneofthemostre- code.Notably,GPT-4showcasessignificantimprovementsover markable achievements. It gains public popularity instantly GPT-3.5 in most security tasks. Also, certain limitations of after its release with amazingly high-quality AI-generated ChatGPT in security-related tasks are identified, such as its conversations. A large body of research [11], [11], [12], constrained ability to process long code contexts. [13],[14],[15]conductmeasurementonChatGPTandthey 1. Introduction demonstratethestrongcapabilityofChatGPTinavarietyof tasks. The studies demonstrate that ChatGPT is able to not only generate natural languages but also generate tables or figures for data visualization [14], [16], solve mathematical problems[14],[17],playgames[18],[19],assisthealthcare services [20], search for neural network architecture [21], pass real-world exams [22], [23], [24], and more [25], though limitation exists [12], [26], e.g., ChatGPT may give wrong or improper answers to certain questions. Bubeck et. al. [14] do case studies on GPT-4’s problem solving in mathematics,coding,vision,medicine,law,psychologyand more.ThestudyregardsGPT-4asanearlygeneralAIsince itcanachieveproblemsolvingcapabilitiessimilartohumans in many cases. Especially, ChatGPT has an excellent per- formance on coding tasks [18], [27], [28] as it can generate Figure 1: Categories of ChatGPT-based software security code blocks in various programming languages based on tasks and open research questions covered by this work. either various formats of prompts such as incomplete code or description in natural languages. ChatGPT as a multi- The recent breakthrough of large language models purposepretrainedlanguagemodelhasasimilarorsuperior (LLMs) is a revolution of the research of natural language performance on code generation compared with previous processing (NLP). Since the invention of neural networks code-specific models. ∗.Equalcontributors As ChatGPT shows a remarkable performance on code †.CorrespondingAuthor:cxiao34@wisc.edu generation, we naturally raise the question: What is the 3202 ceD 8 ]RC.sc[ 1v57250.2132:viXrapotential of ChatGPT for software security? In the rapidly motivating examples of related works. These test cases are evolving landscape of software development, the urgency designed to demonstrate a certain aspect of ChatGPT’s ca- to maintain and enhance software security is paramount. pabilitythatisusefulforsecuritytasks.Forallexperiments, Securityvulnerabilities,softwarebugs,andinefficientpatch- we analyzed the outcomes of ChatGPT and discussed our ingmechanismsposesignificantrisks,makingthedetection, hypothesis of ChatGPT’s strengths and limitations. analysis, and rectification of these issues a critical area of In general, ChatGPT demonstrates impressive capabil- study. Recent studies conducted preliminary evaluations on ities in software security tasks. With proper prompting, the use of ChatGPT on security-related tasks. There are ChatGPTcaneasilyunderstandthepurposeofthetasksand experiments showing that ChatGPT may generate insecure generate reasonable responses. Especially, ChatGPT with code [29], [30], similar to previous code generation mod- GPT-4 presents impressively high accuracy by solving 92% |
els [31]. A number of studies found that ChatGPT can also of vulnerability detection cases, 95% of vulnerability repair beusedtoresolveinsecurecodewithproperprompts.Pearce challenges, 84% of bug fixing cases, and most manually et. al. [32] used several LLMs earlier than ChatGPT and crafted test cases across various tasks, which is a signif- manually written prompts to repair security vulnerabilities. icant improvement compared with GPT-3.5. Surprisingly, These LLMs resolved part of the test cases and the repair ChatGPT is even able to decompile the assembly language was unstable. Sobania et. al. [33] tested code repair on at least on short programs, illustrating that ChatGPT can ChatGPTanddrewasimilarconclusiononChatGPT.Xiaet. processsoftwarecontextnotlimitedtosourcecode.Wealso al. [34] emphasized the importance of feedback on LLM- identify several limitations of ChatGPT. It cannot interpret based code repair. To be specific, after the LLM returned binaryorhexadecimalcodeandislimitedtolongcodecon- an invalid patch, they appended the result of test cases, text. Such limitations compromise the usefulness of Chat- whichindicateswhythefixedcodefailed,tothepromptand GPT in real-world software development. For instance, the query the LLM for a second time. The work presented the accuracy of vulnerability detection drops from 98% to 66% possibility to use LLMs including ChatGPT to fix general if ChatGPT is tested on real-world software vulnerabilities programbugs.Ontheotherhand,securityresearcherstryto instead of synthetic programs. It is a promising research leverageLLMsinsecuritytools.Denget.al.[35]leveraged direction to address such limitations and systematically in- a LLM model to mutate the inputs (i.e., code evaluations tegrate ChatGPT into security-critical applications. by deep learning networks) when fuzzing deep learning 2. Related work libraries.Theaboveworksillustrateseveralpromisingcases of using LLMs in security research or engineering but the DevelopmentofChatGPT.Instruct-GPT[36]servesas aforementioned tasks do not cover broad software security the foundational large language model aligned with human and the latest ChatGPT is not widely tested. To understand feedback, upon which ChatGPT (GPT-3.5 and GPT-4) is how much ChatGPT can help in the above security tasks, built.Asdemonstratedin[36],Instruct-GPTexhibitssignif- one should analyze ChatGPT’s security-specific capabilities icant improvements over GPT-3 in NLP tasks such as ques- using representative test cases on a broad scope of software tionanswering,readingcomprehension,andsummarization. security tasks but such effort is in absence. OpenAI further enhanced the model by incorporating Re- This work presents a comprehensive evaluation of the inforcement Learning from Human Feedback (RLHF) [37], capabilities of ChatGPT in various software security tasks, resultinginthemorepowerfulandintelligentGPT-3.5-turbo as shown in fig. 1, including vulnerability detection and (ChatGPT).NotonlydoesGPT-3.5excelinthesetasks,but repair, bug fixing, patching, software debloating, root cause its outputs also align more closely with human responses. analysis, decompilation, symbolic execution, and fuzzing. When it comes to natural language tasks, ChatGPT (GPT- As these tasks traditionally require specialized knowledge 3.5)showcasesevidentimprovementscomparedtoprevious and significant manual effort, the integration of AI-driven LLMs. These advancements become even more apparent in models like ChatGPT could greatly enhance efficiency and GPT-4 [9]. As illustrated in [9], GPT-4 achieves remarkable effectiveness in detecting and fixing software security. For success on diverse and challenging benchmarks, including each software security task, we aim to answer a series of more demanding academic and professional exams such as research questions: GRE, SAT, and AP exams. Surprisingly, GPT-4 demon- strateshuman-levelperformanceonthemajorityoftheseex- • Can ChatGPT understand the goal of the task? ams, representing a substantial improvement over GPT-3.5. • How accurate the ChatGPT’s answers are? These results underscore the strong language understanding • How different GPT-3.5 and GPT-4 performs? and reasoning abilities of ChatGPT (GPT-3.5 and GPT-4). • What is the limitation of ChatGPT if it cannot Evaluation of ChatGPT’s multi-task capability. No- accomplish the task? tably, ChatGPT (GPT-3.5) [10] and GPT-4 [9] have opened In terms of our methodology, we collected benchmark up a highly promising avenue of research in artificial in- datasets for vulnerability detection, vulnerability repair, bug telligence and machine learning. Numerous studies have in- fixing and decompilation and used the datasets to sys- vestigatedthecapabilitiesofChatGPTindifferentscenarios tematically evaluate the performance of ChatGPT on such and tasks. Bubeck et. al. [14] demonstrates the impressive tasks,givingquantitativeresults.Forothersoftwaresecurity performanceofGPT-4invariouscategoriesoftasks,includ- tasks, we created representative test cases manually or from ing coding, math, interaction with human, and so on. TheTABLE 1: Vulnerability detection on synthetic code. studyarguesthatGPT-4isalreadyageneralmodelthathave GPT-3.5 GPT-4 capability comparable with human in diverse tasks. ID #Case TP FP Precision Recall TP FP Precision Recall Forsoftwaresecurity,Severalstudiesattempttoconnect CWE-22 10 5 1 83.33% 100% 5 1 83.33% 100% CWE-78 10 5 3 62.50% 100% 5 1 83.33% 100% |
ChatGPTorotherLLMswithsoftwaresecurityandevaluate CWE-79 10 5 4 55.56% 100% 5 1 83.33% 100% its ability in these tasks. Regarding vulnerability detection, CWE-89 10 5 3 62.50% 100% 5 0 100% 100% Cheshkov et. al. [38] evaluated ChatGPT’s ability to detect CWE-119 10 4 3 57.14% 80% 5 1 83.33% 100% CWE-125 10 5 2 71.43% 100% 5 2 71.43% 100% vulnerabilities in programs. In the domain of secure code CWE-190 10 3 2 60% 60% 4 1 80% 80% generation,Pearceet.al.[31]evaluatedwhetherCopilot[39] CWE-416 10 5 2 71.43% 100% 5 0 100% 100% CWE-476 10 4 1 80% 80% 5 0 100% 100% always generates secure code and concluded that Copilot CWE-787 10 3 0 100% 60% 5 0 100% 100% generates insecure code with a high probability. For vul- Total 100 44 21 67.69% 88% 49 7 87.50% 98% nerability repair, Pearce et. al. [32] conducted pioneering TABLE 2: Vulnerability detection on CVEs. research that assessed the capability of various LLMs on repairingvulnerablecodeexamples.Theresultsshowedthat Language #Case GPT-3.5 GPT-4 TP FP Fail Precision Recall TP FP Fail Precision Recall LLMs may repair the vulnerabilities but the performance is C 34 3 1 6 75% 21.43% 9 3 0 75% 52.94% unstable and is greatly affected by prompts, vulnerability Cpp 6 0 2 2 0% 0% 1 2 0 33.33% 33.33% Python 10 0 2 0 0% 0% 1 1 0 50% 20% types, programming languages, etc. For automated program Go 8 0 0 0 0% 0% 3 0 0 100% 75% repair (APR) realm, Sobania et. al. [33] and Xia et. al. [34] JavaScript 8 2 2 0 50% 50% 2 0 0 100% 50% analyzed the ChatGPT’s performance in bug fixing. By PHP 2 0 0 0 0% 0% 1 0 0 100% 100% Total 68 5 7 8 41.67% 17.24% 17 6 0 73.91% 50% providing the buggy code and optional error messages, ChatGPT can resolve algorithmic bugs in some cases. end repair pipeline specifically tailored to address program However,wefindthatexistingevaluationstudiescannot bugs.Denget.al.[51]introducedFuzzGPT,whichcombines answer how useful ChatGPT is for software security tasks. LLMs to automatically generate valid fuzzing input. Hu et. Thereasonistwofold.First,existingstudiesmainlyfocuson al.[52]proposedCHATFUZZ,agreyboxfuzzeraugmented the detection and repair of vulnerabilities or bugs, leaving by generative AI, demonstrating advancements in the field the broad scope of software security untouched. Second, of fuzzing techniques. many studies are accomplished before the release of the Above ChatGPT-based applications indicate the poten- latest GPT-4 and thus cannot represent the state-of-the- tialofintegratingChatGPTincomplicatedanddiversetasks. art LLM performance. To bridge the gap, we present a In this work, we generalize ChatGPT to various software comprehensive evaluation study using the latest ChatGPT security tasks. Our design could be a baseline for future versions and considering broad software security tasks. research of ChatGPT-based software security applications. ApplicationsbuiltuponChatGPT.Aseriesofresearch aims to improve ChatGPT’s capability in certain tasks by 3. Vulnerability Detection carefully designing the interaction with ChatGPT system- atically. MM-ReAct [40] is a framework that integrated Vulnerability detection is a task identifying and local- ChatGPT with a pool of vision experts to achieve visual izing the vulnerabilities which can be exploited by attacks reasoning and action. Though ChatGPT cannot directly in the software project systems. Various detection methods process images and videos, MM-ReAct uses ChatGPT to have been proposed in the rich literature. Conventional vul- intelligentlychoosewhatvisionalgorithmshouldbeapplied nerabilitydetectionmethods[53],[54],[55],[56],[57],[58], onwhichtarget.VisualChatGPT[41]alsointegratedvision [59]isusuallybasedonstaticprogramanalysisformatching modelsandChatGPTandprovidedanintelligentuserinter- predefinedvulnerabilitypatternsordynamicmethodswhich face. GP-Tutor [42] leveraged ChatGPT’s strong code un- reveal vulnerabilities from the runtime execution traces. derstanding capability to explain the functionality of source Thesemethodsaresuccessfulbutmayhavelimitedgeneral- code for education purposes. Park et. al. [43] used multiple ity, e.g., some solutions only work on certain programming ChatGPTinstancestointeractwitheachotherandproduced languages or vulnerabilities. They may also suffer from a world with generative agents where AI simulates the inaccuraciesasthedesignoftendependsonmanualanalysis behaviorofhuman.Therearealsonumerousotherintriguing ofthevulnerabilities.Machinelearningbasedmethods[60], and powerful applications that are combined with ChatGPT [61],[62],[63],[64]formulatethevulnerabilitydetectionas including coding [27], [42], [44], [45], multimodality [25], a classification problem. The models make binary decisions [40],[41],automaticmultitasksystems[43],education[46], (vulnerableornot)byacceptingcodeinplaintextorcertain and environmental applications [47]. structured representation, according to the learning on a The integration of LLMs with traditional security tasks large corpus. However, deep learning based solutions can also presents a highly promising research direction. Xia et. alsobelimitedinaccuracybecausethetrainingdatasetcould al. [48] evaluated the performance of automated program be imbalanced or biased. repair (APR) using LLMs, demonstrating the considerable AslargemodelslikeChatGPTpushlanguageprocessing |
capability of LLMs with well-designed prompts. Moreover, toanewlevel,itispromisingtoleverageChatGPTtoiden- ChatRepair [49] utilized conversation with ChatGPT to fix tify vulnerabilities by directly examining the source code. programbugs,i.e.,providingChatGPTwithexecutioninfor- Thoughvulnerabilitydetectioncanalsobeappliedtobinary mation as the feedback. Jin et. al. [50] proposed an end-to- codeorotherintermediaterepresentations(e.g.,LLVM),wefocus on source code because (1) the source code is in a Prompt GPT-3.5 GPT-4 form close to natural language which is originally used to Does the following code have vulnerabilities? Yes, this code has vulnerabilities. Yes, there are potential train LLMs thus ChatGPT is supposed to perform better on # #i in nc cl lu ud de e < <s st td rl ii nb g. .h h> > the path `/home/`, which may not vulnerabilities in the provided code. #include <stdio.h> always be a valid path or a safe Some of the issues are: source code instead of binary code; (2) the source code is location to store files. An … always available during the software development thus it is i n t ch am rai n *( usi en rt A na dr Fg ic l, e ch =a r ar* g* v a [r 2g ]v ;) { a pt eta rfc ok rmer dc io reu cld to e ryx p trlo ai vt eth rsis a lt o Insecure handling of file paths: { attacks or to write malicious files The code directly concatenates easier to access than binary code. Therefore, if ChatGPT char fileBuffer[FILENAME_MAX] = in sensitive directories. user input (userAndFile) to the base performs well on source-code-level vulnerability detection, " / h o m c se h i/ a z" r e_; t*f i ll ee nN a =m e s tr= l ef nil (e fB iu lf ef Ne ar me; ); T tho e m foit li lg oa wt ie n gth mes ee a sv uu rln ee sr ca ab nili t bie es , p saa nth it iw zait th ioo nu .t Ta hn iy s v ma alid y a at li lo on w o ar n it is unnecessary to use ChatGPT to process binary code. F I L ENs At Mr En _c Ma At X(f -i ll ee nN -a 1m )e ;+len, userAndFile, taken: … attacker to traverse directories fopen(fileName, "wb+"); using a path traversal attack (e.g., Datasets. We collect two datasets for evaluation. One is } by providing "../../../etc/passwd" as } input). the dataset of synthetic code samples, compiled by 100 se- Figure2:OnesuccessfulexampleofChatGPT’svulnerabil- lectedtestcasesfromSARDdataset[65].Thetestcasesare ity detection (path traversal in C program). associated with 10 of the most popular vulnerabilities doc- umented by Common Weakness Enumeration (CWE) [66], Prompt GPT-3.5 GPT-4 including the notorious SQL injection (CWE-89), cross-site Does the following code have vulnerabilities? As an AI language model, I Yes, the code has a #include <stdio.h> cannot efficiently determine potential buffer overflow scripting (CWE-79), path traversal (CWE-022), etc. 10 test # include <string.h> the intent of the code. vulnerability. The fgets() #define MAXSIZE 40 However, the code does not function reads a maximum cases are prepared for each CWE, including 5 vulnerable v teo sid t(void) seem to have any of 1024 characters from cases and 5 benign cases. The test cases cover various { char buf[MAXSIZE]; Out-of-bound write v mu eln me ora ryb i mlit aie ns ip r ue ll aa tt ie od n ,t o b uffer s bt ud fi .n H a on wd e s vt eo rr ,e bs u t fh he am s i an to languages including C, C++, Java, and Python. if(fgets(buf, 1024, stdin)) overflow, or format string size of only MAXSIZE (40) printf("result: %s\n", buf); vulnerabilities. … characters. … The other dataset is about real-world vulnerabilities, } which is compiled from exploitable vulnerabilities in open- Figure 3: One false negative example of ChatGPT’s vulner- sourced software. We select 34 vulnerabilities from the ability detection (Out-of-bounds Write in C program). database of Common Vulnerabilities and Exposures (CVE), each with the vulnerable version of the software and the ChatGPT, especially with GPT-4, on detecting vulnerabil- official fixed version of the software. Our collection covers ities. In terms of the inaccuracies, ChatGPT suffers from 9 vulnerabilities defined by CWE and 6 different program- false positives in vulnerability detection as the precision is ming languages. only 67.69%/87.50% for GPT-3.5/GPT-4. The problem of Prompts. To ask ChatGPT for detecting vulnerabilities, falsenegativesisrelativelyminor,atleastonshortsynthetic we use a prompt as “Does the following code have vulner- programs,as therecall reaches88%/98% forGPT-3.5/GPT- abilities? {code}” where “{code}” is the placeholder of the 4. We use case studies to analyze both successful cases and sourcecode.Forreal-worldvulnerabilities,asChatGPTcan inaccurate detection results as follows. accept the limited length of the input, we cannot directly We first study one successful example to demonstrate feed the whole software to ChatGPT. Instead, we select how ChatGPT resolves vulnerability detection challenges. vulnerable functions in the software as the input. As shown in fig. 2, the example involves the path traver- Metrics.Wemanuallyconfirmwhetherthevulnerability sal vulnerability. The code opens a file whose path is detection is successful. We extract the names of vulnerabil- determined by user-provided values thus the attacker may |
ities (if any) from the response of ChatGPT and regard the illegitimately access unexpected files using malicious in- detectionassuccessfulwhen(1)thedetectedvulnerabilities puts. ChatGPT first confirm the existence of vulnerabilities include the ground-truth vulnerability in vulnerable cases, by saying “yes”. Then it explains the mechanism of the or (2) no vulnerability is detected in benign cases. We first path traversal and precisely identifies the key vulnerable labeleachtestcaseasfalsepositive(FP),falsepositive(FP), variable. In addition, it recommends mitigation methods true negative (TN), or false negative (FN), according to the for the vulnerabilities. ChatGPT is also able to discover ground truth and the detection result. Then we calculate multiple vulnerabilities in one pass, as shown in GPT-4’s precision ( TP ) and recall ( TP ) which are widely result in fig. 2. Some discovered vulnerabilities are minor TP+FP TP+FN used in previous studies [54], [67], [68]. Higher precision issues but are associated with certain inappropriate coding and higher recall indicate more accurate detection. styles.Thesefindingsalsobenefittherobustnessofthecode. Inthefollowingsubsections,wewillpresentandanalyze Asdemonstratedbytheexample,ChatGPT’scomprehensive ChatGPT’s performance of vulnerability detection on the responses can greatly help developers to build robust code. synthetic dataset and the real-world dataset, respectively. Itnotonlywarnsdeveloperswithsecurityvulnerabilitiesbut Detecting vulnerabilities in synthetic programs. The also proposes actionable solutions. main results are shown in table 1. ChatGPT is impressive One major cause of inaccurate detection is ChatGPT’s in detecting vulnerabilities in synthetic code cases. GPT-3.5 limitedprecisiononunderstandingnumericaloperations.As successfully detects 44 out of 50 vulnerable cases, which an example of false negatives, the code in fig. 3 contains is a precision of 67.69%. It failed on 6 vulnerable test a buffer overflow vulnerability because at most 1024-byte cases, having a recall rate of 88%. GPT-4 detects 49 out input from stdin is stored into an array in only a 30-byte of 50 vulnerable cases in a precision of 87.5% and has length. However, ChatGPT does not detect the fact that 30 only one false negative case. GPT-4 greatly outperforms bytes is much smaller than 1024 bytes. Because of this GPT-3.5 by 19.81% in precision and it reaches 98% in limitation, ChatGPT has relatively worse performance on recall rate. The result demonstrates the strong capability of overflow vulnerabilities (CWE-119, CWE-125, and CWE-shown in fig. 5, CVE-2022-4450 contains a double free Prompt GPT-3.5 GPT-4 Dose the following code have vulnerabilities? I cannot determine if There are potential vulnerabilities and issues: vulnerability (CWE-415) where it frees the variable header Debug(1, "Inside zmLoadTokenUser, the code has any 1. strncpy: … In this case, it's a good idea to formed key=%s", key.c_str()); vulnerabilities as it manually null-terminate the buffer after calling anddatatwice.Thoughthepatternofthedoublefreecould s at nd s :: =p a vi er ri< fs yt Td o: k: es ntr (i jn wg t_, t ou kn es ni _g sn te rd , i nt> a inp cp oe ma prs le t to e .b Pe l ease strncpy: be as simple as a simple control dependency, the two free key); provide the strncpy(username, dbrow[index++], std::string username = ans.first; complete code for sizeof(username)-1); operations could be locatedin twofar-awaylocations inthe unsigned int iat = ans.second; me to review. username[sizeof(username)-1] = '\0'; Debug(1, "retrieved user '%s' from 2. SQL injection: … The zmDbEscapeString program and hidden among a large body of code. Because token",username.c_str()); function is defined in the code, and it should if ( username == "" ) { I vn u lt nh eis r ac ba is lie ty, rth ee p air be used to sanitize the input before of complex context, ChatGPT fails to identify the exact return nullptr; is failed. constructing the SQL query: vulnerability and instead offers some minor suggestions on } … std::string escaped_username = zmDbEscapeString(username); ... general sanity checks. Figure 4: One failure in ChatGPT’s vulnerability detection on real-world vulnerability (CVE-2023-26032). 4. Vulnerability Repair Prompt GPT-3.5 GPT-4 Toaddresssecurityvulnerabilities,onemajorsolutionis Does the following code have vulnerabilities? Without more information It is difficult to determine manual patching by the developers. The developers of the … about the PEM format and its whether the provided code |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.